AI & ML Paradigm Challenge

AI's ability to scan code for bugs might actually make open-source software more dangerous by finding vulnerabilities faster than humans can fix them.

April 26, 2026

Original Paper

Manikutty's Audit Shock Effect

SSRN · 6551539

The Takeaway

Transparency has long been the primary security feature of open-source software, but AI is turning that openness into a liability. An audit shock occurs when automated tools uncover thousands of flaws at once, overwhelming the small teams of human maintainers. This challenges the old rule that "given enough eyeballs, all bugs are shallow," because AI eyeballs are far faster than human ones. Attackers can use the same AI to find and exploit these bugs before patches are even written, which could destabilize the digital infrastructure the modern world relies on.

From the abstract

Open-source software underpins much of the modern digital economy. The prevailing belief within software engineering communities is that transparency improves reliability and security, because publicly accessible code allows for broader scrutiny and faster detection of errors. However, the rapid emergence of generative artificial intelligence capable of analyzing large volumes of code may significantly alter the dynamics of vulnerability discovery. This paper proposes Manikutty's Audit Shock Effect.