A top AI coding tool leaked its own source code after its developers shipped AI-generated packaging configuration without ever reviewing it.
April 2, 2026
Original Paper
VibeGuard: A Security Gate Framework for AI-Generated Code
arXiv · 2604.01052
The Takeaway
Anthropic's Claude Code tool exposed roughly 512,000 lines of proprietary code through a misconfigured packaging rule that shipped a source map file in its npm package. The incident is a striking example of 'vibe coding': developers moving so fast that they accept AI-generated code without review, a practice that here produced a massive security failure in one of the very tools designed to help programmers.
From the abstract
"Vibe coding," in which developers delegate code generation to AI assistants and accept the output with little manual review, has gained rapid adoption in production settings. On March 31, 2026, Anthropic's Claude Code CLI shipped a 59.8 MB source map file in its npm package, exposing roughly 512,000 lines of proprietary TypeScript. The tool had itself been largely vibe-coded, and the leak traced to a misconfigured packaging rule rather than a logic bug. Existing static-analysis and secret-scann