AI coding agents are creating a 'silent maintenance crisis' by ignoring observability and logging.
April 15, 2026
Original Paper
Do AI Coding Agents Log Like Humans? An Empirical Study
arXiv · 2604.09409
The Takeaway
We're focused on whether AI can write code that "works," but we're ignoring whether it writes code that can be "managed." This empirical study shows that AI agents consistently fail to add necessary logging, even when explicitly instructed to do so. In fact, human developers are quietly fixing 72.5% of the observability issues AI leaves behind. This "janitorial labor" is a hidden cost of AI dev tools that organizations need to start accounting for. Without human intervention, AI-generated codebases quickly become un-debuggable black boxes.
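To make the gap concrete, here is a minimal, hypothetical sketch (not taken from the paper) of the kind of retrofit the study describes: an agent ships a function that fails silently, and a human maintainer later adds log statements at the decision points. The function names and the `payments` logger are illustrative assumptions.

```python
import logging

logger = logging.getLogger("payments")  # illustrative logger name

# What an agent might ship: correct return values, zero observability.
def charge_bare(amount):
    if amount <= 0:
        return False  # silent failure: nothing in the logs to debug later
    return True

# The human "janitorial" fix: same logic, plus logs at each outcome.
def charge_logged(amount):
    if amount <= 0:
        logger.warning("charge rejected: non-positive amount %r", amount)
        return False
    logger.info("charge accepted: amount=%r", amount)
    return True
```

Both versions behave identically to their callers; only the second one leaves a trail an operator can follow when a charge is unexpectedly rejected in production.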
From the abstract
Software logging is essential for maintaining and debugging complex systems, yet it remains unclear how AI coding agents handle this non-functional requirement. While prior work characterizes human logging practices, the behaviors of AI coding agents and the efficacy of natural language instructions in governing them are unexplored. To address this gap, we conduct an empirical study of 4,550 agentic pull requests across 81 open-source repositories. We compare agent logging patterns against human …