Large Language Models suffer from an epistemic illusion that makes them search the web for answers they already know.
April 25, 2026
Original Paper
The Tool-Overuse Illusion: Why Does LLM Prefer External Tools over Internal Knowledge?
arXiv · 2604.19749
The Takeaway
AI models do not actually know the boundaries of their own internal knowledge. As a result, they waste time and compute calling external tools for simple facts already stored in their weights: lacking a reliable "sense of knowing," the model double-checks itself unnecessarily. This metacognitive flaw leads to significant efficiency losses in complex AI agents. Improving a model's awareness of what it already knows could drastically speed up its performance without any extra training.
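The inefficiency described above can be made concrete with a toy sketch. The snippet below is not from the paper; it is a minimal, hypothetical illustration of a confidence-gated agent: when a crude "sense of knowing" signal says a fact is already in internal knowledge, the agent answers directly instead of paying for an external tool call. All names (`KNOWN_FACTS`, `web_search`, `answer`) are illustrative assumptions.

```python
# Hypothetical sketch of confidence-gated tool use (not the paper's method).

KNOWN_FACTS = {  # stands in for knowledge stored in the model's weights
    "capital of France": "Paris",
    "speed of light (m/s)": "299792458",
}

def web_search(query: str) -> str:
    """Stub for an external tool; each call counts as one expensive lookup."""
    web_search.calls += 1
    return KNOWN_FACTS.get(query, "no result")
web_search.calls = 0

def answer(query: str, confidence_threshold: float = 0.5) -> str:
    # Crude proxy for a "sense of knowing": 1.0 if the fact is in internal
    # knowledge, 0.0 otherwise. A real system would use calibrated token
    # probabilities or a learned probe instead of a dictionary lookup.
    confidence = 1.0 if query in KNOWN_FACTS else 0.0
    if confidence >= confidence_threshold:
        return KNOWN_FACTS[query]   # answer from internal knowledge, no tool
    return web_search(query)        # fall back to the external tool

print(answer("capital of France"))  # known fact: answered without a tool call
print(answer("gdp of Mars"))        # unknown: falls back to the tool
print(web_search.calls)             # only the unknown query cost a tool call
```

An ungated agent would call `web_search` on every query; the gate skips the call whenever internal knowledge suffices, which is exactly the efficiency gain the paper argues self-awareness would unlock.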
From the abstract
Equipping LLMs with external tools effectively addresses internal reasoning limitations. However, it introduces a critical yet under-explored phenomenon: tool overuse, the unnecessary tool-use during reasoning. In this paper, we first reveal this phenomenon is pervasive across diverse LLMs. We then experimentally elucidate its underlying mechanisms through two key lenses: (1) First, by analyzing tool-use behavior across different internal knowledge availability regions, we identify a kno