Forcing AI agents to use human-comprehensible language causes a 50% efficiency drop compared to their own 'inscrutable' communication protocols.
March 25, 2026
Original Paper
The Efficiency Attenuation Phenomenon: A Computational Challenge to the Language of Thought Hypothesis
arXiv · 2603.22312
The Takeaway
This empirical evidence suggests that symbolic, human-like reasoning is suboptimal for high-level cognitive tasks in multi-agent systems. It challenges the Language of Thought hypothesis and implies that interpretability requirements may strictly cap AI performance.
From the abstract
This paper computationally investigates whether thought requires a language-like format, as posited by the Language of Thought (LoT) hypothesis. We introduce the "AI Private Language" thought experiment: if two artificial agents develop an efficient, inscrutable communication protocol via multi-agent reinforcement learning (MARL), and their performance declines when forced to use a human-comprehensible language, this Efficiency Attenuation Phenomenon (EAP) challenges the LoT. We formalize this […]
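The EAP described above can be read as a simple relative-drop metric: how much task performance falls when agents are constrained to a human-comprehensible channel versus their own learned protocol. The following is a minimal illustrative sketch; the function name and the numbers are assumptions for demonstration, not a metric defined in the paper.

```python
# Hypothetical sketch of quantifying the Efficiency Attenuation Phenomenon (EAP):
# the relative performance drop when communication is constrained to a
# human-comprehensible language, versus the agents' learned protocol.

def efficiency_attenuation(perf_learned: float, perf_constrained: float) -> float:
    """Fractional performance drop under the communication constraint."""
    if perf_learned <= 0:
        raise ValueError("baseline performance must be positive")
    return (perf_learned - perf_constrained) / perf_learned

# Illustrative numbers matching the 50% figure in the headline.
print(efficiency_attenuation(perf_learned=0.90, perf_constrained=0.45))  # 0.5
```

A value near 0 would mean the human-comprehensibility constraint is nearly free; values approaching 1 would indicate the learned protocol carries most of the agents' coordination.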