When you get a big group of AI bots together, they eventually act like a lazy office: two or three do all the work while everyone else just watches.
April 6, 2026
Original Paper
Do Agent Societies Develop Intellectual Elites? The Hidden Power Laws of Collective Cognition in LLM Multi-Agent Systems
arXiv · 2604.02674
The Takeaway
The paper shows that collective AI reasoning follows the same heavy-tailed power laws as human systems, regardless of the underlying models or frameworks used. This suggests that the emergence of intellectual elites may be an inevitable property of any complex communication network.
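To build intuition for how a heavy-tailed "elite" can emerge without anyone designing it in, here is a minimal toy simulation. It is not the paper's model: it just runs a generic rich-get-richer (preferential attachment) process, where each coordination event is attributed to an agent with probability proportional to its past contributions, and shows that a few agents end up doing most of the work.

```python
import random

def simulate_contributions(n_agents=100, n_events=10_000, seed=0):
    """Toy rich-get-richer process (hypothetical illustration, not the
    paper's method): each event goes to an agent with probability
    proportional to its past contribution count, which is known to
    produce heavy-tailed participation profiles."""
    rng = random.Random(seed)
    counts = [1] * n_agents  # seed every agent with one pseudo-contribution
    for _ in range(n_events):
        # pick an agent, weighted by how much it has already contributed
        idx = rng.choices(range(n_agents), weights=counts, k=1)[0]
        counts[idx] += 1
    return sorted(counts, reverse=True)

counts = simulate_contributions()
top_share = sum(counts[:5]) / sum(counts)
bottom_share = sum(counts[-5:]) / sum(counts)
print(f"Top 5 of 100 agents: {top_share:.0%} of contributions")
print(f"Bottom 5 of 100 agents: {bottom_share:.0%} of contributions")
```

Under a perfectly egalitarian society, any 5 of 100 agents would account for about 5% of contributions; the feedback loop alone is enough to concentrate work far beyond that baseline, which is the qualitative pattern the paper reports at scale.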
From the abstract
Large Language Model (LLM) multi-agent systems are increasingly deployed as interacting agent societies, yet scaling these systems often yields diminishing or unstable returns, the causes of which remain poorly understood. We present the first large-scale empirical study of coordination dynamics in LLM-based multi-agent systems, introducing an atomic event-level formulation that reconstructs reasoning as cascades of coordination. Analyzing over 1.5 million interactions across tasks, topologies,