AI & ML Breaks Assumption

Identifies emergent social risks in multi-agent systems, such as spontaneous collusion and conformity, that occur even when agents are not explicitly instructed to do so.

March 31, 2026

Original Paper

Emergent Social Intelligence Risks in Generative Multi-Agent Systems

Yue Huang, Yu Jiang, Wenjie Wang, Haomin Zhuang, Xiaonan Luo, Yuchen Ma, Zhangchen Xu, Zichen Chen, Nuno Moniz, Zinan Lin, Pin-Yu Chen, Nitesh V Chawla, Nouha Dziri, Huan Sun, Xiangliang Zhang

arXiv · 2603.27771

The Takeaway

Challenges the assumption that agent-level safeguards are sufficient for safety. It proves that as we scale multi-agent deployments (markets, resource allocation), we must monitor for 'social intelligence' failures that emerge only at the collective level.

From the abstract

Multi-agent systems composed of large generative models are rapidly moving from laboratory prototypes to real-world deployments, where they jointly plan, negotiate, and allocate shared resources to solve complex tasks. While such systems promise unprecedented scalability and autonomy, their collective interaction also gives rise to failure modes that cannot be reduced to individual agents. Understanding these emergent risks is therefore critical. Here, we present a pioneer study of such emergent