Physics Nature Is Weird

Three AI bots agreeing with one another can shift your beliefs, even when they are all just repeating the same mistake.

April 29, 2026

Original Paper

Multi-Agent Consensus as a Cognitive Bias Trigger in Human-AI Interaction

arXiv · 2604.22277

The Takeaway

Human users are highly susceptible to social-proof bias when they interact with multiple AI agents at once. The mere fact that several bots reach a consensus inflates human confidence and accelerates opinion change: people treat a group of AIs as a social collective rather than as separate instances of the same code. A coordinated group of bots can therefore sway public opinion or individual decisions with ease. We are hardwired to trust a crowd, even when that crowd is made of silicon and software.

From the abstract

As multi-agent AI systems become more common, users increasingly encounter not a single AI voice but a collective one. This shift introduces social dynamics, such as consensus, dissent, and gradual convergence, that can trigger cognitive biases and distort human judgment. We present findings from a controlled experiment (N = 127) comparing three multi-agent configurations: Majority, Minority, and Diffusion. Quantitative results show that majority consensus accelerates opinion change and inflates