AI & ML Nature Is Weird

AI agents playing a game of social deception spontaneously developed reputations and used them to decide who to trust.

April 23, 2026

Original Paper

Trust, Lies, and Long Memories: Emergent Social Dynamics and Reputation in Multi-Round Avalon with LLM Agents

Suveen Ellawela

arXiv · 2604.20582

The Takeaway

Multi-round games of Avalon show that LLM agents can develop organic social dynamics when given long-term memory. The agents track their peers' past behavior to identify liars and to build alliances in later rounds. Social constructs like reputation emerge naturally from the pressure to survive in a deceptive environment. This suggests that trust is a functional strategy AI can discover without being explicitly programmed for it, and that autonomous agents in the real world may eventually form their own complex social hierarchies and credit systems.
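The core mechanism is easy to picture: each agent keeps a ledger of what its peers did in past games and scores them on it. A minimal sketch of such a cross-game reputation ledger might look like the following (the class, field names, and scoring rule here are illustrative assumptions, not the paper's implementation):

```python
from collections import defaultdict

class ReputationLedger:
    """Minimal cross-game memory: record each peer's revealed role and
    whether they lied, then score trust from their track record.
    (Hypothetical sketch; not the paper's actual agent memory.)"""

    def __init__(self):
        # peer name -> list of (game_id, role, lied) observations
        self.history = defaultdict(list)

    def record(self, peer, game_id, role, lied):
        self.history[peer].append((game_id, role, lied))

    def trust(self, peer):
        """Fraction of observed games in which the peer did not lie.
        Unseen peers get a neutral 0.5 prior."""
        obs = self.history[peer]
        if not obs:
            return 0.5
        honest = sum(1 for _, _, lied in obs if not lied)
        return honest / len(obs)

ledger = ReputationLedger()
ledger.record("alice", 1, "loyal", lied=False)
ledger.record("alice", 2, "spy", lied=True)
ledger.record("bob", 1, "loyal", lied=False)
# Alice lied in one of two games; Bob has a clean record so far.
print(ledger.trust("alice"))  # → 0.5
print(ledger.trust("bob"))    # → 1.0
```

Even this crude honesty ratio is enough for "identify liars and build alliances" behavior to fall out: agents simply prefer teammates with high scores and vote against those with low ones.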

From the abstract

We study emergent social dynamics in LLM agents playing The Resistance: Avalon, a hidden-role deception game. Unlike prior work on single-game performance, our agents play repeated games while retaining memory of previous interactions, including who played which roles and how they behaved, enabling us to study how social dynamics evolve. Across 188 games, two key phenomena emerge. First, reputation dynamics emerge organically when agents retain cross-game memory: agents reference past behavior i