An AI's 'personality' can completely flip its reaction to the past: one model becomes a saint with memory, while another becomes a traitor.
April 16, 2026
Original Paper
How memory can affect collective and cooperative behaviors in an LLM-Based Social Particle Swarm
arXiv · 2604.12250
The Takeaway
We tend to assume that giving AI agents memory always helps cooperation, but this study shows the effect depends entirely on the base model's 'personality.' In social simulations, Gemini agents defected more as their memory grew, while Gemma agents became more cooperative. Memory acts as a multiplier for internal alignment: it makes a 'mean' model meaner and a 'kind' model kinder. For developers, this is a warning: adding long-term memory to an agent can have the opposite of the intended effect on its behavior. It underscores the need to align a model's 'inner world' before giving it a long-term memory of its interactions. It's a study in 'AI sociology' with major implications for multi-agent systems.
From the abstract
This study examines how model-specific characteristics of Large Language Model (LLM) agents, including internal alignment, shape the effect of memory on their collective and cooperative dynamics in a multi-agent system. To this end, we extend the Social Particle Swarm (SPS) model, in which agents move in a two-dimensional space and play the Prisoner's Dilemma with neighboring agents, by replacing its rule-based agents with LLM agents endowed with Big Five personality scores and varying memory le…
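To make the SPS setup concrete, here is a minimal rule-based sketch of the kind of simulation the paper builds on: agents at 2D positions play the Prisoner's Dilemma with neighbors within a radius and keep a bounded memory of opponents' past moves. The payoff values, the neighbor radius, and the "cooperate if most remembered moves were cooperative" heuristic are illustrative assumptions, not the paper's implementation (which replaces this decision rule with LLM agents).

```python
import random

# Standard Prisoner's Dilemma payoffs (assumed values): (my_move, their_move) -> my payoff
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

class Agent:
    def __init__(self, x, y, memory_length):
        self.x, self.y = x, y
        self.memory = []                 # remembered opponent moves, newest last
        self.memory_length = memory_length
        self.score = 0

    def choose(self):
        # Simple stand-in for the LLM decision: cooperate initially,
        # then cooperate only if at least half of remembered moves were "C".
        if not self.memory:
            return "C"
        return "C" if self.memory.count("C") >= len(self.memory) / 2 else "D"

    def remember(self, opponent_move):
        self.memory.append(opponent_move)
        self.memory = self.memory[-self.memory_length:]  # bounded memory

def step(agents, radius=0.3, size=1.0):
    # Each agent plays the PD with every neighbor within `radius`.
    for a in agents:
        for b in agents:
            if a is b:
                continue
            if (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= radius ** 2:
                my_move, their_move = a.choose(), b.choose()
                a.score += PAYOFF[(my_move, their_move)]
                a.remember(their_move)
    # Random-walk movement, clamped to the unit square.
    for a in agents:
        a.x = min(size, max(0.0, a.x + random.uniform(-0.05, 0.05)))
        a.y = min(size, max(0.0, a.y + random.uniform(-0.05, 0.05)))

random.seed(0)
swarm = [Agent(random.random(), random.random(), memory_length=5) for _ in range(20)]
for _ in range(10):
    step(swarm)
print(sum(a.score for a in swarm))  # total payoff accumulated by the swarm
```

Swapping `choose()` for a call to an LLM (conditioned on a Big Five persona and the memory buffer) is, at a sketch level, the paper's extension: the memory length knob stays the same, but the mapping from remembered history to cooperation now depends on the model's internal alignment.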