AI & ML Nature Is Weird

Transformers actually suffer from the same 'forgetting' and interference bugs as the human brain, despite having perfect digital memory.

April 14, 2026

Original Paper

Human-like Working Memory Interference in Large Language Models

Hua-Dong Xiong, Li Ji-An, Jiaqi Huang, Robert C. Wilson, Kwonjoon Lee, Xue-Xin Wei

arXiv · 2604.09670

The Takeaway

Even with direct access to the entire context, LLMs encode items in entangled representations that must be actively suppressed to recall the right one. This mirrors biological cognitive limits, suggesting that interference is an inherent property of learning-based attention systems rather than a hardware flaw.
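The paper's exact task designs aside, this kind of interference is classically probed with N-back-style recall, where a model must say whether the current item matches the one seen N steps back, and "lure" items from earlier in the sequence tempt it into errors. A minimal sketch of such a probe (the function names and parameters here are illustrative, not from the paper):

```python
import random

def make_nback_trials(n_back, length, vocab, lure_rate=0.3, seed=0):
    """Generate an item sequence plus ground-truth match labels for an
    n-back working-memory probe. A trial is a 'match' when the current
    item equals the item presented n_back steps earlier."""
    rng = random.Random(seed)
    seq = [rng.choice(vocab) for _ in range(n_back)]
    labels = [False] * n_back  # the first n items can never be matches
    for _ in range(length - n_back):
        if rng.random() < lure_rate:
            seq.append(seq[-n_back])  # force a match trial
        else:
            # pick any item except the one that would create a match
            seq.append(rng.choice([c for c in vocab if c != seq[-n_back]]))
        labels.append(seq[-1] == seq[-1 - n_back])
    return seq, labels

def score(predictions, labels):
    """Fraction of trials where the model's match/no-match answers agree
    with ground truth; interference shows up as errors on lure trials."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)
```

Feeding each prefix of `seq` to an LLM as a prompt and scoring its match/no-match answers against `labels` gives an interference curve: accuracy that degrades as `n_back` grows, even though every item is verbatim in the context window.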

From the abstract

Intelligent systems must maintain and manipulate task-relevant information online to adapt to dynamic environments and changing goals. This capacity, known as working memory, is fundamental to human reasoning and intelligence. Despite having on the order of 100 billion neurons, both biological and artificial systems exhibit limitations in working memory. This raises a key question: why do large language models (LLMs) show such limitations, given that transformers have full access to prior contex