A single neuron in your hippocampus can represent multiple unrelated concepts, such as a specific celebrity and a favorite food, at the same time.
Individual brain cells in the hippocampus appear to compress information using high-dimensional superposition. Older models held that each neuron was dedicated to a single specific person or object. These multitasking neurons instead suggest the brain uses a compression strategy strikingly similar to one identified inside large language models: multiple memories are overlaid within the same physical hardware, maximizing storage capacity. In effect, the brain behaves less like a filing cabinet with one drawer per concept and more like a system that folds many representations into a shared space.
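The core idea behind superposition can be sketched in a few lines of code. The toy model below is illustrative only, not the paper's method: it stores more "concepts" than there are neurons by assigning each concept a random direction in the population's activity space. Because random vectors in high dimensions are nearly orthogonal, a sparse set of active concepts can still be read back out, even though every neuron participates in many concepts at once. All names and parameter values here are assumptions chosen for the demonstration.

```python
import numpy as np

# Toy sketch of superposition: 200 concepts stored across only 50 neurons.
# Each concept gets a random unit direction; in high dimensions these
# directions are nearly orthogonal, so sparse combinations remain decodable.
rng = np.random.default_rng(0)
n_neurons = 50    # dimensionality of the population activity
n_concepts = 200  # more concepts than neurons: they must share hardware

directions = rng.standard_normal((n_concepts, n_neurons))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Activate a sparse pair of concepts (say, "celebrity" and "food").
active = [3, 117]
population_activity = directions[active].sum(axis=0)

# Decode by projecting the shared activity onto every concept direction;
# active concepts score near 1, inactive ones score near 0 (plus noise).
scores = directions @ population_activity
recovered = sorted(np.argsort(scores)[-len(active):].tolist())
print(recovered)
```

With this fixed seed the two active concepts come back out as the top-scoring directions, which is the sense in which one set of neurons "multitasks" across many unrelated features. Note that every neuron here contributes to every concept; polysemanticity is the rule, not the exception, in such a code.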
Polysemanticity in human hippocampal neurons
bioRxiv · 10.64898/2026.05.02.722435
To comprehend language, the brain must navigate a high-dimensional semantic landscape while seamlessly contextualizing meaning. Inspired by recent advances in the mechanistic interpretability of large language models (LLMs), we hypothesized that the brain utilizes polysemanticity, a coding strategy wherein individual neurons represent multiple semantically unrelated features through high-dimensional superposition (Elhage et al., 2022; Olah et al., 2020). We recorded single-unit activity from the