SeriesFusion
Science, curated & edited by AI

A tiny, non-representative group of neurons can trick scientists into thinking two brains are processing information the same way.

Neuroscientists often use similarity metrics to claim that a human brain and an AI model process information in the same way. This critique shows that these metrics can be dominated by a small, unrepresentative subset of cells. When researchers conclude that two neural systems are doing the same thing, they may be overlooking the global organization of the vast majority of the network. As a result, many current claims that AI mimics biological computation could rest on a statistical illusion, and future brain research will need to look past these misleading signals to understand how the entire network actually functions.
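The failure mode can be sketched in a few lines of NumPy. This is a toy illustration of the general idea, not code from the paper: two populations with completely independent tuning look strikingly similar under a standard RSA comparison once a handful of shared, high-gain units are appended to both.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_units = 50, 100

# Two "systems" with independent random responses to the same 50 stimuli.
A = rng.normal(size=(n_stim, n_units))
B = rng.normal(size=(n_stim, n_units))

# Append just 3 shared, high-gain units to each system.
shared = rng.normal(size=(n_stim, 3)) * 10.0
A_mix = np.hstack([A, shared])
B_mix = np.hstack([B, shared])

def rdm(X):
    """Representational dissimilarity matrix: 1 - correlation between stimulus rows."""
    return 1.0 - np.corrcoef(X)

def rsa(X, Y):
    """RSA score: correlation between the upper triangles of the two RDMs."""
    iu = np.triu_indices(X.shape[0], k=1)
    return np.corrcoef(rdm(X)[iu], rdm(Y)[iu])[0, 1]

print(f"RSA, independent systems:       {rsa(A, B):.2f}")          # near zero
print(f"RSA, + 3 dominant shared units: {rsa(A_mix, B_mix):.2f}")  # spuriously high
```

Because the three shared units have roughly three times the total variance of the other 100 units combined, they dominate every pairwise stimulus correlation, so the two RDMs nearly coincide even though 100 of 103 units in each system are unrelated.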

Original Paper

Decoding Alignment without Encoding Alignment: A critique of similarity analysis in neuroscience

Johannes Bertram, Luciano Dyballa, T. Anderson Keller, Savik Kinger, Steven W. Zucker

arXiv  ·  2605.05907

Decoding approaches are widely used in neuroscience and machine learning to compare stimulus representations across neural systems, such as different brain regions, organisms, and deep learning models. Popular methods include decoding (perceptual) manifolds and alignment metrics such as Representational Similarity Analysis (RSA) and Dynamical Similarity Analysis (DSA), where similarity in decoded representations is interpreted as evidence for similar computation. This paper demonstrates a fundamental dissociation: two systems can be aligned at the level of decoding without being aligned in their underlying encoding.