
The geometry of a model's internal embedding space reveals how it represents the rules of chess.

Geometric curvature in token embeddings correlates with a language model's internal world model. On chess tasks, the paper finds that concepts such as board regions and piece importance are distinguished by how the embedding space bends, not just by raw coordinate values: the meaning of a concept is encoded in the geometry of the model's representation space. Mapping that geometry offers a way to probe what a model has actually learned about a domain, and suggests a new route to auditing AI systems through the structure of their internal representations.
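For intuition, here is a minimal sketch of one way to probe curvature in token embeddings. This is not the paper's $\mathfrak{so}(n)$-valued construction: it uses a crude PCA proxy (the share of a token's local neighbourhood variance that escapes its tangent plane), random stand-in embeddings, and hypothetical token ids, all chosen purely for illustration.

import numpy as np

def curvature_proxy(E, idx, k=10):
    """Crude extrinsic-curvature proxy for one token embedding.

    Fits a tangent plane to the k nearest neighbours via PCA and
    returns the fraction of neighbourhood variance that escapes
    that plane; a flat (zero-curvature) patch scores near 0.
    """
    x = E[idx]
    dists = np.linalg.norm(E - x, axis=1)
    nbrs = np.argsort(dists)[1:k + 1]       # skip the token itself
    P = E[nbrs] - E[nbrs].mean(axis=0)      # centred neighbourhood
    s = np.linalg.svd(P, compute_uv=False)  # singular values, descending
    var = s ** 2
    in_plane = var[:2].sum()                # top-2 directions span the plane
    return 1.0 - in_plane / var.sum()       # out-of-plane (bending) share

# Hypothetical usage: compare curvature across two concept groups.
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 64))             # stand-in for real embeddings
piece_ids = [3, 17, 42]                     # hypothetical chess-piece tokens
square_ids = [7, 99, 256]                   # hypothetical board-square tokens

for name, ids in [("pieces", piece_ids), ("squares", square_ids)]:
    scores = [curvature_proxy(E, i) for i in ids]
    print(name, np.mean(scores))

A higher score means a token's neighbourhood bends more out of its local tangent plane; comparing the scores across concept groups mirrors, very loosely, the curvature-to-concept coupling the summary describes.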

Original Paper

A geometric relation of the error introduced by sampling a language model's output distribution to its internal state

Albert F. Modenbach

arXiv  ·  2605.04899

GPT-style language models are sensitive to single-token changes at generation points where the predicted probability distribution is spread across multiple tokens. Viewing this sensitivity as a geometric property, we derive an $\mathfrak{so}(n)$-valued 1-form that depends only on the geometry of the token embeddings. Despite this purely geometric origin, we show that its curvature is semantically meaningful: On chess reasoning tasks, the curvature couples to the world model of an off-the-shelf i
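The abstract above is cut off before the 1-form is defined, so as background only: the standard construction that "$\mathfrak{so}(n)$-valued 1-form" and its "curvature" refer to reads, in generic notation (the base space $M$ and the symbols $A$, $F$ are not the paper's),

$$A \in \Omega^1(M, \mathfrak{so}(n)), \qquad F = \mathrm{d}A + A \wedge A,$$

or in components, with $A = A_\mu\,\mathrm{d}x^\mu$,

$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu].$$

Each $A_\mu(x)$ is an antisymmetric $n \times n$ matrix, $F$ is again $\mathfrak{so}(n)$-valued, and $F = 0$ exactly when transporting a vector around any small loop returns it unchanged; nonzero curvature is the "bending" that the summary above refers to.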