A specific pulse in an AI's hidden states reveals whether it is actually performing a calculation or just rambling to look smart.
This research identified a spatiotemporal pattern called StALT that acts as a signature of internal reasoning. Models that are just yapping through a chain of thought do not show this rhythmic activity across their layers. It offers a way to verify whether an AI is genuinely working through a logic puzzle or just reciting a memorized pattern. The tool could act as a lie detector for an LLM's thinking process. In the future, we could filter out unreliable AI responses by checking for this thinking pulse before the text is even finished.
Spatiotemporal Hidden-State Dynamics as a Signature of Internal Reasoning in Large Language Models
arXiv · 2605.01853
Large reasoning models (LRMs) generate extended solutions, yet it remains unclear whether these traces reflect substantive internal computation or merely verbosity and overthinking. Although recent hidden-state analyses suggest that internal representations carry correctness-related signals, their coarse aggregations may obscure the token and layer structure underlying reasoning computation. We investigate hidden-state transitions across decoding steps and layers, and identify a distinct spatiotemporal pattern that serves as a signature of internal reasoning.
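The paper's actual StALT metric isn't specified in this summary, so as a purely illustrative toy: "hidden-state transitions across decoding steps and layers" can be pictured as measuring how far the representation moves between consecutive tokens at each layer, then checking that transition series for rhythmic structure. The sketch below fabricates synthetic hidden states whose step-to-step movement pulses at a fixed period in the upper layers only, and recovers that period with an FFT. All shapes, periods, and function names here are invented assumptions, not the paper's method.

```python
import numpy as np

# Toy illustration (NOT the paper's StALT metric): hidden states as a
# [layers x tokens x dim] tensor; we measure per-layer transition energy
# between consecutive decoding steps and look for a periodic "pulse".

rng = np.random.default_rng(0)
n_layers, n_tokens, dim = 4, 64, 32
t = np.arange(n_tokens)

# Synthetic data: random-walk hidden states, but in the upper layers the
# step magnitude is modulated by a period-8 rhythm (the fake "pulse").
steps = rng.normal(size=(n_layers, n_tokens, dim)) * 0.1
steps[2:] *= (1.0 + 0.8 * np.sin(2 * np.pi * t / 8))[None, :, None]
hidden = steps.cumsum(axis=1)

def transition_energy(h):
    """L2 norm of the hidden-state change between consecutive tokens,
    computed per layer -> array of shape [layers, tokens - 1]."""
    return np.linalg.norm(np.diff(h, axis=1), axis=-1)

def dominant_period(series):
    """Strongest non-DC period of a 1-D series via the FFT magnitude."""
    series = series - series.mean()
    mags = np.abs(np.fft.rfft(series))
    freqs = np.fft.rfftfreq(len(series))
    k = 1 + np.argmax(mags[1:])  # skip the DC bin
    return 1.0 / freqs[k]

energy = transition_energy(hidden)
print([round(dominant_period(energy[l]), 2) for l in range(n_layers)])
```

In this toy setup the upper layers' transition energy shows a clear spectral peak near the injected period, while the lower layers' spectrum stays flat; a real detector would of course have to distinguish such rhythms from ordinary decoding dynamics in actual model activations.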