AI & ML Paradigm Challenge

Step-by-step AI reasoning is a side effect of how the machine solves a problem, not the mechanism itself.

April 20, 2026

Original Paper

LLM Reasoning Is Latent, Not the Chain of Thought

Wenshuo Wang

arXiv · 2604.15726

The Takeaway

AI reasoning happens in latent-state trajectories that exist independently of the text the model prints to the screen. The visible Chain of Thought is a surface trace of the internal computation, not the engine driving the answer. This position challenges the popular belief that forcing a model to write out its steps is what makes it smart: the model has already formed the logic in its hidden layers before the first word of the explanation appears. On this view, improving AI performance will require manipulating these hidden trajectories rather than just asking the model to think out loud.
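To make the latent-trajectory claim concrete, here is a minimal logit-lens sketch. The logit lens is a standard interpretability readout, not this paper's own method, and the model name and prompt below are illustrative assumptions. It projects each layer's hidden state at the last prompt token through the unembedding matrix, showing at which depth the eventual answer already dominates, before a single explanation token is emitted.

```python
# Logit-lens sketch: read the latent-state trajectory layer by layer and see
# when the model's eventual answer becomes decodable, before any chain-of-
# thought text is generated. "gpt2" and the prompt are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; the paper does not prescribe one
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Q: What is the capital of France? A:"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is the latent trajectory: the embedding layer's output
# plus one hidden state per transformer block.
ln_f = model.transformer.ln_f  # GPT-2's final LayerNorm
unembed = model.lm_head        # tied unembedding matrix

for depth, h in enumerate(out.hidden_states):
    vec = h[0, -1]                    # hidden state of the last prompt token
    logits = unembed(ln_f(vec))       # project the latent state into vocab space
    top = tok.decode(logits.argmax().item())
    print(f"layer {depth:2d}: top next token = {top!r}")
```

In published logit-lens experiments, plausible continuations often surface well before the final layer; that is the flavor of evidence the latent-trajectory view appeals to.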

From the abstract

This position paper argues that large language model (LLM) reasoning should be studied as latent-state trajectory formation rather than as faithful surface chain-of-thought (CoT). This matters because claims about faithfulness, interpretability, reasoning benchmarks, and inference-time intervention all depend on what the field takes the primary object of reasoning to be. We ask what that object should be once three often-confounded factors are separated and formalize three competing hypotheses: …
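The abstract's mention of inference-time intervention is where manipulating hidden trajectories becomes an engineering question. Below is a minimal activation-addition sketch under loudly stated assumptions: the steering vector is a random placeholder (a real one would be estimated, for example from contrasting prompt pairs), the layer index and scale are arbitrary, and nothing here is the paper's method.

```python
# Activation-addition sketch: nudge the latent trajectory at one layer and let
# generation proceed. The steering vector below is a random placeholder; a real
# one would be estimated, e.g., from contrasting prompt pairs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

layer_idx, alpha = 6, 4.0                # arbitrary depth and strength
steer = torch.randn(model.config.n_embd)
steer = steer / steer.norm()             # unit-norm direction

def add_direction(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the residual stream.
    hidden = output[0] + alpha * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_direction)
ids = tok("The weather today is", return_tensors="pt").input_ids
with torch.no_grad():
    out_ids = model.generate(ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out_ids[0]))
handle.remove()  # restore the unmodified model
```

The hook adds a fixed direction to the residual stream at one depth, which is about the simplest way to perturb a latent trajectory without touching the prompt or the weights.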