Large language models solve complex logic problems more accurately when they are forced to "think" in a non-English language.
April 24, 2026
Original Paper
Language as a Latent Variable for Reasoning Optimization
arXiv · 2604.21593
The Takeaway
Different languages act as structural modulators that change how a model performs internal inference. When a model switches from English to another language, it accesses its knowledge through a different internal pathway, one that may be better suited to certain reasoning tasks. This suggests that an AI's reasoning process is not language-neutral but is tied to the grammar and vocabulary of the prompt. In practice, engineers may be able to improve the reliability of a model's logic simply by asking it to work through the problem in a more structured language first. The finding challenges the view of translation and reasoning as two separate capabilities, and it points toward prompting strategies grounded in linguistic structure rather than instruction wording alone.
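The paper's exact prompting setup is not reproduced here, but the "reason in another language first" idea can be sketched as a simple prompt builder. Everything below is an illustrative assumption (the function name, the wording of the instruction, and the default choice of German), not the authors' actual protocol:

```python
def language_constrained_prompt(problem: str, reasoning_language: str = "German") -> str:
    """Build a prompt that constrains the model's intermediate reasoning
    to a given language while keeping the final answer in English.

    Hypothetical sketch: the instruction wording and the default
    reasoning language are illustrative, not taken from the paper.
    """
    return (
        f"Work through the following problem step by step, "
        f"writing all intermediate reasoning in {reasoning_language}. "
        f"Then state only the final answer in English.\n\n"
        f"Problem: {problem}"
    )


def unconstrained_prompt(problem: str) -> str:
    """Baseline condition: no language constraint on the reasoning."""
    return f"Work through the following problem step by step.\n\nProblem: {problem}"
```

Comparing model outputs under `language_constrained_prompt` and `unconstrained_prompt` on the same problem set mirrors the constrained vs. unconstrained conditions described in the abstract.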
From the abstract
As LLMs reduce English-centric bias, a surprising trend emerges: non-English responses sometimes outperform English on reasoning tasks. We hypothesize that language functions as a latent variable that structurally modulates the model's internal inference pathways, rather than merely serving as an output medium. To test this, we conducted a Polyglot Thinking Experiment, in which models were prompted to solve identical problems under language-constrained and language-unconstrained conditions. Resu…