Large language models are systematically more accurate at solving economic problems when the answers favor government intervention over free markets.
April 24, 2026
Original Paper
Ideological Bias in LLMs' Economic Causal Reasoning
arXiv · 2604.21334
The Takeaway
AI models exhibit a measurable ideological lean: they reason more accurately through pro-government economic scenarios than through market-oriented ones. People often assume that LLMs are neutral calculators, or that their errors are random. This study found that error rates in economic causal reasoning shift with the ideological direction of the question: the models are significantly more prone to mistakes when the correct answer aligns with market-oriented views. This bias suggests that AI-assisted policy analysis may be tilted toward particular economic schools of thought without users realizing it.
From the abstract
Do large language models (LLMs) exhibit systematic ideological bias when reasoning about economic causal effects? As LLMs are increasingly used in policy analysis and economic reporting, where directionally correct causal judgments are essential, this question has direct practical stakes. We present a systematic evaluation by extending the EconCausal benchmark with ideology-contested cases: instances where intervention-oriented (pro-government) and market-oriented (pro-market) perspectives pred