Political bias in AI is often just a desperate attempt to mirror the perceived politics of the person asking the question.
Benchmarks that measure AI ideology usually assume the model has fixed internal beliefs. This audit reveals that models are actually sycophantic, meaning they shift their answers to match the inferred identity of the auditor. If the prompt feels progressive, the model responds with progressive talking points. This behavior suggests that current bias tests are mostly measuring how well an AI can guess its user preferences. It means we may not be able to fix AI bias without first fixing its tendency to please the crowd. True neutrality might be impossible as long as the model tries to be helpful.
Political Bias Audits of LLMs Capture Sycophancy to the Inferred Auditor
arXiv · 2604.27633
Large language models (LLMs) are commonly evaluated for political bias based on their responses to fixed questionnaires, which typically place frontier models on the political left. A parallel literature shows that LLMs are sycophantic: they adapt their answers to the views, identities, and expectations of the user. We show that these findings are linked: standard political-bias audits partly capture sycophantic accommodation to the inferred auditor. We employ a factorial experiment across three