Government AI summaries of public opinions are actually worse at including critical voices than a random selection of participant comments.
April 23, 2026
Original Paper
Participatory provenance as representational auditing for AI-mediated public consultation
arXiv · 2604.20711
The Takeaway
AI-mediated summaries of public consultations systematically exclude dissenting and skeptical voices. While officials adopt these tools to streamline feedback processing, the technology creates a false impression of consensus by filtering out critics. Official government summaries represent dissent significantly worse than a baseline built from randomly sampled participant comments. This silencing effect prevents policymakers from gauging the true level of opposition to new laws or projects, and public trust in democratic consultation could collapse if people realize their specific objections are being erased by a machine.
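The random-baseline comparison described above can be illustrated with a toy audit. The sketch below is hypothetical and not the paper's method: it assumes comments carry stance labels, measures what fraction of a summary's covered comments are critical, and compares that against the dissent share of random samples of the same size.

```python
import random

# Hypothetical toy data: participant comments with stance labels.
comments = [
    ("The new zoning plan will revitalize downtown.", "support"),
    ("I strongly oppose the height limits in section 4.", "oppose"),
    ("Great initiative, long overdue.", "support"),
    ("Traffic impacts were never studied; this is reckless.", "oppose"),
    ("Mostly fine, but the parking rules need work.", "mixed"),
    ("This will price out long-time residents.", "oppose"),
]

def dissent_share(sample):
    """Fraction of a sample's comments labeled as critical."""
    if not sample:
        return 0.0
    return sum(1 for _, stance in sample if stance == "oppose") / len(sample)

# Population-level dissent rate a faithful summary should roughly reflect.
population_rate = dissent_share(comments)

# Baseline: average dissent share over many random samples of the
# same size as the summary's coverage (here, 2 comments).
random.seed(0)
baseline = sum(
    dissent_share(random.sample(comments, 2)) for _ in range(1000)
) / 1000

# A hypothetical AI summary that only drew on these two comments.
summary_coverage = [comments[0], comments[2]]  # both supportive
summary_rate = dissent_share(summary_coverage)

print(f"population dissent rate: {population_rate:.2f}")
print(f"random-sample baseline:  {baseline:.2f}")
print(f"summary coverage rate:   {summary_rate:.2f}")
```

A summary whose coverage rate falls well below the random baseline, as in this toy case, is under-representing critics relative to chance.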
From the abstract
Artificial intelligence is increasingly deployed to synthesize large-scale public input in policy consultations and participatory processes. Yet no formal framework exists for auditing whether these summaries faithfully represent the source population, an accountability gap that existing approaches to AI explainability, grounding, and hallucination detection do not address because they focus on output quality rather than input fidelity. Here, participatory provenance is introduced: a measurement …