Economics Paradigm Challenge

AI is untrustworthy by design because it is simply not allowed to say "I don't know."

April 2, 2026

Original Paper

Inference Is Not Decision: The Axiomatic Non-Trustworthiness of Always-Answer AI Systems

Thomas Gessler

SSRN · 6313558

The Takeaway

Trustworthiness requires the capacity to refuse to answer when uncertainty is too high. Because modern AI models are mathematically forced to always provide an output, they must suppress uncertainty and present hallucinations as plausible facts, making them structurally incapable of meeting safety standards for critical infrastructure.
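The missing capability the paper describes is sometimes called selective prediction or abstention. A minimal sketch of the idea, in Python, is a wrapper that returns an answer only when the model's confidence clears a threshold and otherwise refuses. The function name, threshold value, and toy probability distributions below are illustrative assumptions, not anything from the paper:

```python
from typing import Optional

def selective_answer(
    probs: dict[str, float],
    threshold: float = 0.8,
) -> Optional[str]:
    """Return the top-scoring answer only if its probability clears
    the threshold; otherwise abstain by returning None, i.e. the
    system's equivalent of saying "I don't know"."""
    best_answer, best_p = max(probs.items(), key=lambda kv: kv[1])
    if best_p < threshold:
        return None  # abstain: uncertainty is too high to answer
    return best_answer

# Hypothetical output distributions from some model:
confident = {"Paris": 0.95, "Lyon": 0.04, "Nice": 0.01}
uncertain = {"Paris": 0.40, "Lyon": 0.35, "Nice": 0.25}

print(selective_answer(confident))  # answers "Paris"
print(selective_answer(uncertain))  # returns None: the system refuses
```

An always-answer system, in the paper's framing, is one where this refusal branch structurally cannot exist: the wrapper must return `best_answer` no matter how flat the distribution is.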

From the abstract

Current debates on generative AI focus predominantly on performance, scalability, and explainability. It is often implicitly assumed that systems capable of producing impressive outputs can also be operated responsibly in critical technical, economic, or legal contexts. This paper challenges that assumption. It starts from a simple axiom: A technical system that is structurally required to generate an answer to every input cannot be operated in a trustworthy manner. Trustworthiness presupposes a […]