SeriesFusion
Science, curated & edited by AI

Human signatures on AI-generated decisions are often mere legitimacy artifacts, concealing the fact that no human actually checked the work.

Systems that require a human in the loop frequently fail because the person becomes a rubber stamp for the machine's output. Most people assume that a human signature on a legal or medical document guarantees a second pair of eyes. In reality, the signature often serves only to shift liability and create a performative illusion of safety. Because of this automation bias, we trust systems that carry the appearance of human oversight without any of the actual critical scrutiny. We are building a world in which accountability is replaced by a symbolic scribble.

Original Paper

Endorsed Automation Bias: When the Human Signature Becomes a Legitimacy Artifact in Hybrid Human-AI Decision Systems

Someyo kamal Utsho

SSRN  ·  6593178

Human-in-the-loop (HITL) architecture has become the dominant safeguard against autonomous AI decision-making errors, mandated by regulatory frameworks including Article 14 of the EU AI Act and embedded in military, clinical, judicial, and administrative governance worldwide. Its foundational assumption is that a human signature on an AI-generated output constitutes evidence of genuine human judgment. This paper demonstrates that this assumption is systematically false. We introduce Endorsed Automation Bias. […]