Rather than hiding responsibility in a 'black box,' AI systems push organizations to assign blame for errors faster than they can understand them.
April 1, 2026
Original Paper
AI and the Acceleration of Responsibility in Sociotechnical Organizations
SSRN · 6391379
The Takeaway
While the common worry is that AI's complexity creates a 'responsibility gap' in which no one can be held accountable, this research argues the opposite risk: AI's instant digital trails produce 'Premature Accountability Convergence.' Organizations use these computational records to lock in an official account of what happened before they have actually investigated the underlying cause, effectively rushing to judgment.
From the abstract
Responsibility in regulated organizations does not emerge only after failure; it is continuously produced through anticipatory alignment, procedural embedding, and retrospective stabilization. As artificial intelligence (AI) systems become integrated into documentation and evaluative infrastructures, responsibility is increasingly mediated through computational representation. Yet AI does not introduce new accountability structures. Instead, it reshapes the temporal dynamics through which respon