Human reviewers in AI legal cases frequently overturn legally correct AI decisions because of their own political leanings.
April 26, 2026
Original Paper
Watching the Watchmen: Overseers’ Algorithm Aversion in AI Arbitration
SSRN · 6636918
The Takeaway
The human in the loop is a standard safety feature intended to catch AI errors and ensure fairness in legal disputes. In practice, however, these overseers often set the law aside to strike down AI decisions that conflict with their personal beliefs. This behavior introduces a hidden layer of bias that is harder to track than the AI's own flaws. Instead of acting as a neutral filter, the human component becomes a source of distortion that undermines the legal process. Relying on human oversight as a cure for algorithmic bias may actually make the system less objective.
From the abstract
<p><i>Artificial intelligence (AI) is rapidly transforming dispute resolution. As AI-native arbitrators and automated award generation become institutionally established, the dominant regulatory and scholarly response has converged on an important safeguard: human-in-the-loop oversight. The EU AI Act, institutional arbitration rules, and judicial practice all require a human decision-maker to review and validate AI-generated awards before they acquire legal force. This consensus rests on an unex