Economics · Nature Is Weird

Putting a 'human in the loop' can actually make an AI system's decisions less accurate.

April 15, 2026

Original Paper

Algorithmic Advice, Human Review, and Shared Liability

SSRN · 6457618

The Takeaway

We feel safer when a human reviews an AI's choice, but this 'check and balance' can degrade the final result. Because both the human and the algorithm's provider bear liability, each changes behavior in ways that create new errors: the human starts deferring to the AI too readily, while the AI is 'tuned' to trigger review only in the most unusual cases. The result can be a 'worst of both worlds' outcome. 'Human oversight' isn't always a safety net; it can act as a noise multiplier that makes the final decision worse than if either had decided alone.
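One channel behind this result can be sketched in a toy simulation (not the paper's model; every number and rule below is an illustrative assumption): the algorithm escalates only its most ambiguous cases for review, and on review the human decides from a noisier private signal. Even though the escalated cases are exactly where the AI is weakest, the AI is still more accurate there than the human, so review drags overall accuracy below the algorithm acting alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
y = rng.integers(0, 2, n)                       # ground-truth binary labels

# AI sees an informative but imperfect calibrated score (parameters illustrative).
ai_score = 0.5 + (y - 0.5) * 0.6 + rng.normal(0, 0.15, n)
ai_pred = (ai_score > 0.5).astype(int)

# The human's private signal is noisier than the AI's score.
human_signal = (y - 0.5) * 0.3 + rng.normal(0, 0.25, n)
human_pred = (human_signal > 0).astype(int)

# Escalation rule: only ambiguous scores trigger costly review,
# and on review the human decides from their own signal alone.
escalate = np.abs(ai_score - 0.5) < 0.12
final_pred = np.where(escalate, human_pred, ai_pred)

acc = lambda p: (p == y).mean()
print(f"AI alone:      {acc(ai_pred):.3f}")
print(f"Human alone:   {acc(human_pred):.3f}")
print(f"Human-in-loop: {acc(final_pred):.3f}  (reviews {escalate.mean():.1%} of cases)")
```

With these (made-up) parameters, the human-in-the-loop system reviews roughly one case in ten and ends up less accurate than the AI alone, despite every review being performed in good faith.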

From the abstract

Algorithmic advice is often used not only to guide decisions but also to determine whether cases are escalated for costly review. I study advice design at deployment for an algorithm provider that observes a calibrated internal score and sends advice to a human decision maker who retains final authority, can pay to open review, and shares misclassification losses with the provider. A startup cost of review makes advice part of the escalation rule: optimal advice design targets the posterior belief …