Economics Collision

We should stop trusting human intuition for decisions the moment the risk of corruption outweighs the value of that intuition.

April 15, 2026

Original Paper

Algorithmic Gatekeeping

SSRN · 6567659

The Takeaway

We like the 'human touch' in things like job interviews or loan approvals because humans can see context. But this paper provides a mathematical tipping point for when to fire the humans: the moment the risk of 'human agency distortion' (bias, favoritism, bribery) outweighs the value of that contextual intuition, an algorithm is objectively better. It suggests that our attachment to the 'human element' is often just a mask for allowing systems to remain corrupt. There is a specific, calculable threshold beyond which a 'cold' algorithm is fairer than a person.
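The threshold logic can be sketched as a toy comparison. This is purely illustrative, not the paper's actual model: all function names and numbers below are hypothetical, and the paper's formal threshold will involve richer structure than a single expected-value inequality.

```python
def expected_value_human(intuition_value: float,
                         distortion_risk: float,
                         distortion_cost: float) -> float:
    """Human gatekeeper: contextual intuition, minus expected agency distortion.

    intuition_value: benefit of cues outside the algorithmic data pipeline.
    distortion_risk: probability the gatekeeper's discretion is misused.
    distortion_cost: harm when it is (e.g., a biased or bribed decision).
    """
    return intuition_value - distortion_risk * distortion_cost


def expected_value_algorithm() -> float:
    """Algorithm: no contextual cues, but no agency distortion either."""
    return 0.0


def prefer_algorithm(intuition_value: float,
                     distortion_risk: float,
                     distortion_cost: float) -> bool:
    """True once expected distortion outweighs the intuition premium."""
    return expected_value_algorithm() >= expected_value_human(
        intuition_value, distortion_risk, distortion_cost)


# Low distortion risk: the human's contextual intuition still wins.
print(prefer_algorithm(intuition_value=1.0, distortion_risk=0.1, distortion_cost=5.0))  # False
# High distortion risk: the threshold is crossed and the algorithm wins.
print(prefer_algorithm(intuition_value=1.0, distortion_risk=0.5, distortion_cost=5.0))  # True
```

The key structural point survives even in this toy form: the cues and the distortion arrive as an inseparable bundle, so raising `intuition_value` only helps if `distortion_risk * distortion_cost` does not rise with it.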

From the abstract

When should routing authority in tiered service systems transfer from human gatekeepers to algorithms? The answer hinges on an inseparable bundle. A human gatekeeper who interacts with customers can elicit contextual cues outside the algorithmic data pipeline, but the same unverifiability that makes these cues inaccessible to algorithms makes them impossible to fully contract on, exposing the system to agency distortion. Transferring authority to an algorithm eliminates both. The regime question