Economics · Nature Is Weird

You are much more likely to be sued for a mistake if you’re an AI than if you’re a human.

April 17, 2026

Original Paper

Formal Neutrality and Unequal Liability: How Algorithmic Aversion Distorts Liability for Algorithmic Torts

SSRN · 6577479

The Takeaway

Even when the law says AI and humans should be treated the same, our brains don't agree. This study found that people are significantly more likely to sue an AI for a negligent mistake than a human who did the exact same thing. This "algorithmic aversion" means AI companies face a much higher practical legal hurdle than human-run businesses, regardless of what the law on the books says. It's a systematic bias that makes AI more vulnerable in court simply because it isn't "one of us." For the future of the technology, it suggests the biggest barrier to AI isn't the code, but our willingness to forgive its mistakes.

From the abstract

The shift in the cause of machine-induced harm from mechanical failures to algorithmic decision-making is challenging the applicability of products liability. Because algorithms now operate machines analogously to humans, a doctrinally coherent response is to subject algorithmic torts to a negligence framework that evaluates the reasonableness of decisions rather than the content of algorithms. This approach offers a theoretically grounded, formally neutral, and normatively appealing solution.