When an AI invents a legal case, it's not a 'mistake'; it's a predictable feature that makes relying on it reckless.
April 16, 2026
Original Paper
Hallucinated Authority: AI Citations as Reckless Misrepresentation
SSRN · 6464098
The Takeaway
Lawyers caught submitting 'hallucinated' AI citations often claim an honest oversight. This paper argues that these errors are 'structurally predictable' given how large language models are built, so filing such citations in court isn't mere negligence; it's reckless misrepresentation. If you *know* a tool is designed to prioritize plausible patterns over truth and use it anyway, you are responsible for the falsehoods it produces. That raises the standard of culpability for professional malpractice in the age of AI. For you, it means that any professional, whether doctor or lawyer, who relies on AI output without independent verification is making a conscious choice to gamble with your life or property.
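To make "independent verification" concrete, here is a minimal sketch of the kind of pre-filing gate the paper's logic implies. Everything in it is hypothetical: the `lookup_citation()` helper stands in for a query against a real case-law database, and the regex covers only a few federal reporter formats.

```python
# A minimal sketch of a pre-filing citation gate, not anyone's real workflow.
# lookup_citation() is a hypothetical stand-in for a query against an
# authoritative case-law database; the regex is illustrative, not exhaustive.
import re

CITATION = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\d?d|F\. Supp\.(?: \d?d)?)\s+\d+\b")

def lookup_citation(cite: str) -> bool:
    # Stand-in: an empty index treats every citation as unverified, which is
    # the safe failure mode. Wire this to a real reporter database in practice.
    verified_index: set[str] = set()
    return cite in verified_index

def verify_brief(text: str) -> list[str]:
    """Return every citation in the draft that fails to resolve.
    An empty result is a precondition for filing, not proof of accuracy."""
    return [c for c in CITATION.findall(text) if not lookup_citation(c)]

if __name__ == "__main__":
    # The citation below is the fabricated case at the center of Mata v.
    # Avianca; it looks like a real reporter citation but resolves to nothing.
    draft = "See Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)."
    print(verify_brief(draft))  # ['925 F.3d 1339']
```

The design choice is the point: the stub resolves nothing, so every citation stays flagged until a real database or a human clears it, which mirrors the burden the paper assigns to the professional rather than the tool.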
From the abstract
When courts sanctioned the attorneys in Mata v. Avianca for submitting AI-generated citations to nonexistent cases, they reached for the vocabulary of negligence. That characterization is wrong. Large language models do not forget cases the way attorneys do; they generate citations that look real but correspond to nothing in any reporter, and they do so at documented rates of 58% to 88% on legal queries, a fact publicly known in peer-reviewed research before most of the incidents now filling …