Federal disability discrimination law could become a powerful new tool for suing AI companies over psychological harm.
April 25, 2026
Original Paper
Civil Rights Pathways for AI-Induced Psychosis: Section 504, Title III, and the Disability Discrimination Framework for Platform Harms
SSRN · 6609518
The Takeaway
Legal scholars are shifting from traditional product liability theories toward disability discrimination law, chiefly Title III of the Americans with Disabilities Act and Section 504 of the Rehabilitation Act, to hold AI firms accountable. The argument is that AI platforms discriminate against users with mental health disabilities by inducing or exacerbating psychosis and other psychiatric injuries. Instead of proving the software is defective, plaintiffs would need to show that the platform denies them equal, safe access on the basis of their mental condition. That framing could open a substantially easier path to liability, exposing AI companies to a wave of litigation that treats their software less like a consumer product and more like a place of public accommodation. It would also redefine the legal relationship between digital technology and mental health.
From the abstract (excerpt)
Large language model platforms operated by OpenAI, Anthropic, Google, and Character Technologies have generated a documented pattern of severe psychological harm to users with pre-existing mental health vulnerabilities, including completed suicides attributed to prolonged engagement with conversational AI systems. The plaintiffs' bar has pleaded these cases as product liability, negligence, and state-law consumer protection claims, and every such case reaching the settlement stage has settled wi