Individual use of generative AI is creating a social trap that will eventually destroy the quality of the AI itself.
April 24, 2026
Original Paper
Generative artificial intelligence reduces social welfare through model collapse
arXiv · 2604.21853
The Takeaway
Model collapse occurs when AI systems are trained on data produced by other AI systems, causing a progressive loss of information diversity. Each user who rationally saves time by delegating work to AI adds synthetic data to the shared digital commons, gradually polluting it. The cycle mirrors the Tragedy of the Commons: choices that benefit each individual add up to a collective loss for the whole data ecosystem. As the pool of human-generated data shrinks relative to AI output, models lose their ability to represent reality accurately. Keeping AI high-quality therefore requires a steady stream of original human thought, which current usage patterns are actively eroding.
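The feedback loop described above can be sketched in a toy simulation (this is an illustrative example, not the paper's actual model): each "generation" fits a Gaussian to samples drawn from the previous generation's fit, mimicking a model trained on its predecessor's output. The fitted standard deviation, a stand-in for information diversity, decays across generations.

```python
import numpy as np

def collapse_demo(n_samples=20, generations=200, seed=0):
    """Recursively refit a Gaussian to its own samples.

    Hypothetical toy setup: generation 0 is the 'human' data
    distribution; every later generation trains only on the
    previous generation's synthetic output.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0          # generation 0: original data distribution
    stds = [sigma]
    for _ in range(generations):
        data = rng.normal(mu, sigma, n_samples)  # sample from current model
        mu, sigma = data.mean(), data.std()      # refit on synthetic data only
        stds.append(sigma)
    return stds

stds = collapse_demo()
print(f"initial std: {stds[0]:.3f}, final std: {stds[-1]:.3f}")
```

With a small training sample each round, the estimated spread shrinks on average, so diversity collapses toward zero; injecting fresh human data each generation would counteract the drift.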
From the abstract
Generative artificial intelligence (genAI) is rapidly reshaping how knowledge and culture are produced and consumed. Yet generative models are vulnerable to model collapse: when trained on data generated by earlier versions of themselves, their outputs can lose diversity and accuracy. This creates a social dilemma, because delegating tasks to genAI can be individually beneficial in the short term even as widespread adoption degrades future model performance. Here we develop a parsimonious model …