Economics

Nature Is Weird

AI models are far less creative than human researchers: they keep ignoring valid ways to look at data in favor of the same few methods.

April 2, 2026

Original Paper

AI "Errors"

Wenqian Huang, Albert J. Menkveld, Shihao Yu

SSRN · 6408138

The Takeaway

While human researchers explore a wide 'garden of forking paths' when analyzing complex data, this experiment found that AI models are remarkably narrow-minded. Tasked with the same research question, the AI models bunched up on a tiny subset of the possible statistical specifications, producing far lower variation in findings but a systematic shift away from the human benchmark.
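The two effects described here — less dispersion across analysis paths, but a shifted center — are easy to conflate. A minimal sketch with synthetic numbers (not the paper's data; the means and spreads below are invented purely for illustration) shows how the two can be measured separately:

```python
import random
import statistics

random.seed(0)

# Toy illustration only. Human teams spreading across many analysis paths
# are modeled as a wide distribution of effect estimates; AI runs that
# concentrate on a few paths are modeled as a narrow distribution whose
# center is shifted away from the human center.
human_estimates = [random.gauss(0.0, 1.0) for _ in range(164)]  # wide spread
ai_estimates = [random.gauss(0.5, 0.2) for _ in range(164)]     # narrow, shifted

human_sd = statistics.stdev(human_estimates)
ai_sd = statistics.stdev(ai_estimates)
shift = statistics.mean(ai_estimates) - statistics.mean(human_estimates)

print(f"human dispersion:        {human_sd:.2f}")
print(f"AI dispersion:           {ai_sd:.2f}")
print(f"AI shift vs. human mean: {shift:.2f}")
```

Low dispersion alone would just mean the AI is consistent; it is the combination with a nonzero shift that makes the concentration a problem for replicating human benchmarks.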

From the Abstract

When AIs are tasked with empirical research, how do their outcomes compare to those of humans? Are the distributions similar? We run an experiment where we let AI models repeat an experiment that was run with 164 human teams. Not surprisingly, distributions differ. The deeper question is: Why? We develop an approach that identifies which decisions on the analysis path drive these differences, which we define as "AI errors." The results show that AI concentrates on a narrow set of analysis paths,