AI & ML Practical Magic

A room full of 100 different analysts can now be simulated in seconds to expose how much a scientific result depends on a researcher's bias.

April 25, 2026

Original Paper

Researcher-Induced Estimation Uncertainty at Scale Using Agentic AI

Brett McCully

SSRN · 6623899

The Takeaway

AI agents can now act as stochastic replicators, generating hundreds of defensible ways to analyze the same dataset. This reveals how small, subjective choices by human researchers can flip a study's final conclusion. It offers a scalable way to surface p-hacking and other forms of unconscious researcher bias: instead of taking one analyst's word for it, we can see the full range of outcomes a single experiment could have produced. By making this kind of transparency cheap and routine, the technique could help restore trust in the scientific method.
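The paper's pipeline uses LLM agents to make the analytic choices; as a minimal sketch of the underlying idea, the toy simulation below replaces the agents with random but defensible choices (outlier trimming, subsample restriction) applied to the same synthetic dataset, then reports the spread of the resulting estimates. All variable names, cutoffs, and data are illustrative, not taken from the paper.

```python
import random
import statistics

random.seed(0)

# Synthetic dataset: outcome y depends weakly on a binary "policy" flag,
# plus a confounder x and noise. Purely illustrative.
data = []
for _ in range(2000):
    policy = random.random() < 0.5
    x = random.gauss(0, 1)
    y = 0.1 * policy + 0.5 * x + random.gauss(0, 1)
    data.append({"policy": policy, "x": x, "y": y})

def one_analyst(rows):
    """Simulate one analyst making independent, defensible choices."""
    # Choice 1: trim outliers at a randomly chosen cutoff, or not at all.
    cutoff = random.choice([None, 2.0, 2.5, 3.0])
    if cutoff is not None:
        rows = [r for r in rows if abs(r["y"]) < cutoff]
    # Choice 2: restrict the sample by the confounder, or keep everyone.
    subsample = random.choice(["all", "x_pos", "x_neg"])
    if subsample == "x_pos":
        rows = [r for r in rows if r["x"] > 0]
    elif subsample == "x_neg":
        rows = [r for r in rows if r["x"] < 0]
    # Estimate: difference in mean outcome between policy and control groups.
    treated = [r["y"] for r in rows if r["policy"]]
    control = [r["y"] for r in rows if not r["policy"]]
    return statistics.mean(treated) - statistics.mean(control)

# 200 independent "analysts" study the identical dataset.
estimates = [one_analyst(data) for _ in range(200)]
print(f"min={min(estimates):+.3f}  "
      f"median={statistics.median(estimates):+.3f}  "
      f"max={max(estimates):+.3f}")
print("negative estimates:", sum(e < 0 for e in estimates), "of", len(estimates))
```

Even with one fixed dataset, the distribution of estimates (and any sign flips within it) shows how much of the reported result rides on choices made before any statistics are run, which is the uncertainty the paper's agent runs are designed to measure.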

From the abstract

Reported standard errors capture sampling uncertainty conditional on one set of researcher decisions, but defensible alternatives in sample construction and specification can shift estimates substantially. I use repeated, independent AI-agent runs, implemented with large language models (LLMs), as stochastic replicators that receive the same prompt and dataset but generate different research choices. Applying the method to the immigration policy-employment question studied by Huntington-Klein et