Presents empirical evidence that AI Scientist agents can genuinely learn from physical experimental feedback via in-context learning.
March 30, 2026
Original Paper
Can AI Scientist Agents Learn from Lab-in-the-Loop Feedback? Evidence from Iterative Perturbation Discovery
arXiv · 2603.26177
The Takeaway
Across 800 replicated experiments in biological screening, researchers demonstrated that sufficiently capable models allow agents to use real-world feedback to iterate on scientific hypotheses. This validates the 'lab-in-the-loop' paradigm for AI-driven discovery: feedback-driven iteration yielded a 53% increase in discoveries that cannot be attributed to mere training-data recall.
From the abstract
Recent work has questioned whether large language models (LLMs) can perform genuine in-context learning (ICL) for scientific experimental design, with prior studies suggesting that LLM-based agents exhibit no sensitivity to experimental feedback. We shed new light on this question by carrying out 800 independently replicated experiments on iterative perturbation discovery in Cell Painting high-content screening. We compare an LLM agent that iteratively updates its hypotheses using experimental feedback…
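The lab-in-the-loop protocol the paper studies can be sketched as a simple propose-test-update cycle. The sketch below is purely illustrative and assumes hypothetical helpers (`run_assay`, `propose`, toy gene-family names): in the actual study, the propose step is an LLM updating its hypotheses in context from Cell Painting screen results, whereas here a trivial similarity heuristic stands in for the agent.

```python
import random

def run_assay(perturbations, true_hits):
    """Hypothetical stand-in for a Cell Painting screen: returns a noisy
    phenotype score per perturbation (high for true hits)."""
    return {p: (1.0 if p in true_hits else 0.0) + random.gauss(0, 0.1)
            for p in perturbations}

def propose(candidates, history, batch_size):
    """Hypothetical agent step. In the paper this is an LLM reasoning in
    context over past results; here, a toy heuristic ranks untested
    candidates by whether they share a gene-family prefix with past hits."""
    tested = {p for batch in history for p in batch}
    hits = {p for batch in history for p, s in batch.items() if s > 0.5}
    untested = [p for p in candidates if p not in tested]
    untested.sort(key=lambda p: -sum(p[:3] == h[:3] for h in hits))
    return untested[:batch_size]

def lab_in_the_loop(candidates, true_hits, rounds=5, batch_size=4):
    """Iterate: propose a batch, run the assay, fold results back in."""
    history, found = [], set()
    for _ in range(rounds):
        batch = propose(candidates, history, batch_size)
        results = run_assay(batch, true_hits)
        history.append(results)
        found |= {p for p, s in results.items() if s > 0.5}
    return found
```

The paper's 53% figure compares this feedback-coupled loop against a baseline that proposes without seeing assay results; in the sketch, that baseline corresponds to ignoring `history` inside `propose`.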