AI & ML · Breaks Assumption

Shows that noisy/incorrect labels are destructive to Reinforcement Learning with Verifiable Rewards (RLVR), contradicting recent high-profile claims that label noise doesn't matter.

March 18, 2026

Original Paper

Noisy Data is Destructive to Reinforcement Learning with Verifiable Rewards

Yuxuan Zhu, Daniel Kang

arXiv · 2603.16140

The Takeaway

This is critical for practitioners scaling RL on math or coding tasks: the paper demonstrates that current algorithms (such as GRPO) cannot overcome poor data quality, making high-fidelity reward verification essential for performance.
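To build intuition for why noisy labels are so damaging, here is a minimal sketch (not from the paper) of a binary verifiable-reward setup where each label is flipped with some probability. The function names and the simulation are illustrative assumptions; the point is that the gap between the reward a correct answer receives and the reward an incorrect answer receives shrinks linearly with the flip rate, vanishing entirely at 50% noise.

```python
import random

def noisy_reward(correct: bool, flip_p: float, rng: random.Random) -> int:
    """Binary verifiable reward (1 if the answer checks out, else 0),
    flipped with probability flip_p to model an incorrect label."""
    r = 1 if correct else 0
    return 1 - r if rng.random() < flip_p else r

def mean_reward_gap(flip_p: float, n: int = 100_000) -> float:
    """Estimate E[reward | correct] - E[reward | incorrect] under label
    noise. The gap shrinks by a factor (1 - 2 * flip_p): at flip_p = 0.5
    the reward carries no learning signal at all."""
    rng = random.Random(0)  # fixed seed for a reproducible estimate
    correct = sum(noisy_reward(True, flip_p, rng) for _ in range(n))
    wrong = sum(noisy_reward(False, flip_p, rng) for _ in range(n))
    return (correct - wrong) / n
```

For example, `mean_reward_gap(0.0)` is 1.0 (clean labels fully separate correct from incorrect answers), while at a 25% flip rate the gap drops to roughly 0.5, and at 50% it is roughly zero.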

From the abstract

Reinforcement learning with verifiable rewards (RLVR) has driven recent capability advances of large language models across various domains. Recent studies suggest that improved RLVR algorithms allow models to learn effectively from incorrect annotations, achieving performance comparable to learning from clean data. In this work, we show that these findings are invalid because the claimed 100% noisy training data is "contaminated" with clean data. After rectifying the dataset with a rigorous re-
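The contamination argument in the abstract is essentially arithmetic: a dataset advertised as 100% noisy that still contains a fraction of correctly labeled items has a lower effective noise rate, so learning from it does not actually test robustness to pure noise. A hypothetical sketch (the function and numbers are illustrative, not from the paper):

```python
def effective_noise_rate(claimed_noise: float, clean_fraction: float) -> float:
    """If a dataset claimed to have `claimed_noise` label noise actually
    contains a `clean_fraction` of correctly labeled items mixed in, only
    the remaining fraction carries the claimed noise."""
    return claimed_noise * (1.0 - clean_fraction)
```

Under this toy model, a "100% noisy" set contaminated with 30% clean data behaves like a 70%-noisy set, which still leaves a usable learning signal.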