AI & ML · Breaks Assumption

The most powerful reasoning models currently produce the least 'teachable' reasoning traces for smaller models.

March 24, 2026

Original Paper

Measuring Reasoning Trace Legibility: Can Those Who Understand Teach?

Dani Roytburg, Shreya Sridhar, Daphne Ippolito

arXiv · 2603.20508

The Takeaway

The paper introduces the concept of 'transfer utility' and shows that high task performance doesn't correlate with trace legibility. This challenges the assumption that better models make better teachers, and it reveals that the reward models currently used for RL tuning do not naturally incentivize clear, useful reasoning chains.
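One way to make the 'transfer utility' idea concrete is as a simple accuracy delta: how much a student model improves after learning from a teacher's traces, compared to its baseline. The sketch below is purely illustrative, it assumes this delta formulation, and all model names and numbers are invented; the paper's actual protocol may differ.

```python
# Toy sketch of a "transfer utility" metric: the accuracy gain a student
# model gets from training on a teacher's reasoning traces.
# All names and numbers are illustrative assumptions, not the paper's data.

def transfer_utility(student_acc_baseline: float,
                     student_acc_after_traces: float) -> float:
    """Student accuracy after training on teacher traces, minus baseline."""
    return student_acc_after_traces - student_acc_baseline

# Hypothetical teachers: the stronger solver can still be the worse teacher.
teachers = {
    "strong_teacher": {"teacher_acc": 0.92, "student_after": 0.61},
    "mid_teacher":    {"teacher_acc": 0.78, "student_after": 0.70},
}
student_baseline = 0.58

for name, stats in teachers.items():
    tu = transfer_utility(student_baseline, stats["student_after"])
    print(f"{name}: teacher_acc={stats['teacher_acc']:.2f}, "
          f"transfer_utility={tu:+.2f}")
```

In this toy example the higher-accuracy teacher yields the smaller transfer utility, which is exactly the decorrelation the paper argues for: a teacher's own performance and the teachability of its traces are separate quantities.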

From the abstract

Language models are increasingly being trained to "reason" before answering users' queries, outputting hundreds or even thousands of tokens worth of deliberation before their final answer. While the main intention of reasoning is to improve models' ability to arrive at a correct answer, we argue that these models should be assessed for the legibility of their reasoning traces in addition to the correctness of their final answers. In this paper, we evaluate 90k traces from 12 Reasoning Language Models.