AI & ML Breaks Assumption

Debunks recent 'evaluation awareness' findings in LLMs by showing that linear probes are actually just tracking formatting artifacts.

March 23, 2026

Original Paper

Is Evaluation Awareness Just Format Sensitivity? Limitations of Probe-Based Evidence under Controlled Prompt Structure

Viliana Devbunova

arXiv · 2603.19426

The Takeaway

It proves that prior evidence of LLMs 'knowing' they are being tested disappears when prompt formats are slightly modified. This is a critical warning for researchers using linear probing to explain internal model states or hidden capabilities.

From the abstract

Prior work uses linear probes on benchmark prompts as evidence of evaluation awareness in large language models. Because evaluation context is typically entangled with benchmark format and genre, it is unclear whether probe-based signals reflect context or surface structure. We test whether these signals persist under partial control of prompt format using a controlled 2x2 dataset and diagnostic rewrites. We find that probes primarily track benchmark-canonical structure and fail to generalize to