AI & ML · Nature Is Weird

A dinky AI can keep pace with a giant model when it's slipped just ten tiny bits of information.

April 6, 2026

Original Paper

Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains

Roy Rinberg, Annabelle Michael Carrell, Simon Henniger, Nicholas Carlini, Keri Warr

arXiv · 2604.02343

The Takeaway

The paper demonstrates that high-level model capabilities can be transmitted through an incredibly narrow channel rather than through gigabytes of text. This suggests that 'distilling' intelligence may be far more efficient than previously imagined.
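
For a sense of scale: ten bits can only single out one of 2^10 = 1,024 possibilities, while a single gigabyte of text already runs to roughly 8 × 10^9 bits, so the channel in the paper's title is narrower by about nine orders of magnitude.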

From the abstract

We study the compression of LLM-generated text across lossless and lossy regimes, characterizing a compression-compute frontier where more compression is possible at the cost of more compute. For lossless compression, domain-adapted LoRA adapters can improve LLM-based arithmetic coding by 2x over compression with the base LLM alone. For lossy compression, prompting a model for a succinct rewrite then applying arithmetic coding can achieve compression ratios of approximately 0.03, a 2x improvement…
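
The lossless side of this pairs a language model's next-token probabilities with arithmetic coding: each token narrows a shared probability interval, and only enough bits to pin down the final nested interval need to be sent. Here is a minimal sketch of that idea; the toy alphabet and the fixed `model_probs` distribution are stand-ins of my own, where the paper would instead query an actual LLM (optionally through a domain-adapted LoRA adapter) for its next-token distribution.

```python
# Minimal sketch of model-driven arithmetic coding with a toy stand-in
# for the LLM's next-token distribution (hypothetical names throughout).
import math
from fractions import Fraction

ALPHABET = ["a", "b", "c", "<eos>"]

def model_probs(prefix):
    """Next-symbol distribution given the symbols decoded so far.
    A real system would query the LLM here; this toy version ignores
    the prefix and returns a fixed distribution."""
    return {"a": Fraction(5, 10), "b": Fraction(3, 10),
            "c": Fraction(1, 10), "<eos>": Fraction(1, 10)}

def interval_to_bits(low, high):
    """Shortest bit string of length k whose dyadic interval
    [n/2^k, (n+1)/2^k) lies entirely inside [low, high)."""
    k = 1
    while True:
        n = math.ceil(low * 2 ** k)
        if Fraction(n + 1, 2 ** k) <= high:
            return format(n, f"0{k}b")
        k += 1

def encode(symbols):
    """Arithmetic-code a symbol sequence (ending in <eos>) into bits."""
    low, high = Fraction(0), Fraction(1)
    prefix = []
    for s in symbols:
        probs = model_probs(prefix)
        width, cum = high - low, Fraction(0)
        for sym in ALPHABET:
            if sym == s:
                # Shrink the interval to this symbol's slice of it.
                low, high = low + width * cum, low + width * (cum + probs[sym])
                break
            cum += probs[sym]
        prefix.append(s)
    return interval_to_bits(low, high)

def decode(bits):
    """Recover the symbols by walking the same nested intervals."""
    k = len(bits)
    v_lo = Fraction(int(bits, 2), 2 ** k)
    v_hi = v_lo + Fraction(1, 2 ** k)
    low, high = Fraction(0), Fraction(1)
    out = []
    while True:
        probs = model_probs(out)
        width, cum = high - low, Fraction(0)
        for sym in ALPHABET:
            s_lo = low + width * cum
            s_hi = s_lo + width * probs[sym]
            if s_lo <= v_lo and v_hi <= s_hi:  # bit interval fits this symbol's slice
                low, high = s_lo, s_hi
                break
            cum += probs[sym]
        else:
            raise ValueError("bits do not decode under this model")
        if sym == "<eos>":
            return out
        out.append(sym)

message = list("aababca") + ["<eos>"]
bits = encode(message)
print(f"{len(message)} symbols -> {len(bits)} bits: {bits}")
assert decode(bits) == list("aababca")
```

Because encoder and decoder share the same model, the better the model predicts the text, the narrower each interval and the fewer bits it takes to identify it, which is where the abstract's LoRA adaptation and "succinct rewrite" step buy their gains.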