AI & ML Paradigm Challenge

You can basically lobotomize an AI’s entire brain and it’ll still learn new tricks if you just clip a tiny 'adapter' onto its random thoughts.

April 13, 2026

Original Paper

A Little Rank Goes a Long Way: Random Scaffolds with LoRA Adapters Are All You Need

Hananel Hazan, Yanbo Zhang, Benedikt Hartl, Michael Levin

arXiv · 2604.08749

The Takeaway

This challenges the fundamental belief that the deep layers of a neural network must be carefully optimized to hold knowledge. It suggests that the 'scaffold' of the network matters less than we thought, potentially making AI training far cheaper and faster.
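To see why the cost savings could be large, consider the parameter count for a single linear layer: freezing the backbone weight and training only a rank-r adapter shrinks the trainable parameter set dramatically. The layer size and rank below are hypothetical choices for illustration, not figures from the paper.

```python
def lora_fraction(d_in: int, d_out: int, r: int) -> float:
    """Fraction of parameters that are trainable when the backbone weight
    (d_out * d_in values) is frozen and only the rank-r adapter factors
    A (r x d_in) and B (d_out x r) are trained."""
    backbone = d_out * d_in
    adapter = r * (d_in + d_out)
    return adapter / (backbone + adapter)

# Hypothetical Transformer-scale layer: 4096 x 4096 weight, rank-8 adapter.
print(f"{lora_fraction(4096, 4096, 8):.4%}")  # → 0.3891%
```

At this size the adapter is 65,536 parameters against a 16.8M-parameter frozen weight, i.e. under half a percent of the layer is trained.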

From the abstract

How many of a neural network's parameters actually encode task-specific information? We investigate this question with LottaLoRA, a training paradigm in which every backbone weight is drawn at random and frozen; only low-rank LoRA adapters are trained. Across nine benchmarks spanning diverse architecture families, from single-layer classifiers to 900M-parameter Transformers, low-rank adapters over frozen random backbones recover 96-100% of fully trained performance while training only 0.5-40% of the parameters.
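The paradigm the abstract describes can be sketched in a few lines of NumPy: draw a backbone weight at random, freeze it, and train only the low-rank factors A and B so the effective weight is W + B @ A. The sizes, learning rate, and toy regression target below are illustrative assumptions, not the paper's actual setup; the target is constructed as the backbone plus a hidden rank-r correction so that a rank-r adapter can in principle fit it.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, n = 32, 16, 4, 256  # hypothetical toy sizes; r << min(d_in, d_out)

# Frozen random backbone weight: drawn once, never updated.
W = rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_out, d_in))

# Trainable LoRA factors: the correction B @ A has rank at most r.
A = 0.1 * rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))  # standard LoRA init: the adapter starts as a no-op

# Toy target: backbone plus a hidden rank-r perturbation (illustrative only).
delta = (0.25 * rng.normal(size=(d_out, r))) @ (0.25 * rng.normal(size=(r, d_in)))
X = rng.normal(size=(d_in, n))
Y = (W + delta) @ X

def mse() -> float:
    """Mean squared error of the adapted model W + B @ A on the toy data."""
    return float(np.mean(((W + B @ A) @ X - Y) ** 2))

loss_before = mse()
lr = 0.05
for _ in range(500):
    err = (W + B @ A) @ X - Y   # residual of the effective weight
    g = err @ X.T / n           # gradient w.r.t. the effective weight
    grad_B = g @ A.T            # chain rule through the factorization B @ A
    grad_A = B.T @ g
    B -= lr * grad_B            # only the adapter factors move;
    A -= lr * grad_A            # W stays frozen throughout
loss_after = mse()

print(f"MSE before: {loss_before:.4f}  after: {loss_after:.6f}")
```

The frozen random W never receives a gradient; all task-specific information ends up in the two small factors, which is the core of the claim being tested at scale in the paper.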