AI & ML Paradigm Shift

Continual Representation Learning (CoRe) moves PEFT from weight-level updates to representation-space interventions, mitigating catastrophic forgetting in dynamic environments.

March 13, 2026

Original Paper

Representation Finetuning for Continual Learning

Haihua Luo, Xuming Ran, Tommi Kärkkäinen, Huiyan Xue, Zhonghua Chen, Qi Xu, Fengyu Cong

arXiv · 2603.11201

The Takeaway

Instead of fine-tuning model weights, where updates drift and erase old knowledge, CoRe learns low-rank linear transformations of the model's hidden states. This gives explicit control over representation drift, so a model can learn new tasks without degrading performance on previously learned ones. That kind of catastrophic forgetting remains a major hurdle for production AI agents.
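
To make the idea concrete, here is a minimal PyTorch sketch of a low-rank intervention applied to frozen hidden states, assuming an edit of the form h' = h + B(A h) with rank r much smaller than the hidden dimension. The class name, rank, and initialization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class LowRankReprIntervention(nn.Module):
    """Hypothetical sketch: a small trainable edit on frozen hidden states.

    The base model's weights stay frozen; only the down/up projections
    (rank r << hidden_dim) are trained per task, so the change to the
    representation space is explicit and easy to constrain or revert.
    """

    def __init__(self, hidden_dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden_dim, rank, bias=False)  # A: d -> r
        self.up = nn.Linear(rank, hidden_dim, bias=False)    # B: r -> d
        nn.init.zeros_(self.up.weight)  # start as the identity map (zero drift)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # h' = h + B(A h): a low-rank linear transformation of the
        # representation, leaving the underlying model weights untouched.
        return hidden_states + self.up(self.down(hidden_states))


if __name__ == "__main__":
    # Usage: attach the intervention to a frozen transformer layer's output.
    d = 768
    h = torch.randn(2, 16, d)  # (batch, seq_len, hidden_dim)
    intervention = LowRankReprIntervention(hidden_dim=d, rank=8)
    print(intervention(h).shape)  # torch.Size([2, 16, 768])
```

Because the edit is a separate, low-rank module rather than an update to the backbone weights, per-task interventions can be kept, swapped, or regularized independently, which is what makes the drift explicit and controllable.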

From the abstract

The world is inherently dynamic, and continual learning aims to enable models to adapt to ever-evolving data streams. While pre-trained models have shown powerful performance in continual learning, they still require finetuning to adapt effectively to downstream tasks. However, prevailing Parameter-Efficient Fine-Tuning (PEFT) methods operate through empirical, black-box optimization at the weight level. These approaches lack explicit control over representation drift, leading to sensitivity to …