AI & ML New Capability

Proposes a parameter-efficient LLM adaptation method that enables rapid specialization on non-stationary streams while preventing catastrophic forgetting.

April 2, 2026

Original Paper

Learning from Many and Adapting to the Unknown in Open-set Test Streams

Xiao Zhang, Juntao Lyu, Tianyu Hu, Qianchuan Zhao, Huimin Ma

arXiv · 2604.00533

The Takeaway

The SyCo method lets LLMs adapt to evolving tasks during real-world deployment without losing their base capabilities. This is critical for agents operating in open-set environments where task distributions shift continually.
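To make the idea concrete, here is a minimal sketch of the general recipe behind parameter-efficient test-time adaptation, not the paper's SyCo algorithm itself: the base weights stay frozen (preserving source knowledge), and only a small low-rank adapter is updated from an unsupervised signal (here, prediction-entropy minimization) computed on an unlabeled test batch. All names and the toy linear model are illustrative assumptions.

```python
# Hypothetical sketch of parameter-efficient test-time adaptation (NOT the
# paper's SyCo method): freeze the base weights, train only a low-rank
# adapter by minimizing prediction entropy on an unlabeled test batch.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 8, 4, 2

W_base = rng.normal(size=(d_in, d_out))        # frozen source knowledge
W0 = W_base.copy()                             # kept to verify it never changes
A = np.zeros((d_in, rank))                     # adapter factor (trainable)
B = rng.normal(scale=0.01, size=(rank, d_out)) # adapter factor (trainable)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    # mean per-example prediction entropy, the unsupervised adaptation signal
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

x = rng.normal(size=(16, d_in))                # unlabeled test batch
ent0 = entropy(softmax(x @ (W_base + A @ B)))

lr, eps = 0.1, 1e-4
for _ in range(50):
    # central-difference numerical gradient w.r.t. the adapter ONLY;
    # W_base is never touched, so source knowledge cannot be overwritten
    for M in (A, B):
        grad = np.zeros_like(M)
        for idx in np.ndindex(M.shape):
            M[idx] += eps
            hi = entropy(softmax(x @ (W_base + A @ B)))
            M[idx] -= 2 * eps
            lo = entropy(softmax(x @ (W_base + A @ B)))
            M[idx] += eps
            grad[idx] = (hi - lo) / (2 * eps)
        M -= lr * grad

ent1 = entropy(softmax(x @ (W_base + A @ B)))
```

Because only `A` and `B` receive updates, reverting to the source model is as simple as dropping the adapter, which is the property the takeaway highlights: specialization without catastrophic forgetting.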

From the abstract

Large Language Models (LLMs) generalize across tasks via reusable representations and flexible reasoning, yet remain brittle in real deployment under evolving tasks and continual distribution shift. A common remedy is Test-Time Adaptation (TTA), but existing methods update models with hand-designed unsupervised objectives over the full parameter space and largely overlook preserving shared source knowledge and the reliability of adaptation signals. Drawing on molecular signaling cascades of