AI & ML · New Capability

Introduces a label-free, output-agnostic method for merging LoRA modules across heterogeneous tasks like classification and regression.

March 30, 2026

Original Paper

Label-Free Cross-Task LoRA Merging with Null-Space Compression

Wonyoung Lee, Wooseong Jeong, Kuk-Jin Yoon

arXiv · 2603.26317

The Takeaway

Previous merging methods break down when task types differ or labels are unavailable; Null-Space Compression (NSC) Merging instead uses the geometry of each adapter's null space as a label-free optimization signal. This lets practitioners combine specialized models into a single unified model without the high cost of joint training or the need for validation datasets.
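The paper's exact objective is not reproduced in this summary, but the core idea of scoring merge candidates by adapter null-space geometry rather than by labels can be sketched. Below is a minimal, illustrative Python sketch assuming a grid search over averaging weights; all function names (lora_delta, null_space_basis, nsc_style_merge) are hypothetical, not the authors' API.

```python
# Illustrative sketch, not the paper's method: score candidate merge weights
# by how little the merged update disturbs each adapter's null space.
import numpy as np

def lora_delta(B: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Full update implied by a LoRA pair: delta_W = B @ A."""
    return B @ A

def null_space_basis(delta: np.ndarray, tol: float = 1e-6) -> np.ndarray:
    """Orthonormal basis of the right null space of delta_W via SVD."""
    _, s, vt = np.linalg.svd(delta)
    rank = int((s > tol * s.max()).sum())
    return vt[rank:].T  # columns span directions this adapter leaves untouched

def null_space_disturbance(merged: np.ndarray, basis: np.ndarray) -> float:
    """How strongly the merged update acts on directions an adapter ignored."""
    return float(np.linalg.norm(merged @ basis))

def nsc_style_merge(deltas, weight_grid):
    """Pick averaging weights minimizing total null-space disturbance.
    Label-free: only the adapters' own geometry is consulted."""
    bases = [null_space_basis(d) for d in deltas]
    best_w, best_cost = None, np.inf
    for w in weight_grid:
        merged = sum(wi * d for wi, d in zip(w, deltas))
        cost = sum(null_space_disturbance(merged, b) for b in bases)
        if cost < best_cost:
            best_w, best_cost = w, cost
    return best_w

# Toy usage: two rank-2 adapters on a 16x16 weight, coarse weight grid.
rng = np.random.default_rng(0)
d1 = lora_delta(rng.normal(size=(16, 2)), rng.normal(size=(2, 16)))
d2 = lora_delta(rng.normal(size=(16, 2)), rng.normal(size=(2, 16)))
grid = [(a, 1.0 - a) for a in np.linspace(0.0, 1.0, 11)]
print(nsc_style_merge([d1, d2], grid))
```

Because the signal comes only from the adapters' weight geometry, no task labels, logits, or validation data enter the search, which is what makes the approach applicable to regression heads as well as classifiers.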

From the abstract

Model merging combines independently fine-tuned checkpoints without joint multi-task training. In the era of foundation models, fine-tuning with Low-Rank Adaptation (LoRA) is prevalent, making LoRA merging a promising target. Existing approaches can work in homogeneous settings where all target tasks are classification, but they often fail when tasks span classification and regression. Approaches using entropy-based surrogates do not apply to regression and are costly for large language models due to l…
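For context, the simplest form of LoRA merging just adds a weighted sum of each task's low-rank update to the shared base weights. A minimal sketch follows; the uniform weights are an assumption for illustration, not the paper's choice.

```python
# Background sketch of plain weighted LoRA merging (not the paper's method):
# each task's adapter contributes delta_W_t = B_t @ A_t to the base weights.
import numpy as np

def merge_lora(base_W, adapters, weights):
    """adapters: list of (B_t, A_t) pairs; weights: one coefficient per task."""
    merged = base_W.copy()
    for (B, A), w in zip(adapters, weights):
        merged += w * (B @ A)  # no joint training, no labels needed
    return merged

# Toy usage: three rank-2 adapters on an 8x8 weight, uniform weights.
base = np.zeros((8, 8))
adapters = [(np.random.randn(8, 2), np.random.randn(2, 8)) for _ in range(3)]
print(merge_lora(base, adapters, [1 / 3, 1 / 3, 1 / 3]).shape)
```

The open question the paper targets is how to choose those coefficients when the tasks are heterogeneous and no labeled validation data is available.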