AI & ML Efficiency Breakthrough

Enables instruction-following in low-resource languages by simply merging a target-language base model with an English-instructed model.

March 31, 2026

Original Paper

Merge and Conquer: Instructing Multilingual Models by Adding Target Language Weights

Eneko Valero, Maria Ribalta i Albado, Oscar Sainz, Naiara Perez, German Rigau

arXiv · 2603.28263

The Takeaway

This approach bypasses the need for expensive language-specific instruction fine-tuning or massive multilingual instruction datasets. It demonstrates that model merging can effectively transfer instruction-following behavior across languages, providing a lightweight path for localizing LLMs.
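
This excerpt does not spell out the paper's exact merging recipe, so as a rough illustration here is a minimal task-arithmetic-style sketch in PyTorch: the target-language delta (target-language base minus English base) is added onto the English-instructed weights. All checkpoint names are hypothetical placeholders, and the sketch assumes the three models share the same architecture and parameter names.

import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint names; any three same-architecture models would do.
base_en = AutoModelForCausalLM.from_pretrained("org/base-en")            # English base
instruct_en = AutoModelForCausalLM.from_pretrained("org/instruct-en")    # English-instructed
base_tgt = AutoModelForCausalLM.from_pretrained("org/base-target-lang")  # target-language base

en_params = dict(base_en.named_parameters())
tgt_params = dict(base_tgt.named_parameters())

# Add the target-language delta onto the instructed weights, in place.
with torch.no_grad():
    for name, p in instruct_en.named_parameters():
        p.add_(tgt_params[name] - en_params[name])

instruct_en.save_pretrained("merged-target-instruct")

The design intuition is that the instructed model's weights already encode instruction-following, while the delta between the two base models encodes the target language; summing them composes the two capabilities without any further training.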

From the abstract

Large Language Models (LLMs) remain heavily centered on English, with limited performance in low-resource languages. Existing adaptation approaches, such as continual pre-training, demand significant computational resources; in the case of instructed models, high-quality instruction data is also required. Both are often inaccessible to low-resource language communities. Under these constraints, model merging offers a lightweight alternative, but its potential in low-resource contexts has remained largely unexplored.