Releases the first large-scale family of learned sparse retrieval (LSR) models specialized for code (up to 8B parameters).
March 24, 2026
Original Paper
On the Challenges and Opportunities of Learned Sparse Retrieval for Code
arXiv · 2603.22008
The Takeaway
The release democratizes high-performance code retrieval, achieving state-of-the-art results on MTEB Code benchmarks. The models enable sub-millisecond retrieval on million-document collections, bridging the gap between lexical and semantic search for developer tools.
From the abstract
Retrieval over large codebases is a key component of modern LLM-based software engineering systems. Existing approaches predominantly rely on dense embedding models, while learned sparse retrieval (LSR) remains largely unexplored for code. However, applying sparse retrieval to code is challenging due to subword fragmentation, semantic gaps between natural-language queries and code, diversity of programming languages and sub-tasks, and the length of code documents, which can harm sparsity and latency.
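To make the LSR idea concrete: unlike dense retrieval, an LSR encoder maps each query and document to a sparse vector of vocabulary-term weights, so scoring reduces to a dot product over shared terms served from an inverted index. A minimal sketch of that scoring step, with hypothetical toy weights standing in for a learned encoder's output (the encoder itself is not shown and is not the paper's model):

```python
from collections import defaultdict


def build_inverted_index(docs):
    """docs: {doc_id: {term: weight}} -> {term: [(doc_id, weight), ...]}."""
    index = defaultdict(list)
    for doc_id, terms in docs.items():
        for term, weight in terms.items():
            index[term].append((doc_id, weight))
    return index


def score(query, index):
    """Sparse dot product between the query and every indexed document."""
    scores = defaultdict(float)
    for term, q_weight in query.items():
        # Only postings for terms the query activates are ever touched,
        # which is why sparsity directly determines retrieval latency.
        for doc_id, d_weight in index.get(term, []):
            scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda kv: -kv[1])


# Hypothetical term weights; a real LSR model would learn these.
docs = {
    "sort.py": {"sort": 1.2, "list": 0.8},
    "http.go": {"request": 1.0, "http": 1.4},
}
index = build_inverted_index(docs)
print(score({"sort": 1.0, "http": 0.5}, index))
```

This also illustrates why long code documents are a problem for LSR: more activated terms means longer posting lists and denser score accumulation, eroding exactly the sparsity that makes inverted-index retrieval fast.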