AI & ML Efficiency Breakthrough

Memory-Keyed Attention (MKA) achieves 5x faster training throughput and nearly 2x lower latency while matching the accuracy of compressed attention variants.

March 24, 2026

Original Paper

MKA: Memory-Keyed Attention for Efficient Long-Context Reasoning

Dong Liu, Yanxuan Yu, Ben Lengerich, Ying Nian Wu

arXiv · 2603.20586

The Takeaway

As context windows expand, KV cache management becomes the primary bottleneck in both training and inference. This hierarchical approach offers a significantly faster alternative to popular compression methods such as Multi-Latent Attention (MLA) without the typical quality trade-offs.
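To see why the KV cache dominates at long context, a back-of-envelope calculation helps. The sketch below uses the standard cache-size formula (2 tensors, keys and values, per layer per token); the model dimensions are illustrative placeholders, not numbers from the paper.

```python
# Back-of-envelope KV cache size. The 2x factor is for keys and values;
# the model shape below is a hypothetical example, not from the paper.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# 32 layers, 32 KV heads, head_dim 128, 128k-token context, fp16:
full = kv_cache_bytes(32, 32, 128, 131_072)
print(f"{full / 2**30:.1f} GiB per sequence")  # -> 64.0 GiB per sequence
```

At that scale a single sequence's cache exceeds the memory of most accelerators, which is why sharing or compressing KV features is so attractive.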

From the abstract

As long-context language modeling becomes increasingly important, the cost of maintaining and attending to large Key/Value (KV) caches grows rapidly, becoming a major bottleneck in both training and inference. While prior works such as Multi-Query Attention (MQA) and Multi-Latent Attention (MLA) reduce memory by sharing or compressing KV features, they often trade off representation quality or incur runtime overhead. We propose Memory-Keyed Attention (MKA), a hierarchical attention mechanism that …
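The KV-sharing idea behind MQA, which the abstract contrasts MKA against, can be sketched in a few lines: every query head attends over a single shared K/V head, shrinking the cache by a factor of the head count. This is a toy single-step illustration of MQA in general, not the paper's MKA mechanism; all shapes and names are assumptions for the example.

```python
import numpy as np

def mqa_step(q, k_cache, v_cache):
    # q: (n_heads, d) -- one query vector per head for the current token.
    # k_cache, v_cache: (seq_len, d) -- ONE K/V head shared by all query
    # heads, so the cache is n_heads times smaller than in standard MHA.
    scores = q @ k_cache.T / np.sqrt(q.shape[-1])          # (n_heads, seq_len)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)              # softmax over positions
    return weights @ v_cache                               # (n_heads, d)

rng = np.random.default_rng(0)
n_heads, d, seq_len = 8, 64, 16
out = mqa_step(rng.normal(size=(n_heads, d)),
               rng.normal(size=(seq_len, d)),
               rng.normal(size=(seq_len, d)))
print(out.shape)  # -> (8, 64)
```

MLA goes further by projecting K/V into a low-rank latent before caching; the abstract's claim is that both routes cost either quality or runtime, which MKA's hierarchical design aims to avoid.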