AI & ML Efficiency Breakthrough

AIMER provides a calibration-free criterion for expert pruning in MoE models that matches state-of-the-art performance in seconds.

March 20, 2026

Original Paper

AIMER: Calibration-Free Task-Agnostic MoE Pruning

Zongfang Liu, Shengkun Tang, Yifan Shen, Huan Wang, Xin Yuan

arXiv · 2603.18492

The Takeaway

MoE models are expensive to store, and traditional pruning requires large calibration datasets to estimate expert importance. AIMER instead ranks experts with an absolute-mean-over-RMS metric computed directly from the weights, so no calibration data is needed, making MoE compression near-instant and task-agnostic.
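To make the idea concrete, here is a minimal sketch of a data-free pruning criterion in the spirit described above. It assumes "absolute-mean-over-RMS" means scoring each expert by the mean absolute value of its weights divided by their root-mean-square; the paper's exact formula, normalization, and keep/drop direction may differ, and `aimer_score` / `prune_experts` are hypothetical names.

```python
import numpy as np

def aimer_score(weight_matrices):
    """Data-free expert score: mean(|w|) / RMS(w) over all of the expert's
    weights. (Hypothetical reading of the metric name; the paper's exact
    definition may differ.)"""
    w = np.concatenate([m.ravel() for m in weight_matrices])
    return np.abs(w).mean() / np.sqrt(np.mean(w ** 2))

def prune_experts(experts, keep_ratio=0.5):
    """Rank experts by score and keep the top fraction.
    No calibration set or forward passes are required."""
    scores = [aimer_score(ws) for ws in experts]
    n_keep = max(1, int(len(scores) * keep_ratio))
    # Assumption: higher score = more important; the paper may rank inversely.
    order = np.argsort(scores)[::-1]
    return sorted(order[:n_keep].tolist())

# Toy example: 8 experts, each a pair of random weight matrices.
rng = np.random.default_rng(0)
experts = [[rng.normal(size=(16, 16)), rng.normal(size=(16, 16))]
           for _ in range(8)]
kept = prune_experts(experts, keep_ratio=0.5)
print(kept)  # indices of the retained experts
```

Because the score depends only on stored weights, it can be computed in a single pass over the checkpoint, which is what makes the approach calibration-free.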

From the abstract

Mixture-of-Experts (MoE) language models increase parameter capacity without proportional per-token compute, but deployment still requires storing all experts, making expert pruning important for reducing memory and serving overhead. Existing task-agnostic expert pruning methods are typically calibration-dependent: they estimate expert importance from routing or activation statistics on a calibration set, which makes pruning outcomes sensitive to the choice of calibration set and adds substantial […]