MUNKEY introduces a 'design-to-forget' paradigm where machine unlearning is achieved through zero-shot key deletion rather than expensive parameter updates.
March 17, 2026
Original Paper
Rethinking Machine Unlearning: Models Designed to Forget via Key Deletion
arXiv · 2603.15033
The Takeaway
Current unlearning methods are post-hoc and require costly retraining or fine-tuning. By decoupling memorization from the model's weights into a memory-augmented transformer architecture, this model enables instant, verifiable forgetting of specific training instances without touching the weights or requiring access to the original training data.
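To make the idea concrete, here is a minimal sketch of a deletable key-value memory: stored instances participate in an attention-style readout, and forgetting an instance is just removing its key. The class and method names are illustrative assumptions, not the paper's actual MUNKEY implementation.

```python
import math

class KeyValueMemory:
    """Hypothetical sketch of a deletable key-value memory.
    Names and structure are illustrative, not the paper's API."""

    def __init__(self, dim):
        self.dim = dim
        self.entries = {}  # instance_id -> (key_vector, value_vector)

    def write(self, instance_id, key, value):
        # Memorize one training instance as an addressable entry.
        self.entries[instance_id] = (key, value)

    def read(self, query):
        # Softmax attention over stored keys; a deleted entry simply
        # stops participating in the readout.
        if not self.entries:
            return [0.0] * self.dim
        scores = {i: sum(q * k for q, k in zip(query, key))
                  for i, (key, _) in self.entries.items()}
        m = max(scores.values())
        weights = {i: math.exp(s - m) for i, s in scores.items()}
        z = sum(weights.values())
        out = [0.0] * self.dim
        for i, (_, value) in self.entries.items():
            w = weights[i] / z
            for d in range(self.dim):
                out[d] += w * value[d]
        return out

    def forget(self, instance_id):
        # Zero-shot unlearning: dropping the key erases the instance's
        # influence with no gradient step or weight update.
        self.entries.pop(instance_id, None)
```

Because forgetting is a dictionary deletion rather than an optimization step, it is both instant and verifiable: after `forget`, the entry provably cannot contribute to any future readout.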
From the abstract
Machine unlearning is rapidly becoming a practical requirement, driven by privacy regulations, data errors, and the need to remove harmful or corrupted training samples. Despite this, most existing methods tackle the problem purely from a post-hoc perspective. They attempt to erase the influence of targeted training samples through parameter updates that typically require access to the full training data. This creates a mismatch with real deployment scenarios where unlearning requests can be ant