Shifts concept unlearning in diffusion models from fragile keyword-based removal to a distributional framework using contextually diverse prompts.
March 20, 2026
Original Paper
A Concept is More Than a Word: Diversified Unlearning in Text-to-Image Diffusion Models
arXiv · 2603.18767
The Takeaway
Existing unlearning methods are brittle and easily bypassed by synonymous prompts; this method achieves robust erasure by covering a concept's full semantic distribution. It strengthens safety and copyright protection by preventing adversarial recovery of 'unlearned' content.
From the abstract
Concept unlearning has emerged as a promising direction for reducing the risks of harmful content generation in text-to-image diffusion models by selectively erasing undesirable concepts from a model's parameters. Existing approaches typically rely on keywords to identify the target concept to be unlearned. However, we show that this keyword-based formulation is inherently limited: a visual concept is multi-dimensional, can be expressed in diverse textual forms, and often overlaps with related concepts.
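To make the contrast concrete, here is a minimal sketch of the idea behind a distributional formulation. The function names, synonym lists, and prompt templates are illustrative assumptions, not the paper's actual method: the point is only that the unlearning target becomes a set of contextually diverse prompts rather than one literal keyword.

```python
# Hypothetical sketch: keyword-based vs. distribution-based unlearning targets.
# All names and templates below are illustrative, not taken from the paper.

def keyword_targets(concept: str) -> list[str]:
    # Fragile formulation: a single literal keyword,
    # easily bypassed by synonyms and paraphrases.
    return [concept]

def diversified_targets(concept: str, synonyms: list[str],
                        contexts: list[str]) -> list[str]:
    # Distributional formulation: cover the concept's semantic spread
    # by embedding the concept and its synonyms in varied contexts.
    terms = [concept, *synonyms]
    return [ctx.format(term=t) for t in terms for ctx in contexts]

prompts = diversified_targets(
    "van Gogh style",
    synonyms=["Starry Night aesthetic", "post-impressionist swirls"],
    contexts=["a painting in {term}", "{term} rendering of a city at night"],
)
# 3 terms x 2 contexts -> 6 unlearning prompts instead of 1 keyword
```

Unlearning against a prompt set like this, rather than the bare keyword, is what makes adversarial rephrasings less likely to recover the erased concept.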