Simple image sharpening serves as a surrogate-free, zero-cost preemptive defense against adversarial attacks.
March 27, 2026
Original Paper
Efficient Preemptive Robustification with Image Sharpening
arXiv · 2603.25244
The Takeaway
The paper challenges the necessity of expensive adversarial training by showing that boosting texture intensity via simple image sharpening can robustify inputs against adversarial perturbations. The result is an immediately deployable, human-interpretable defense for vision systems.
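To make the mechanism concrete, here is a minimal sketch of the kind of sharpening operation the paper builds on: classic unsharp masking, which amplifies high-frequency texture by adding back the difference between an image and a blurred copy of itself. This is a generic illustration, not the paper's exact pipeline; the `amount` parameter and the 3x3 box blur are assumptions for the sketch.

```python
import numpy as np

def sharpen(image: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Unsharp-mask sharpening on a 2D grayscale array in [0, 255].

    Boosts texture by adding `amount` times the high-frequency residual
    (original minus a 3x3 box blur) back onto the image. `amount` is a
    hypothetical strength knob, not a parameter from the paper.
    """
    img = image.astype(np.float64)
    # 3x3 box blur: average each pixel's neighborhood, with edge padding
    padded = np.pad(img, 1, mode="edge")
    blurred = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= 9.0
    # Sharpened = original + amount * (original - blurred), clipped to range
    out = img + amount * (img - blurred)
    return np.clip(out, 0.0, 255.0)
```

Flat regions are left untouched (their blur equals the original), while edges and textures are exaggerated; the paper's claim is that this kind of texture boost pushes inputs away from the non-robust directions attacks exploit.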
From the abstract
Despite their great success, deep neural networks rely on high-dimensional, non-robust representations, making them vulnerable to imperceptible perturbations, even in transfer scenarios. To address this, both training-time defenses (e.g., adversarial training and robust architecture design) and post-attack defenses (e.g., input purification and adversarial detection) have been extensively studied. Recently, a limited body of work has preliminarily explored a pre-attack defense paradigm, termed preemptive robustification.