AI & ML Breaks Assumption

Discovers 'Quality Corruption,' an adversarial failure mode where accuracy collapses while detection counts remain stable, proving robustness is substrate-dependent.

April 2, 2026

Original Paper

Fluently Lying: Adversarial Robustness Can Be Substrate-Dependent

Daye Kang, Hyeongboo Baek

arXiv · 2604.00605

The Takeaway

Challenges the near-universal assumption that adversarial attacks cause detection counts to drop alongside accuracy. It demonstrates that spiking neural network (SNN) detectors can 'fluently lie' about detections, meaning current defense ecosystems calibrated on standard CNNs and Transformers may be fundamentally blind to certain hardware-specific vulnerabilities.

From the abstract

The primary tools used to monitor and defend object detectors under adversarial attack assume that when accuracy degrades, detection count drops in tandem. This coupling was assumed, not measured. We report a counterexample observed on a single model: under standard PGD, EMS-YOLO, a spiking neural network (SNN) object detector, retains more than 70% of its detections while mAP collapses from 0.528 to 0.042. We term this count-preserving accuracy collapse Quality Corruption (QC), to distinguish i
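The decoupling the abstract describes can be made concrete with a small diagnostic sketch. Assuming a monitor that tracks both detection counts and mAP (the function names and thresholds below are illustrative, not from the paper's code; the mAP and count-retention figures are those reported for EMS-YOLO under PGD):

```python
def count_retention(clean_count: int, attacked_count: int) -> float:
    """Fraction of clean-image detections still emitted under attack."""
    return attacked_count / clean_count

def is_quality_corruption(clean_map: float, attacked_map: float,
                          clean_count: int, attacked_count: int,
                          count_thresh: float = 0.7,
                          map_collapse: float = 0.5) -> bool:
    """Flag Quality Corruption: counts look healthy while accuracy collapses.

    A count-only monitor sees retention above `count_thresh` and reports
    "healthy"; checking accuracy retention as well exposes the failure.
    Thresholds are illustrative assumptions, not values from the paper.
    """
    counts_ok = count_retention(clean_count, attacked_count) >= count_thresh
    accuracy_collapsed = attacked_map <= map_collapse * clean_map
    return counts_ok and accuracy_collapsed

# Reported figures: mAP 0.528 -> 0.042 while >70% of detections survive
# (absolute counts of 1000 -> 720 are hypothetical, chosen to match the ratio).
print(is_quality_corruption(0.528, 0.042, 1000, 720))
```

A defense that alarms only on a dropped detection count would return "healthy" on these numbers; the point of the QC framing is that the accuracy check must be independent of the count check.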