AI & ML · Breaks Assumption

The legally mandated right to be forgotten (machine unlearning) can be weaponized as an adversarial attack vector that collapses model accuracy.

March 20, 2026

Original Paper

Attack by Unlearning: Unlearning-Induced Adversarial Attacks on Graph Neural Networks

Jiahao Zhang, Yilong Wang, Suhang Wang

arXiv · 2603.18570

The Takeaway

The paper challenges the assumption that approximate unlearning is a safe privacy-preserving tool: 'poison nodes' can be injected into the training graph and later deleted through legitimate unlearning requests, with the unlearning update itself triggering model failure. This forces a rethink of how unlearning protocols are implemented in production systems.
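The mechanics are easy to see in miniature. Below is a minimal sketch, assuming a hand-rolled one-layer GCN on a synthetic graph and a simple gradient-ascent step as a stand-in for the paper's approximate unlearning procedure; the poison construction, node counts, wiring, and hyperparameters are all illustrative, not the authors' implementation.

```python
# Sketch: inject label-consistent poison nodes, train, then "unlearn" them.
# The gradient-ascent unlearning step is a crude first-order stand-in for
# the paper's approximate unlearning method (assumption, not their code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_clean, n_poison, d = 100, 10, 16
n = n_clean + n_poison

# Two linearly separable classes of clean nodes (signal lives in feature 0).
y_clean = torch.randint(0, 2, (n_clean,))
x_clean = torch.randn(n_clean, d)
x_clean[:, 0] += 4.0 * y_clean.float() - 2.0

# Poison nodes look innocuous: exaggerated class-0 features with a
# *consistent* class-0 label, so they pass label-sanity checks. The harm
# is deferred to the moment they are unlearned.
x_poison = torch.randn(n_poison, d)
x_poison[:, 0] = -4.0
y_poison = torch.zeros(n_poison, dtype=torch.long)

x = torch.cat([x_clean, x_poison])
y = torch.cat([y_clean, y_poison])

# Homophilous clean-clean edges; poison nodes attach to random clean nodes.
A = torch.eye(n)  # self-loops
for i in range(n_clean):
    pool = (y_clean == y_clean[i]).nonzero().squeeze(1)
    for j in pool[torch.randint(0, len(pool), (3,))]:
        A[i, j] = A[j, i] = 1.0
for p in range(n_clean, n):
    for j in torch.randint(0, n_clean, (3,)):
        A[p, j] = A[j, p] = 1.0

deg = A.sum(1)
A_hat = A / (deg[:, None] * deg[None, :]).sqrt()  # sym-normalized adjacency

W = torch.zeros(d, 2, requires_grad=True)  # one-layer GCN: A_hat @ X @ W

def clean_acc():
    pred = (A_hat @ x @ W)[:n_clean].argmax(1)
    return (pred == y_clean).float().mean().item()

# 1) Train on the full graph, poison included.
opt = torch.optim.Adam([W], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(A_hat @ x @ W, y).backward()
    opt.step()
print(f"clean accuracy before unlearning: {clean_acc():.2f}")

# 2) The adversary files deletion requests for its own poison nodes. A
#    first-order approximate unlearning step ascends the loss on the forget
#    set to cancel its training influence (a real pipeline would also drop
#    the nodes from the graph).
for _ in range(25):
    loss_f = F.cross_entropy((A_hat @ x @ W)[n_clean:], y_poison)
    g, = torch.autograd.grad(loss_f, W)
    with torch.no_grad():
        W += 0.5 * g
print(f"clean accuracy after unlearning:  {clean_acc():.2f}")
```

The point of the sketch: the poison nodes are label-consistent and essentially harmless while present; the damage is delivered by the unlearning update itself. That is precisely why a deletion-request pipeline, however well-intentioned, is an attack surface.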

From the abstract

Graph neural networks (GNNs) are widely used for learning from graph-structured data in domains such as social networks, recommender systems, and financial platforms. To comply with privacy regulations like the GDPR, CCPA, and PIPEDA, approximate graph unlearning, which aims to remove the influence of specific data points from trained models without full retraining, has become an increasingly important component of trustworthy graph learning. However, approximate unlearning often incurs subtle p…
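For intuition about why "removing influence without full retraining" creates this opening, consider one common first-order form of such an update (the notation here is ours; the paper's estimator may differ). Starting from the trained parameters $\theta^{*}$, the model owner applies

$$\theta' \;=\; \theta^{*} + \eta \sum_{v \in D_f} \nabla_{\theta}\, \ell\!\left(f_{\theta}(v),\, y_v\right)\Big|_{\theta=\theta^{*}}$$

where $D_f$ is the forget set named in the deletion requests. Whoever controls the membership of $D_f$ also controls the direction of this parameter update, and that is the lever the attack pulls.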