AI & ML · Breaks Assumption

ARES demonstrates high-fidelity data reconstruction from large Federated Learning batches without requiring any architectural modifications to the model.

March 19, 2026

Original Paper

ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery

Zirui Gong, Leo Yu Zhang, Yanjun Zhang, Viet Vo, Tianqing Zhu, Shirui Pan, Cong Wang

arXiv · 2603.17623

The Takeaway

ARES challenges the assumption that large batch sizes and standard architectures provide privacy in FL. By formulating gradient inversion as a sparse recovery task solved with Lasso, it demonstrates that sensitive training data can be leaked in practical, real-world deployment settings.
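
To make "sparse recovery with Lasso" concrete, here is a minimal sketch of the general technique on a toy linear measurement model y = A x + noise with a sparse x. This is only an illustration of L1-regularized recovery, not the ARES attack itself; the names A, x, y and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_features, n_measurements, n_nonzero = 200, 80, 5

# Ground-truth sparse signal: only a few nonzero entries.
x_true = np.zeros(n_features)
support = rng.choice(n_features, size=n_nonzero, replace=False)
x_true[support] = rng.normal(size=n_nonzero)

# Underdetermined linear measurements: fewer equations than unknowns.
A = rng.normal(size=(n_measurements, n_features)) / np.sqrt(n_measurements)
y = A @ x_true + 0.01 * rng.normal(size=n_measurements)

# L1-regularized least squares recovers the sparse signal despite
# the system being underdetermined.
lasso = Lasso(alpha=0.01, max_iter=10_000)
lasso.fit(A, y)
x_hat = lasso.coef_

print("recovered support:", np.sort(np.flatnonzero(np.abs(x_hat) > 1e-3)))
print("true support:     ", np.sort(support))
```

The point of the analogy: even when a system looks hopelessly underdetermined (a large batch mixed into one gradient), a sparsity prior can pin down individual components, which is the intuition behind treating per-sample reconstruction as sparse recovery.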

From the abstract

Federated Learning (FL) enables collaborative model training by sharing model updates instead of raw data, aiming to protect user privacy. However, recent studies reveal that these shared updates can inadvertently leak sensitive training data through gradient inversion attacks (GIAs). Among them, active GIAs are particularly powerful, enabling high-fidelity reconstruction of individual samples even under large batch sizes. Nevertheless, existing approaches often require architectural modifications…
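
To see why shared gradients leak inputs at all, here is a classic textbook observation, sketched in PyTorch: for a single linear layer with a bias, the input is recoverable in closed form from the weight and bias gradients. This is not the ARES method; a batch size of one and a bias term are assumed for the illustration.

```python
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(4, 3)
x = torch.randn(1, 4)                 # the "private" training input
target = torch.randn(1, 3)

loss = torch.nn.functional.mse_loss(layer(x), target)
grad_w, grad_b = torch.autograd.grad(loss, [layer.weight, layer.bias])

# For y = W x + b, each row i of grad_w equals grad_b[i] * x,
# so dividing by the bias gradient recovers x exactly.
i = torch.argmax(grad_b.abs())        # pick a row with a nonzero bias gradient
x_recovered = grad_w[i] / grad_b[i]

print(torch.allclose(x_recovered, x.squeeze(0), atol=1e-5))  # True
```

Real GIAs face the much harder setting of deep networks and large batches, where many samples are mixed into one gradient; that mixing is exactly the obstacle the paper's sparse-recovery formulation targets.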