Introduces a privacy-preserving ML framework that achieves strong non-invertibility without the utility loss of Differential Privacy or the cost of Homomorphic Encryption.
March 18, 2026
Original Paper
Informationally Compressive Anonymization: Non-Degrading Sensitive Input Protection for Privacy-Preserving Supervised Machine Learning
arXiv · 2603.15842
The Takeaway
By combining task-aligned latent representations with topological arguments for non-invertibility, ICA and the VEIL architecture enable high-performance training on sensitive data in untrusted environments, resolving the long-standing trade-off between privacy guarantees and model accuracy.
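The core intuition behind non-invertibility through compression can be illustrated with a toy example. The details of ICA and VEIL are in the paper; the sketch below only shows the basic dimensionality argument, using a hypothetical linear encoder `W` rather than the paper's actual architecture: any map from a high-dimensional input space to a lower-dimensional latent space is many-to-one, so the latent code alone cannot recover the original input.

```python
import numpy as np

# Illustrative sketch only, not the VEIL architecture: a linear map
# R^8 -> R^3 has a 5-dimensional null space, so infinitely many raw
# inputs collapse onto the same latent code.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))           # hypothetical compressive encoder

x = rng.normal(size=8)                # a "sensitive" raw input
null_basis = np.linalg.svd(W)[2][3:]  # rows spanning the null space of W
x_alt = x + null_basis[0]             # a different input with the same latent

z, z_alt = W @ x, W @ x_alt
print(np.allclose(z, z_alt))          # True: the latent cannot identify x
```

A real encoder would be nonlinear and trained to keep only task-relevant information, but the same counting argument applies: once distinct inputs share a latent code, exact inversion is impossible.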
From the abstract
Modern machine learning systems increasingly rely on sensitive data, creating significant privacy, security, and regulatory risks that existing privacy-preserving machine learning (ppML) techniques, such as Differential Privacy (DP) and Homomorphic Encryption (HE), address only at the cost of degraded performance, increased complexity, or prohibitive computational overhead. This paper introduces Informationally Compressive Anonymization (ICA) and the VEIL architecture, a privacy-preserving ML framework…