Our maps of the expansion of the universe are vulnerable to 'optical illusions' that can double AI prediction errors.
April 15, 2026
Original Paper
Adversarial Attacks on Machine Learning Based Photometric Redshift Estimation
SSRN · 6573477
The Takeaway
Researchers found that applying targeted adversarial perturbations to the 'Balmer break' region of galaxy spectra can systematically corrupt AI-based estimates of galaxy distances. These estimates were previously treated as robust scientific outputs, but the result shows they are vulnerable to the same adversarial 'noise' that fools image classifiers: under the targeted attacks, the redshift (distance) estimation error doubled. The implication is that astronomical data pipelines need adversarial hardening, much as self-driving-car perception systems do, and it forces a rethink of how we validate AI findings in the natural sciences.
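To make the attack concrete, here is a minimal toy sketch of a fast-gradient-sign (FGSM-style) perturbation against a photo-z style regressor. Everything here is illustrative and assumed, not from the paper: the "model" is a fixed linear map from five photometric magnitudes to a redshift estimate, with made-up weights.

```python
import numpy as np

# Hypothetical sketch: FGSM-style attack on a toy photo-z regressor.
# The "model" is a fixed linear map from 5 photometric bands (u, g, r, i, z)
# to a redshift estimate; the weights are illustrative, not from the paper.
rng = np.random.default_rng(0)
w = rng.normal(size=5)          # toy model weights
b = 0.1                         # toy bias

def predict(x):
    return x @ w + b

x = rng.normal(size=5)          # one galaxy's photometry (toy values)
z_true = predict(x)             # treat the clean prediction as "truth"

# For a linear model, the gradient of the prediction w.r.t. the input is
# just w, so the sign-gradient step that most increases the output is:
eps = 0.05
x_adv = x + eps * np.sign(w)    # fast-gradient-sign perturbation

err_clean = abs(predict(x) - z_true)       # 0 by construction
err_adv = abs(predict(x_adv) - z_true)     # grows by eps * ||w||_1
print(err_clean, err_adv)
```

The point of the sketch is that a tiny, structured nudge to each band (here bounded by `eps`) moves the prediction by `eps * ||w||_1`, an error that scales with the model's sensitivity rather than with the size of the perturbation the survey would notice.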
From the abstract
The success of next-generation survey missions, including LSST and Euclid, depends on the precision of photometric redshift (photo-z) estimation. Although high-performance architectures like Multi-Layer Perceptrons (MLPs) and XGBoost achieve remarkable accuracy, their resilience against adversarial perturbations remains a blind spot for cosmological pipelines. This paper conducts an adversarial test on these models using a selected sample of 100,000 galaxies from SDSS DR17 (Sloan Digital Sky Survey, Data Release 17).