SeriesFusion
Science, curated & edited by AI

Perfect prediction is a dangerous trap: models optimized for it can end up blind to the actual causes of the events they are modeling.

AI-generated illustration

Neural networks trained for high-accuracy prediction systematically prioritize environment-driven patterns over genuine causal links. The paper's impossibility theorem shows that as models grow more complex, they become less likely to capture the true causes behind their data: a model might predict a patient's health perfectly by attending to hospital room numbers rather than biological symptoms. This structural limit means that making an AI better at predicting the future can actually make it worse at making decisions, and it forces a rethink of how we train AI for medicine, law, and high-stakes policy.
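To see how a predictor can be accurate yet causally wrong, here is a minimal sketch of the room-number example. The setup is hypothetical (it is not the paper's experiment): a synthetic dataset in which an environment feature, room assignment, is a near-noiseless proxy for the outcome, so an ordinary least-squares predictor achieves near-perfect accuracy while placing almost no weight on the causal feature.

```python
# Hypothetical illustration: a predictor that reaches near-perfect accuracy
# by loading on an environment proxy ("room number") instead of the causal
# feature ("symptom severity").
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

severity = rng.normal(size=n)                   # causal system variable
outcome = severity + 0.5 * rng.normal(size=n)   # health outcome driven by severity
room = outcome + 0.05 * rng.normal(size=n)      # environment proxy: sicker patients
                                                # land in certain rooms, so "room"
                                                # tracks the outcome almost noiselessly

X = np.column_stack([severity, room])
w, *_ = np.linalg.lstsq(X, outcome, rcond=None)  # least-squares predictor

pred = X @ w
r2 = 1 - np.var(outcome - pred) / np.var(outcome)
print(f"R^2 = {r2:.3f}")                         # near-perfect prediction
print(f"weight on causal severity:   {w[0]:+.3f}")
print(f"weight on environment room:  {w[1]:+.3f}")  # dominates, despite being non-causal
```

The predictor is essentially optimal for prediction, yet intervening on the causal variable (treating the symptoms) is invisible to it; that gap between predictive and causal quality is what the paper formalizes.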

Original Paper

The Predictive-Causal Gap: An Impossibility Theorem and Large-Scale Neural Evidence

Kejun Liu

arXiv  ·  2605.05029

We report a systematic failure mode in predictive representation learning. Across 2695 neural network configurations trained to predict linear-Gaussian dynamics, the optimal encoder tracks the environment rather than the system it is meant to model. The mean causal fidelity (the fraction of encoder sensitivity allocated to system degrees of freedom) is 0.49, and only 2.5% of configurations exceed 0.70. The failure intensifies with dimension: at N=100, the optimal encoder becomes causally blind.
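To make the headline metric concrete, here is a minimal sketch of how a causal-fidelity score of this kind could be computed; the finite-difference Jacobian and the toy linear encoder below are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a causal-fidelity metric: the fraction of an encoder's input
# sensitivity that falls on the system's degrees of freedom rather than the
# environment's. Jacobian-based sensitivity is an assumption for illustration.
import numpy as np

def causal_fidelity(encode, x, n_system, eps=1e-4):
    """Fraction of encoder sensitivity on the first n_system input dims.

    encode: function mapping an input vector to a latent vector.
    x: input point at which sensitivity is measured, system dims first.
    """
    d = x.size
    base = encode(x)
    # Finite-difference Jacobian: J[i, j] = d latent_i / d input_j
    J = np.stack([(encode(x + eps * np.eye(d)[j]) - base) / eps
                  for j in range(d)], axis=1)
    sens = (J ** 2).sum(axis=0)          # squared sensitivity per input dim
    return sens[:n_system].sum() / sens.sum()

# Toy linear encoder over a concatenated [system | environment] input.
rng = np.random.default_rng(1)
n_sys, n_env, n_latent = 4, 6, 3
W = np.concatenate([0.2 * rng.normal(size=(n_latent, n_sys)),   # weak system weights
                    1.0 * rng.normal(size=(n_latent, n_env))],  # strong environment weights
                   axis=1)
encode = lambda x: W @ x

x0 = rng.normal(size=n_sys + n_env)
print(f"causal fidelity: {causal_fidelity(encode, x0, n_sys):.2f}")  # well below 0.5
```

Under this reading, a fidelity of 0.49 means roughly half of the encoder's sensitivity is spent on environment dimensions, and a score near zero corresponds to the causally blind regime the abstract describes at N=100.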