AI & ML Paradigm Shift

A new self-refining surrogate framework enables neural models to simulate complex dynamical systems over arbitrarily long horizons without the standard failure mode of compounding error.

March 19, 2026

Original Paper

Towards Infinitely Long Neural Simulations: Self-Refining Neural Surrogate Models for Dynamical Systems

Qi Liu, Laure Zanna, Joan Bruna

arXiv · 2603.17750

The Takeaway

Autoregressive neural surrogates typically drift and fail over long rollouts; this paper introduces a conditional diffusion model that balances short-term fidelity with long-term consistency. The result is 'infinitely' long, stable simulations of physical and dynamical systems, clearing a major hurdle for AI in scientific computing.
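The failure mode being addressed can be seen in a toy setting. The sketch below is not the paper's model: it uses a simple 2-D rotation as the "true" dynamics and a hypothetical surrogate that rotates by a slightly wrong angle, standing in for a learned one-step map with small per-step error. Iterating the surrogate autoregressively makes the tiny per-step bias compound into a large trajectory error.

```python
import numpy as np

def rot(a):
    """2-D rotation matrix through angle a."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

theta = 0.1                    # true per-step rotation of the system
delta = 1e-3                   # tiny systematic bias in the "learned" map
A_true = rot(theta)
A_surr = rot(theta + delta)    # stand-in for an imperfect neural surrogate

x_true = np.array([1.0, 0.0])
x_surr = np.array([1.0, 0.0])
errors = []
for t in range(1000):
    # autoregressive rollout: each model is fed its own previous output
    x_true = A_true @ x_true
    x_surr = A_surr @ x_surr
    errors.append(float(np.linalg.norm(x_true - x_surr)))

print(f"error after 10 steps:   {errors[9]:.4f}")
print(f"error after 1000 steps: {errors[-1]:.4f}")
```

Even though the one-step error is on the order of 1e-3, the rollout error grows roughly linearly with horizon length and is near 1 (the full amplitude of the state) after 1000 steps. This is the distribution drift the paper's self-refining diffusion approach is designed to suppress.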

From the abstract

Recent advances in autoregressive neural surrogate models have enabled orders-of-magnitude speedups in simulating dynamical systems. However, autoregressive models are generally prone to distribution drift: compounding errors in autoregressive rollouts that severely degrade generation quality over long time horizons. Existing work attempts to address this issue by implicitly leveraging the inherent trade-off between short-time accuracy and long-time consistency through hyperparameter tuning. […]