AI & ML Breaks Assumption

Exposes 'shortcut learning' in differentiable simulators, where models non-causally exploit future information to 'regret' past mistakes rather than learning to recover from them.

March 25, 2026

Original Paper

Rectify, Don't Regret: Avoiding Pitfalls of Differentiable Simulation in Trajectory Prediction

Harsh Yadav, Christian Bohn, Tobias Meisen

arXiv · 2603.23393

The Takeaway

The paper challenges the assumption that fully differentiable closed-loop training is always superior for robotics. By explicitly severing the computation graph between simulation steps, the authors force models to 'rectify' errors rather than 'regret' them, yielding a 33% reduction in collisions in real-world scenarios.

From the abstract

Current open-loop trajectory models struggle in real-world autonomous driving because minor initial deviations often cascade into compounding errors, pushing the agent into out-of-distribution states. While fully differentiable closed-loop simulators attempt to address this, they suffer from shortcut learning: the loss gradients flow backward through induced state inputs, inadvertently leaking future ground-truth information directly into the model's own previous predictions. The model exploits this leak as a shortcut, adjusting earlier predictions with information it could never have at inference time, rather than learning to recover from its own errors.
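The gradient leak, and the fix of severing the computation graph between steps, can be illustrated with a minimal forward-mode autodiff sketch. This is a hypothetical one-parameter linear policy, not the authors' architecture: each prediction is fed back as the next state, and we compare the parameter gradient with and without detaching that induced state.

```python
def rollout_grad(theta, x0, ys, detach):
    """Closed-loop rollout of a toy policy p_t = theta * x_t, where the
    simulator feeds each prediction back as the next state x_{t+1} = p_t.
    Each value carries its derivative w.r.t. theta (forward-mode dual number).
    Returns (loss, d_loss/d_theta)."""
    x_val, x_d = x0, 0.0            # state and d(state)/d(theta)
    loss_val, loss_d = 0.0, 0.0
    for y in ys:
        # prediction and its derivative via the product rule
        p_val = theta * x_val
        p_d = x_val + theta * x_d   # d(theta)/d(theta) = 1
        loss_val += (p_val - y) ** 2
        loss_d += 2.0 * (p_val - y) * p_d
        x_val = p_val               # induced state for the next step
        # Severing the graph: zero the derivative flowing through the state,
        # so future losses cannot reach back and 'edit' past predictions.
        x_d = 0.0 if detach else p_d
    return loss_val, loss_d
```

With theta=0.5, x0=1.0, and targets [0, 0], the fully differentiable rollout gives dL/dtheta = 1.5, while the severed rollout gives 1.25; the 0.25 difference is exactly the gradient path through the induced state, i.e. the non-causal shortcut the paper removes.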