A decoder-free world model that trains 1.59x faster than DreamerV3 while outperforming it on tasks with small, task-relevant objects.
March 20, 2026
Original Paper
R2-Dreamer: Redundancy-Reduced World Models without Decoders or Augmentation
arXiv · 2603.18202
The Takeaway
Traditional world models waste significant compute on reconstructing task-irrelevant background details. By replacing the decoder with a Barlow Twins-inspired redundancy-reduction objective, R2-Dreamer speeds up training and improves representation robustness without relying on heavy data augmentation.
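To make the idea concrete, here is a minimal sketch of a Barlow Twins-style redundancy-reduction loss in PyTorch. The function name, the `lambda_offdiag` weight, and the normalization details are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def barlow_twins_loss(z1, z2, lambda_offdiag=5e-3, eps=1e-6):
    """Redundancy-reduction loss over two batches of embeddings.

    z1, z2: (batch, dim) representations of two views of the same input.
    Illustrative sketch, not the paper's implementation.
    """
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)

    n, d = z1.shape
    # Cross-correlation matrix between the two views.
    c = (z1.T @ z2) / n

    # Invariance term: pull diagonal entries toward 1 (the views agree).
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal entries toward 0
    # (feature dimensions are decorrelated).
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag
```

Intuitively, the diagonal term makes the two embeddings agree while the off-diagonal term decorrelates feature dimensions, so the representation does not spend capacity encoding redundant, task-irrelevant detail.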
From the abstract
A central challenge in image-based Model-Based Reinforcement Learning (MBRL) is to learn representations that distill essential information from irrelevant visual details. Reconstruction-based methods, while promising, often waste capacity on large task-irrelevant regions. Decoder-free methods instead learn robust representations by leveraging Data Augmentation (DA), but reliance on such external regularizers limits versatility. We propose R2-Dreamer, a decoder-free MBRL framework with a self-supervised …
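For contrast, the sketch below shows, under loose assumptions, how a representation-level objective could stand in for the pixel-reconstruction term of a decoder-based world model. The `projector`, the prior/posterior latent pairing, and the reuse of `barlow_twins_loss` from the sketch above are hypothetical choices for illustration, not details confirmed by the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def reconstruction_term(decoder: nn.Module,
                        latent: torch.Tensor,
                        obs: torch.Tensor) -> torch.Tensor:
    # Decoder-based world models: rebuild every pixel of the observation,
    # spending capacity on task-irrelevant background as well.
    return F.mse_loss(decoder(latent), obs)

def decoder_free_term(projector: nn.Module,
                      posterior_latent: torch.Tensor,
                      prior_latent: torch.Tensor) -> torch.Tensor:
    # Hypothetical decoder-free alternative: align the dynamics model's
    # predicted (prior) latent with the encoder's (posterior) latent in
    # representation space via redundancy reduction, so no pixels are
    # reconstructed. The prior/posterior pairing is an assumption, not a
    # detail taken from the paper.
    return barlow_twins_loss(projector(posterior_latent),
                             projector(prior_latent))
```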