AI & ML Efficiency Breakthrough

Achieves a 45x reduction in video generation inference latency and 2.5x higher training throughput using an efficient solution-flow framework.

March 31, 2026

Original Paper

EFlow: Fast Few-Step Video Generator Training from Scratch via Efficient Solution Flow

Dogyun Park, Yanyu Li, Sergey Tulyakov, Anil Kag

arXiv · 2603.27086

The Takeaway

Scaling video diffusion is typically bottlenecked by attention complexity and sampling steps; EFlow solves both via Gated Local-Global attention and a new few-step training recipe. This makes from-scratch video model training and high-speed generation accessible with significantly less compute.
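The paper does not spell out the Gated Local-Global attention mechanism here, but the general idea of mixing a cheap windowed-attention branch with a coarse strided global branch through a learned gate can be sketched as follows. This is a minimal toy illustration, not EFlow's actual layer: the projections, window size, strided global selection, and sigmoid gate below are all illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def gated_local_global(x, window=4):
    """Toy sketch: each token attends within a local window (linear cost)
    and to a strided subset of tokens (coarse global context); a sigmoid
    gate mixes the two branches. Projections are identity for brevity."""
    n, d = x.shape
    out_local = np.empty_like(x)
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        out_local[i] = attention(x[i:i+1], x[lo:hi], x[lo:hi])[0]
    # Global branch: attend only to every `window`-th token.
    k_glob = x[::window]
    out_global = attention(x, k_glob, k_glob)
    # Illustrative per-token gate (a real gate would be learned).
    gate = 1.0 / (1.0 + np.exp(-x.mean(axis=-1, keepdims=True)))
    return gate * out_local + (1.0 - gate) * out_global

x = np.random.default_rng(1).normal(size=(16, 8))
y = gated_local_global(x)
print(y.shape)  # (16, 8)
```

The point of such hybrids is cost: the local branch scales linearly in sequence length, and the global branch sees only a downsampled key set, so neither pays the full quadratic attention bill per step.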

From the abstract

Scaling video diffusion transformers is fundamentally bottlenecked by two compounding costs: the quadratic complexity of attention per step, and the iterative sampling steps. In this work, we propose EFlow, an efficient few-step training framework that tackles these bottlenecks simultaneously. To reduce sampling steps, we build on a solution-flow objective that learns a function mapping a noised state at time t to time s. Making this formulation computationally feasible and high-quali…
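To see why a solution-flow objective cuts sampling steps, consider a toy linear flow where the map from time t to time s is known in closed form. The sketch below is an illustration of the general idea, not EFlow's learned model: for a rectified-flow-style interpolation the velocity is constant along each path, so a single evaluation of the solution map replaces many Euler sub-steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear flow: x_t = (1 - t) * x0 + t * x1, so the velocity
# v = x1 - x0 is constant along each straight-line path.
x0 = rng.normal(size=4)   # "data" sample
x1 = rng.normal(size=4)   # noise sample
v = x1 - x0

def solution_map(x_t, t, s):
    """Exact solution map F(x_t, t, s) for this toy flow: jump
    directly from the state at time t to the state at time s.
    A solution-flow model learns an approximation of such a map."""
    return x_t + (s - t) * v

# One evaluation: jump from pure noise at t = 1 straight to s = 0.
one_step = solution_map(x1, 1.0, 0.0)

# Baseline: 50-step Euler integration of the same velocity field.
x = x1.copy()
ts = np.linspace(1.0, 0.0, 51)
for t_cur, t_next in zip(ts[:-1], ts[1:]):
    x = x + (t_next - t_cur) * v

print(np.allclose(one_step, x0), np.allclose(x, x0))  # True True
```

For a real (curved) diffusion flow the exact map has no closed form, which is where the learned few-step solution function comes in; the training recipe that makes learning that map feasible is the paper's contribution.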