AI & ML · New Capability

DreamLite enables sub-second 1024×1024 image generation and editing on mobile devices using a unified 0.39B-parameter model.

March 31, 2026

Original Paper

DreamLite: A Lightweight On-Device Unified Model for Image Generation and Editing

Kailai Feng, Yuxiang Wei, Bo Chen, Yang Pan, Hu Ye, Songwei Liu, Chenqian Yan, Yuan Gao

arXiv · 2603.28713

The Takeaway

This is the first on-device model to unify high-resolution text-to-image generation and text-guided editing in a single lightweight network. It bridges the gap between server-side model capabilities and the strict latency/memory constraints of edge deployment on smartphones.
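To make the memory constraint concrete, here is a rough back-of-the-envelope sketch of what a 0.39B-parameter model's weights occupy at common numeric precisions. The 0.39B figure comes from the paper's abstract; the precision choices are illustrative assumptions, not details from the paper, and real on-device usage also includes activations, caches, and runtime overhead.

```python
# Approximate weight storage for a 0.39B-parameter model at common
# precisions. Illustrative only: excludes activations and runtime overhead.

PARAMS = 0.39e9  # 0.39B parameters (from the abstract)

BYTES_PER_PARAM = {
    "fp32": 4,
    "fp16": 2,
    "int8": 1,
    "int4": 0.5,  # assumes packed 4-bit weights
}

def weight_footprint_gb(params: float, precision: str) -> float:
    """Weight storage in GB (using 1 GB = 1e9 bytes)."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"{precision}: {weight_footprint_gb(PARAMS, precision):.2f} GB")
```

At fp16 the weights alone come to roughly 0.78 GB, which is why sub-1B parameter counts matter for smartphone deployment, where multi-billion-parameter server models would not fit comfortably in memory.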

From the abstract

Diffusion models have made significant progress in both text-to-image (T2I) generation and text-guided image editing. However, these models are typically built with billions of parameters, leading to high latency and increased deployment challenges. While on-device diffusion models improve efficiency, they largely focus on T2I generation and lack support for image editing. In this paper, we propose DreamLite, a compact unified on-device diffusion model (0.39B) that supports both T2I generation and text-guided image editing. […]