AI & ML New Capability

UniQueR reconstructs full 3D scenes (including occluded areas) from unposed images in a single forward pass.

March 25, 2026

Original Paper

UniQueR: Unified Query-based Feedforward 3D Reconstruction

Chensheng Peng, Quentin Herau, Jiezhi Yang, Yichen Xie, Yihan Hu, Wenzhao Zheng, Matthew Strong, Masayoshi Tomizuka, Wei Zhan

arXiv · 2603.22851

The Takeaway

Unlike current feedforward models, which predict pixel-aligned outputs and are therefore limited to visible surfaces, this framework uses sparse 3D queries to infer global scene structure, including occluded regions. It achieves better rendering quality than state-of-the-art dense models while using an order of magnitude fewer primitives.

From the abstract

We present UniQueR, a unified query-based feedforward framework for efficient and accurate 3D reconstruction from unposed images. Existing feedforward models such as DUSt3R, VGGT, and AnySplat typically predict per-pixel point maps or pixel-aligned Gaussians, which remain fundamentally 2.5D and limited to visible surfaces. In contrast, UniQueR formulates reconstruction as a sparse 3D query inference problem. Our model learns a compact set of 3D anchor points that act as explicit geometric queries […]
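To make the sparse-query idea concrete, here is a minimal numpy sketch of one query readout step: a small set of learnable query embeddings cross-attends over flattened image features, and a linear head maps each query to a 3D anchor point. All names, shapes, and the linear head are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch: sparse queries attending to image features.
# Shapes and layer choices are assumptions for illustration only.
rng = np.random.default_rng(0)

H, W, d = 32, 32, 64            # feature map size and channel dim
M = 64                          # number of sparse 3D queries (M << H*W)

feats = rng.standard_normal((H * W, d))     # flattened image features (N, d)
queries = rng.standard_normal((M, d))       # learnable query embeddings (M, d)
W_xyz = rng.standard_normal((d, 3)) * 0.01  # toy linear head -> 3D coordinates

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: each query aggregates global image context, so a query
# can describe geometry that no single pixel ray observes directly.
attn = softmax(queries @ feats.T / np.sqrt(d))  # (M, N) attention weights
ctx = attn @ feats                              # (M, d) per-query context
anchors = ctx @ W_xyz                           # (M, 3) predicted anchor points

print(anchors.shape)  # 64 primitives, versus 1024 pixel-aligned ones
```

The shape mismatch at the end (64 anchors versus 1024 pixels) is the point: a pixel-aligned model would emit one primitive per pixel, while a query-based readout decouples the primitive count from image resolution.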