Pretrained 3D generative models can be repurposed for high-quality part segmentation using less than 1% of the typical labeled data.
March 18, 2026
Original Paper
SegviGen: Repurposing 3D Generative Model for Part Segmentation
arXiv · 2603.16869
The Takeaway
SegviGen demonstrates that the structured priors encoded in pretrained generative models are superior to traditional discriminative features for 3D understanding, drastically reducing the annotation bottleneck for 3D computer vision tasks.
From the abstract
We introduce SegviGen, a framework that repurposes native 3D generative models for 3D part segmentation. Existing pipelines either lift strong 2D priors into 3D via distillation or multi-view mask aggregation, often suffering from cross-view inconsistency and blurred boundaries, or explore native 3D discriminative segmentation, which typically requires large-scale annotated 3D data and substantial training resources. In contrast, SegviGen leverages the structured priors encoded in pretrained 3D …
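To make the baseline the abstract contrasts against concrete, here is a minimal sketch of multi-view mask aggregation (an illustration of the general technique, not the paper's code or SegviGen itself): each 3D point is projected into every view with a 3×4 camera matrix, reads the 2D segmentation label at the pixel it lands on, and takes a majority vote across views. The cross-view inconsistency and blurred boundaries the abstract mentions arise precisely where these per-view votes disagree.

```python
import numpy as np

def aggregate_view_masks(points, cameras, view_masks, num_labels):
    """Majority-vote a per-point label from per-view 2D segmentation masks.

    points:     (N, 3) array of 3D point coordinates.
    cameras:    list of (3, 4) projection matrices, one per view (assumed given).
    view_masks: list of (H, W) integer label maps, one per view.
    num_labels: number of distinct part labels.
    """
    votes = np.zeros((len(points), num_labels), dtype=np.int64)
    homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    for P, mask in zip(cameras, view_masks):
        # Project to pixel coordinates via the camera matrix.
        uvw = homo @ P.T
        uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
        h, w = mask.shape
        # Only points that land inside this view's image cast a vote.
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        labels = mask[uv[inside, 1], uv[inside, 0]]
        votes[np.flatnonzero(inside), labels] += 1
    # Each point takes the label most views agree on.
    return votes.argmax(axis=1)
```

Points whose views disagree get whichever label narrowly wins the vote, which is why aggregated boundaries tend to look blurred; this is the failure mode that motivates a native 3D approach.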