

GARF: Learning Generalizable 3D Reassembly for Real-World Fractures

Authors: Sihang Li, Zeyu Jiang, Grace Chen, Chenyang Xu, Siqi Tan, Xue Wang, Irving Fang, Kristof Zyskowski, Shannon P. McPherron, Radu Iovita, Chen Feng, Jing Zhang

3D reassembly is a challenging spatial intelligence task with broad applications across scientific domains. While large-scale synthetic datasets have fueled promising learning-based approaches, their generalizability to different domains is limited. Critically, it remains uncertain whether models trained on synthetic datasets can generalize to real-world fractures, where breakage patterns are more complex. To bridge this gap, we propose GARF, a generalizable 3D reassembly framework for real-world fractures. GARF leverages fracture-aware pretraining to learn fracture features from individual fragments, while flow matching enables precise 6-DoF alignments. At inference time, we introduce one-step preassembly, improving robustness to unseen objects and varying numbers of fractures. In collaboration with archaeologists, paleoanthropologists, and ornithologists, we curate \dataset, a diverse dataset for the vision and learning communities, featuring real-world fracture types across ceramics, bones, eggshells, and lithics. Comprehensive experiments demonstrate that our approach consistently outperforms state-of-the-art methods on both synthetic and real-world datasets, achieving 82.87% lower rotation error and 25.15% higher part accuracy. This work sheds light on how training on synthetic data can advance real-world 3D puzzle solving, showcasing strong generalization across unseen object shapes and diverse fracture types.
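The abstract mentions flow matching for 6-DoF alignment and a "one-step preassembly" at inference time. As a minimal, illustrative sketch only (not the paper's actual model), the idea behind one-step inference can be shown with a toy flow-matching setup: a learned velocity field is integrated from t=0 to t=1 to transport an initial pose estimate to the target pose, and for straight (rectified-flow style) probability paths a single Euler step already lands on the target. The `toy_velocity` function below is a hand-coded stand-in for a trained network, and the 6-vector pose parameterization is an assumption for illustration.

```python
import numpy as np

def toy_velocity(x, t):
    # Hypothetical stand-in for a learned velocity field. For straight
    # paths x_t = (1 - t) * x0 + t * x1, the ideal velocity pointing
    # from the current state to the target is (x1 - x_t) / (1 - t).
    target = np.array([0.1, 0.2, 0.3, 0.0, 0.0, 0.5])  # toy 6-DoF pose
    return (target - x) / max(1.0 - t, 1e-8)

def integrate(x0, steps):
    # Euler integration of dx/dt = v(x, t) from t = 0 to t = 1.
    x, dt = x0.copy(), 1.0 / steps
    for i in range(steps):
        x = x + dt * toy_velocity(x, i * dt)
    return x

# With straight paths, one Euler step recovers the same pose as many steps,
# which is the intuition behind doing a cheap one-step pass first.
one_step = integrate(np.zeros(6), steps=1)
multi_step = integrate(np.zeros(6), steps=8)
```

For this toy field both calls converge to the same target pose; in practice a real learned field is only approximately straight, so a one-step pass serves as a coarse preassembly that later refinement can build on.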

Subject: ICCV.2025 - Poster