#1 Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images

Authors: Tianhao Wu, Chuanxia Zheng, Frank Guan, Andrea Vedaldi, Tat-Jen Cham

Most existing image-to-3D models assume that objects are fully visible, ignoring occlusions that commonly occur in real-world scenarios. In this paper, we introduce Amodal3R, a conditional image-to-3D model designed to reconstruct plausible 3D geometry and appearance from partial observations. We extend a "foundation" 3D generator by introducing a visible-mask-weighted attention mechanism and an occlusion-aware attention layer that explicitly leverage visible and occlusion priors to guide the reconstruction process. We demonstrate that, by training solely on synthetic data, Amodal3R learns to recover full 3D objects even in the presence of occlusions in real scenes. It substantially outperforms state-of-the-art methods that independently perform 2D amodal completion followed by 3D reconstruction, thereby establishing a new benchmark for occlusion-aware 3D reconstruction.
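
The abstract does not specify the exact formulation, so the following is only a minimal PyTorch sketch of how a visibility mask could reweight cross-attention between the generator's latent tokens and image-conditioning tokens; the function name, tensor shapes, and the log-bias trick are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the Amodal3R code): conditioning tokens from
# visible image regions are up-weighted and occluded regions down-weighted
# by biasing the attention logits before the softmax.
import torch
import torch.nn.functional as F


def mask_weighted_cross_attention(queries, keys, values, visibility):
    """
    queries:     (B, Nq, D) latent tokens of the 3D generator
    keys/values: (B, Nk, D) image-conditioning tokens
    visibility:  (B, Nk)    per-token visibility in [0, 1]
                            (1 = fully visible, 0 = occluded)
    """
    d = queries.shape[-1]
    logits = queries @ keys.transpose(-2, -1) / d ** 0.5       # (B, Nq, Nk)
    # Add log-visibility as a bias so occluded tokens receive little attention.
    logits = logits + torch.log(visibility.clamp_min(1e-6)).unsqueeze(1)
    attn = F.softmax(logits, dim=-1)
    return attn @ values                                        # (B, Nq, D)


if __name__ == "__main__":
    B, Nq, Nk, D = 2, 16, 64, 32
    q = torch.randn(B, Nq, D)
    k = torch.randn(B, Nk, D)
    v = torch.randn(B, Nk, D)
    vis = torch.rand(B, Nk)          # e.g. a downsampled visibility mask
    out = mask_weighted_cross_attention(q, k, v, vis)
    print(out.shape)                 # torch.Size([2, 16, 32])
```

In this sketch, multiplying attention weights by visibility (here via a log-space bias) keeps the softmax normalized while letting fully occluded regions contribute essentially nothing to the reconstruction.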

Subject: ICCV.2025 - Poster