
Total: 1

#1 MAGE: Single Image to Material-Aware 3D via the Multi-View G-Buffer Estimation Model [PDF]

Authors: Haoyuan Wang, Zhenwei Wang, Xiaoxiao Long, Cheng Lin, Gerhard Hancke, Rynson W.H. Lau

With advances in deep learning models and the availability of large-scale 3D datasets, single-view 3D reconstruction has recently seen significant progress. However, existing methods often fail to recover physically based material properties from a single image, limiting their applicability in complex scenarios. This paper presents MAGE, a novel approach for generating 3D geometry with realistic, decomposed material properties from a single input image. Inspired by the deferred rendering pipelines of traditional computer graphics, our method introduces a multi-view G-buffer estimation model that predicts G-buffers for multiple views as multi-domain images, including XYZ coordinates, normals, albedo, roughness, and metallic properties, from a single-view RGB image. Furthermore, to address the inherent ambiguity and inconsistency of generating these G-buffers simultaneously, we formulate a deterministic network from pretrained diffusion models and propose a lighting response loss that enforces cross-domain consistency using physically based rendering (PBR) principles. We also construct a large-scale synthetic dataset with rich material diversity for training our model. Experimental results demonstrate the effectiveness of our method in producing high-quality 3D meshes with rich material properties. We will release the dataset and code.
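
To make the lighting response idea concrete, below is a minimal PyTorch sketch of how such a loss could couple the G-buffer domains: both predicted and reference G-buffers are shaded under the same directional light with a simplified Cook-Torrance BRDF (GGX distribution, Schlick Fresnel, Smith geometry), so an error in any channel (normal, albedo, roughness, or metallic) surfaces as a shading error. The function names (`render_gbuffer`, `lighting_response_loss`) and the specific BRDF terms are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a PBR lighting-response consistency loss.
# Assumes a single directional light and a simplified Cook-Torrance BRDF;
# the paper's actual formulation may differ.
import torch
import torch.nn.functional as F

def render_gbuffer(albedo, normal, roughness, metallic, light_dir, view_dir):
    """Shade per-pixel G-buffer channels under one directional light.

    albedo:    (B, 3, H, W) base color in [0, 1]
    normal:    (B, 3, H, W) world-space normals
    roughness: (B, 1, H, W) in (0, 1]
    metallic:  (B, 1, H, W) in [0, 1]
    light_dir, view_dir: (3,) direction vectors
    """
    n = F.normalize(normal, dim=1)
    l = F.normalize(light_dir, dim=0).view(1, 3, 1, 1)
    v = F.normalize(view_dir, dim=0).view(1, 3, 1, 1)
    h = F.normalize(l + v, dim=1)  # half vector

    n_dot_l = (n * l).sum(1, keepdim=True).clamp(min=1e-4)
    n_dot_v = (n * v).sum(1, keepdim=True).clamp(min=1e-4)
    n_dot_h = (n * h).sum(1, keepdim=True).clamp(min=1e-4)
    v_dot_h = (v * h).sum(1, keepdim=True).clamp(min=1e-4)

    # GGX normal distribution term
    alpha = roughness ** 2
    denom = n_dot_h ** 2 * (alpha ** 2 - 1.0) + 1.0
    D = alpha ** 2 / (torch.pi * denom ** 2 + 1e-7)

    # Schlick Fresnel with metallic-dependent F0
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic
    F_term = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

    # Smith geometry term (Schlick-GGX approximation)
    k = (roughness + 1.0) ** 2 / 8.0
    G = (n_dot_v / (n_dot_v * (1 - k) + k)) * \
        (n_dot_l / (n_dot_l * (1 - k) + k))

    specular = D * F_term * G / (4.0 * n_dot_v * n_dot_l + 1e-7)
    diffuse = (1.0 - metallic) * albedo / torch.pi
    return (diffuse + specular) * n_dot_l

def lighting_response_loss(pred, target, light_dir, view_dir):
    """L1 difference between renderings of predicted and reference G-buffers
    under the same light, coupling errors across all material channels."""
    img_pred = render_gbuffer(*pred, light_dir, view_dir)
    img_ref = render_gbuffer(*target, light_dir, view_dir)
    return (img_pred - img_ref).abs().mean()
```

In such a setup, sampling several light directions per training step would penalize G-buffer combinations that only agree under one specific illumination, which is one plausible way a loss of this kind could enforce the cross-domain consistency the abstract describes.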

Subject: CVPR.2025 - Poster