Recovering the geometry and materials of objects from a single image is challenging due to its under-constrained nature. In this paper, we present **Neural LightRig**, a novel framework that boosts intrinsic estimation by leveraging auxiliary multi-lighting conditions from 2D diffusion priors. Specifically, **1)** we first leverage illumination priors from large-scale diffusion models to build our *multi-light diffusion model* on a synthetic relighting dataset with dedicated designs. This diffusion model generates multiple consistent images, each illuminated by a point light source from a different direction. **2)** Using these images under varied lighting to reduce estimation uncertainty, we train a *large G-buffer model* with a U-Net backbone to accurately predict surface normals and PBR materials. Extensive experiments validate that our approach significantly outperforms state-of-the-art methods, enabling accurate surface normal and PBR material estimation with vivid relighting effects. Our code and dataset will be made publicly available.
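The abstract describes a two-stage pipeline: a multi-light diffusion model first synthesizes consistently relit views, which are then stacked with the input image and fed to a U-Net-based G-buffer predictor. The sketch below illustrates this data flow in PyTorch under stated assumptions; the module names (`MultiLightDiffusion` interface, `GBufferUNet`), the number of lights, and the output channel layout are illustrative guesses, not the authors' released code.

```python
# Minimal sketch of the two-stage pipeline, assuming a pretrained multi-light
# diffusion model with the (hypothetical) call signature shown below.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GBufferUNet(nn.Module):
    """Toy stand-in for the large G-buffer model: maps the input image plus
    N relit images to per-pixel surface normals and PBR material maps."""

    def __init__(self, num_lights: int = 9):
        super().__init__()
        in_ch = 3 * (1 + num_lights)          # input image + N relit images
        out_ch = 3 + 3 + 1 + 1                # normal, albedo, roughness, metallic
        self.net = nn.Sequential(             # placeholder for a real U-Net backbone
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        g = self.net(x)
        normal = F.normalize(g[:, :3], dim=1)     # unit-length surface normals
        albedo = torch.sigmoid(g[:, 3:6])
        roughness = torch.sigmoid(g[:, 6:7])
        metallic = torch.sigmoid(g[:, 7:8])
        return normal, albedo, roughness, metallic


def estimate_gbuffer(image, multi_light_diffusion, gbuffer_model, num_lights=9):
    """Stage 1: sample N consistently relit images from the multi-light diffusion
    model (assumed to return a (B, N, 3, H, W) tensor); stage 2: predict normals
    and materials from the stacked images."""
    with torch.no_grad():
        relit = multi_light_diffusion(image, num_lights=num_lights)
    stacked = torch.cat([image, relit.flatten(1, 2)], dim=1)  # (B, 3*(1+N), H, W)
    return gbuffer_model(stacked)
```

Conditioning the G-buffer network on several relit views in this way is what the abstract refers to as reducing estimation uncertainty: shading variation across light directions disambiguates normals and reflectance that a single image cannot.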