Lin_Magic3D_High-Resolution_Text-to-3D_Content_Creation@CVPR2023@CVF

Total: 1

#1 Magic3D: High-Resolution Text-to-3D Content Creation

Authors: Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin

Recently, DreamFusion demonstrated the utility of a pretrained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results. However, the method has two inherent limitations: 1) optimization of the NeRF representation is extremely slow, and 2) NeRF is supervised with images at a low resolution (64×64), leading to low-quality 3D models and long wait times. In this paper, we address these limitations with a two-stage coarse-to-fine optimization framework. In the first stage, we use a sparse 3D neural representation to accelerate optimization while relying on a low-resolution diffusion prior. In the second stage, we use a textured mesh model initialized from the coarse neural representation, allowing us to perform optimization with a very efficient differentiable renderer that produces high-resolution images. Our method, dubbed Magic3D, can create a 3D mesh model in 40 minutes, 2× faster than DreamFusion (reportedly taking 1.5 hours on average), while achieving 8× higher resolution. User studies show that 61.7% of raters prefer our approach over DreamFusion. Together with image-conditioned generation capabilities, we provide users with new ways to control 3D synthesis, opening up new avenues for various creative applications.
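
To make the two-stage coarse-to-fine structure concrete, the following is a minimal PyTorch sketch of such a pipeline. It is not the authors' implementation: CoarseField, TexturedMesh, diffusion_grad, and sds_step are hypothetical placeholders for the sparse neural field of stage 1, the textured mesh of stage 2, the score-distillation-style gradient from a frozen text-to-image diffusion prior, and one optimization step; only the overall flow (low-resolution optimization of a neural field, then high-resolution refinement of a mesh initialized from it) reflects the abstract.

```python
# Minimal sketch of a two-stage coarse-to-fine text-to-3D loop (hypothetical,
# NOT the Magic3D code). All classes and functions below are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseField(nn.Module):
    """Placeholder for the sparse 3D neural representation of stage 1."""

    def __init__(self, feature_dim: int = 8, grid: int = 16):
        super().__init__()
        # Dense feature grid standing in for a sparse scene representation.
        self.features = nn.Parameter(0.01 * torch.randn(feature_dim, grid, grid, grid))

    def render(self, resolution: int = 64) -> torch.Tensor:
        # Placeholder "volume rendering": collapse the grid into an image.
        img = self.features.mean(dim=(0, 1))[None, None]          # (1, 1, G, G)
        img = F.interpolate(img, size=(resolution, resolution),
                            mode="bilinear", align_corners=False)
        return img.repeat(1, 3, 1, 1)                             # (1, 3, H, W)


class TexturedMesh(nn.Module):
    """Placeholder for the textured mesh initialized from the coarse model."""

    def __init__(self, coarse: CoarseField, resolution: int = 512):
        super().__init__()
        with torch.no_grad():
            self.texture = nn.Parameter(coarse.render(resolution).clone())

    def render(self) -> torch.Tensor:
        # Placeholder differentiable rasterization at high resolution.
        return torch.sigmoid(self.texture)


def diffusion_grad(image: torch.Tensor, prompt: str) -> torch.Tensor:
    """Placeholder for the gradient a frozen diffusion prior would supply
    for `image` given the text `prompt` (random noise here)."""
    return 0.01 * torch.randn_like(image)


def sds_step(model: nn.Module, prompt: str, opt: torch.optim.Optimizer) -> None:
    """One score-distillation-style update: render, query the prior for a
    gradient direction, and back-propagate it into the 3D representation."""
    opt.zero_grad()
    img = model.render()
    img.backward(gradient=diffusion_grad(img, prompt))
    opt.step()


if __name__ == "__main__":
    prompt = "a stone castle on a hill"

    coarse = CoarseField()                                  # stage 1: 64x64 renders
    opt = torch.optim.Adam(coarse.parameters(), lr=1e-2)
    for _ in range(100):
        sds_step(coarse, prompt, opt)

    fine = TexturedMesh(coarse)                             # stage 2: 512x512 renders
    opt = torch.optim.Adam(fine.parameters(), lr=1e-2)
    for _ in range(100):
        sds_step(fine, prompt, opt)
```

The 64 and 512 render resolutions in the sketch mirror the low-resolution supervision and the 8× higher resolution claimed in the abstract; everything else (grid sizes, step counts, learning rates) is illustrative only.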