MinCD-PnP: Learning 2D-3D Correspondences with Approximate Blind PnP (ICCV 2025, CVF)


#1 MinCD-PnP: Learning 2D-3D Correspondences with Approximate Blind PnP

Authors: Pei An, Jiaqi Yang, Muyao Peng, You Yang, Qiong Liu, Xiaolin Wu, Liangliang Nan

Image-to-point-cloud (I2P) registration is a fundamental problem in computer vision, focusing on establishing 2D-3D correspondences between an image and a point cloud. Recently, differentiable perspective-n-point (PnP) solvers have been widely used to supervise I2P registration networks by enforcing projective constraints on 2D-3D correspondences. However, differentiable PnP is highly sensitive to noise and outliers in the predicted correspondences, which hinders the effectiveness of correspondence learning. Inspired by the robustness of blind PnP to noise and outliers in correspondences, we propose a correspondence learning approach based on approximate blind PnP. To mitigate the high computational cost of blind PnP, we reformulate it as a more tractable problem: minimizing the Chamfer distance between learned 2D and 3D keypoints, referred to as MinCD-PnP. To solve MinCD-PnP effectively, we introduce a lightweight multi-task learning module, MinCD-Net, which can be easily integrated into existing I2P registration architectures. Extensive experiments on the 7-Scenes, RGBD-V2, ScanNet, self-collected, and KITTI datasets demonstrate that MinCD-Net outperforms state-of-the-art methods and achieves a higher inlier ratio and registration recall in both cross-scene and cross-dataset settings. Source code is available at: https://github.com/anpei96/mincd-pnp-demo.
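The core reformulation in the abstract, replacing the full blind-PnP search with a Chamfer-distance objective between 2D keypoints and projected 3D keypoints, can be sketched in a few lines of NumPy. This is only an illustrative sketch of the objective, not the authors' implementation: the function names, the pinhole projection model, and the symmetric min-distance form of the Chamfer term are my own assumptions; the actual MinCD-PnP loss and the MinCD-Net module are defined in the paper and repository.

```python
import numpy as np

def project_points(pts3d, K):
    """Pinhole projection of (N, 3) camera-frame 3D points with
    intrinsics K (3, 3) to (N, 2) pixel coordinates.
    Assumes all points have positive depth."""
    uvw = pts3d @ K.T              # homogeneous image coordinates (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]

def chamfer_distance_2d(p, q):
    """Symmetric Chamfer distance between two 2D point sets
    p (N, 2) and q (M, 2): mean nearest-neighbor distance in
    both directions."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Hypothetical usage: compare learned 2D keypoints against the
# projections of learned 3D keypoints under candidate intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pts3d = np.array([[0.0, 0.0, 2.0],
                  [0.5, 0.2, 3.0],
                  [-0.3, 0.1, 4.0]])
uv = project_points(pts3d, K)
loss = chamfer_distance_2d(uv, uv)   # 0.0 for a perfect match
```

Unlike a correspondence-based PnP loss, this objective needs no explicit 2D-3D matching: each point is compared only to its nearest neighbor in the other set, which is what makes the formulation tolerant of unordered, partially outlying keypoint sets.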

Subject: ICCV.2025 - Poster