Recent advances in deep learning have enabled highly accurate six-degree-of-freedom (6DoF) object pose estimation, leading to its widespread adoption in real-world applications such as robotics, augmented reality, virtual reality, and autonomous systems. However, backdoor attacks pose a major security risk to deep learning models: by injecting malicious triggers into the training data, an attacker can cause a model to perform normally on benign inputs yet behave incorrectly whenever the trigger is present. While most research on backdoor attacks has focused on 2D vision tasks, their impact on 6DoF pose estimation remains largely unexplored. Moreover, unlike traditional backdoors that merely flip the object class, a backdoor against 6DoF pose estimation must additionally control continuous pose parameters such as translation and rotation, so existing 2D backdoor attack methods are not directly applicable to this setting. To address this gap, we propose 6DAttack, a novel backdoor attack framework that exposes vulnerabilities in 6DoF pose estimation. 6DAttack uses synthetic and real 3D objects of varying shapes as triggers and assigns target poses to induce controlled erroneous pose outputs while preserving normal behavior on clean inputs. We evaluate the attack on multiple models (including PVNet, DenseFusion, and PoseDiffusion) and datasets (including LINEMOD, YCB-Video, and CO3D). Experimental results show that 6DAttack achieves extremely high attack success rates (ASRs) without compromising performance on legitimate tasks: across various models and objects, backdoored models reach up to 100% ADD accuracy on clean data while also achieving 100% ASR under trigger conditions. The controlled erroneous pose outputs are likewise highly accurate, with triggered samples reaching 97.70% ADD-P.
These results demonstrate that the backdoor can be reliably implanted and activated, achieving a high ASR under trigger conditions while having a negligible impact on benign data. Furthermore, we evaluate a representative defense and show that it is ineffective against 6DAttack. Overall, our findings reveal a potentially serious and previously underexplored threat to modern 6DoF pose estimation models.