Chen_Hyperdimensional_Uncertainty_Quantification_for_Multimodal_Uncertainty_Fusion_in_Autonomous_Vehicles@CVPR2025@CVF

Total: 1

#1 Hyperdimensional Uncertainty Quantification for Multimodal Uncertainty Fusion in Autonomous Vehicles Perception

Authors: Luke Chen, Junyao Wang, Trier Mortlock, Pramod Khargonekar, Mohammad Abdullah Al Faruque

Uncertainty Quantification (UQ) is crucial for ensuring the reliability of machine learning models deployed in real-world autonomous systems. However, existing approaches typically quantify task-level output prediction uncertainty without considering epistemic uncertainty at the multimodal feature fusion level, leading to sub-optimal outcomes. Additionally, popular uncertainty quantification methods, e.g., Bayesian approximations, remain challenging to deploy in practice due to high computational costs in training and inference. In this paper, we propose $HyperDUM$, a novel deterministic uncertainty method (DUM) that efficiently quantifies feature-level epistemic uncertainty by leveraging hyperdimensional computing. Our method captures channel and spatial uncertainties through channel-wise and patch-wise projection and bundling techniques, respectively. Multimodal sensor features are then adaptively weighted to mitigate uncertainty propagation and improve feature fusion. Our evaluations show that $HyperDUM$ on average outperforms state-of-the-art (SOTA) algorithms by up to 2.01%/1.27% in 3D object detection and by up to 1.29% over baselines in semantic segmentation tasks under various types of uncertainties. Notably, $HyperDUM$ requires $2.36\times$ fewer floating point operations and up to $38.30\times$ fewer parameters than SOTA methods, providing an efficient solution for real-world autonomous systems.
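To make the projection-and-bundling idea in the abstract concrete, the sketch below shows one way a hyperdimensional uncertainty score could be computed for a single modality's features and turned into a fusion weight. It assumes a fixed random bipolar projection, sign-based bundling into a prototype hypervector, a cosine-similarity readout, and an exponential weighting; all names, dimensions, and the random data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def random_projection(dim_in, dim_hv, seed=0):
    # Fixed random bipolar projection matrix (assumed encoding; the paper's
    # exact channel/patch-wise encoding scheme may differ).
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(dim_in, dim_hv))

def encode(features, proj):
    # Project features into hyperdimensional space and binarize to bipolar values.
    return np.sign(features @ proj)

def bundle(hypervectors):
    # Bundle a set of hypervectors into one prototype via elementwise majority
    # (sign of the sum).
    return np.sign(hypervectors.sum(axis=0))

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# --- toy usage: channel-level uncertainty for one sensor modality ---
C, D_hv = 64, 4096                       # channels, hypervector dimensionality (illustrative)
proj = random_projection(C, D_hv)

# Bundle training-time channel descriptors into an in-distribution prototype.
train_feats = np.random.randn(500, C)    # placeholder feature statistics
prototype = bundle(encode(train_feats, proj))

# At test time, low similarity to the prototype is read as high epistemic uncertainty.
test_feat = np.random.randn(1, C)
sim = cosine_sim(encode(test_feat, proj)[0], prototype)
uncertainty = 1.0 - sim                  # simple proxy score

# Uncertainty-aware fusion weight for this modality (would be normalized across
# modalities in a multimodal fusion stage).
weight = np.exp(-uncertainty)
print(f"similarity={sim:.3f}, uncertainty={uncertainty:.3f}, fusion weight={weight:.3f}")
```

In this reading, each modality's feature stream gets its own score, and modalities with lower feature-level epistemic uncertainty contribute more to the fused representation, which is the adaptive weighting the abstract describes.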

Subject: CVPR.2025 - Poster