Semantic segmentation neural networks (SSNs) are increasingly essential in high-stakes fields such as medical imaging, autonomous driving, and environmental monitoring, where robustness to input uncertainties and adversarial examples is crucial for ensuring safety and reliability. However, traditional probabilistic verification methods struggle to scale with the size and depth of modern SSNs, especially when dealing with their high-dimensional, structured inputs and outputs. As the output dimension increases, these methods tend to become overly conservative, resulting in unnecessarily restrictive safety guarantees. In this work, we propose a probabilistic, data-driven verification algorithm that is architecture-agnostic and scalable, capable of handling the high-dimensional outputs of SSNs without introducing unnecessarily conservative guarantees. We leverage efficient sampling-based reachability analysis to explore the space of possible outputs while maintaining computational feasibility. Our methodology is based on Conformal Inference (CI), which is known for its high data efficiency. However, CI tends to be overly conservative in high-dimensional spaces. To address this, we introduce techniques that mitigate these sources of conservatism, enabling us to provide tighter yet provable guarantees for SSNs. We validate our approach on large segmentation models applied to the CamVid, OCTA-500, Lung\_Segmentation, and Cityscapes datasets, showing that it offers reliable safety guarantees while reducing the conservatism inherent in traditional methods. We also provide a public GitHub repository to support reproducibility.
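For context, the distribution-free guarantee underlying standard split Conformal Inference takes the following form; this is a generic sketch, and the symbols $f$, $s$, $\hat{q}_{1-\alpha}$, and $\alpha$ are illustrative notation, not necessarily the formulation used in the paper. Given $n$ exchangeable calibration pairs $(X_i, Y_i)$ and a nonconformity score $s$, let $\hat{q}_{1-\alpha}$ be the $\lceil (n+1)(1-\alpha) \rceil$-th smallest value among $s(X_1, Y_1), \dots, s(X_n, Y_n)$; then, for a fresh test pair,
\begin{equation*}
	\mathbb{P}\big( s(X_{n+1}, Y_{n+1}) \le \hat{q}_{1-\alpha} \big) \ge 1 - \alpha .
\end{equation*}
For an SSN whose output $f(X)$ has one entry per pixel $p$, a worst-case score such as $s(X, Y) = \max_{p} \lvert f(X)_p - Y_p \rvert$ forces $\hat{q}_{1-\alpha}$ to track the single most extreme pixel, which illustrates the kind of high-dimensional conservatism this work aims to mitigate.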