The safe deployment of artificial intelligence systems hinges on their ability to recognize and appropriately handle inputs that lie outside their training distribution. Out-of-Distribution (OOD) detection aims to provide this capability, yet most existing methods are developed under idealized assumptions that do not hold in the real world. This thesis challenges these assumptions by systematically addressing four key practical challenges: the semantic ambiguity of unlabeled data, the presence of domain shifts and class imbalances, the scarcity of labeled training data, and the need to operate on dynamic video streams rather than static images. The core of this research is a suite of four novel deep learning frameworks, each designed to overcome one of these specific limitations. These contributions advance OOD detection from a laboratory problem towards a robust and practical technology, essential for building trustworthy AI.