Reliable uncertainty quantification (UQ) is crucial for deploying deep learning models in safety-critical domains. Existing UQ methods often either rely on multi-pass inference, which increases computational cost, or restrict expressiveness by using only final-layer embeddings. In this work, we propose a lightweight evidential meta-model that leverages multi-layer feature fusion from a pretrained backbone, capturing both low-level features and high-level semantics to better estimate uncertainty. To further enhance epistemic fidelity, we integrate maximum weight-entropy (Max-WEnt) regularization, which encourages hypothesis diversity without altering the base network or adding test-time overhead. Experiments in two benchmark settings, medical (BACH, HAM10000, BreakHIS, DIV2K) and natural-image (ImageNet, SVHN, Fashion-MNIST, ImageNet-C), demonstrate consistent AUROC improvements for out-of-distribution detection over prior post-hoc UQ methods. Our findings show that combining multi-layer evidential modeling with Max-WEnt provides a robust, efficient, and practical framework for trustworthy AI in high-stakes applications. The meta-model adds only ~0.8M parameters and trains in under four hours on a single 48GB GPU, making it practical for real-world deployment.
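To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of an evidential meta-model that fuses pooled activations from several backbone layers and outputs Dirichlet evidence in the style of evidential deep learning. All names (`EvidentialMetaModel`, `feature_dims`, `hidden`) and the specific fusion scheme (global average pooling, per-layer linear projections, concatenation) are illustrative assumptions; the paper's architecture may differ.

```python
import torch
import torch.nn as nn


class EvidentialMetaModel(nn.Module):
    """Hypothetical sketch: fuse features from multiple layers of a frozen
    backbone and predict Dirichlet evidence for K classes."""

    def __init__(self, feature_dims, num_classes, hidden=128):
        super().__init__()
        # One small projection per tapped backbone layer (assumed design).
        self.projs = nn.ModuleList(nn.Linear(d, hidden) for d in feature_dims)
        self.head = nn.Linear(hidden * len(feature_dims), num_classes)

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) activations from the frozen backbone.
        pooled = [f.mean(dim=(2, 3)) for f in feats]  # global average pooling
        fused = torch.cat([p(z) for p, z in zip(self.projs, pooled)], dim=-1)
        evidence = torch.relu(self.head(fused))       # non-negative evidence
        alpha = evidence + 1.0                        # Dirichlet concentration
        probs = alpha / alpha.sum(-1, keepdim=True)   # predictive mean
        # Vacuity-style uncertainty: K / sum(alpha); high when evidence is low.
        uncertainty = alpha.size(-1) / alpha.sum(-1)
        return alpha, probs, uncertainty
```

Because only this small head is trained while the backbone stays frozen, a single forward pass yields both the prediction and an uncertainty score, which is consistent with the abstract's claim of no added test-time overhead.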