T6d5IYr8PJ@OpenReview

Total: 1

#1 Certifying Deep Network Risks and Individual Predictions with PAC-Bayes Loss via Localized Priors

Author: Wen Dong

As machine learning increasingly relies on large, opaque foundation models powering generative and agentic AI, deploying these systems in safety-critical settings demands rigorous guarantees on their generalization beyond the training data. PAC-Bayes theory offers principled certificates linking training performance to generalization risk, yet existing approaches are rarely practical: simple theoretical priors yield vacuous bounds, while data-dependent priors trained separately are computationally costly or introduce bias. To bridge this gap, we propose a localized PAC-Bayes prior: a structured, computationally efficient prior softly concentrated near the parameters favored during standard training, enabling effective exploration without costly data splits. By integrating this localized prior directly into the standard training loss, we produce practically tight generalization certificates without workflow disruption. Theoretically, under standard neural tangent kernel assumptions, our bound shrinks as networks widen and datasets grow, becoming negligible in practical regimes. Empirically, we certify generalization across image classification, NLP fine-tuning, and semantic segmentation, typically within three percentage points of the test error at ImageNet scale, while providing rigorous guarantees for individual predictions, selective rejection, and robustness.
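
For context, the sketch below illustrates the general shape of such a certificate, not the paper's actual construction: a McAllester-style PAC-Bayes bound evaluated for a diagonal-Gaussian posterior against a prior concentrated near the trained weights. All function names, shapes, and constants here are hypothetical assumptions for illustration only.

```python
import math
import torch

def pac_bayes_certificate(train_error, kl_divergence, n, delta=0.05):
    """McAllester-style PAC-Bayes bound: with probability >= 1 - delta over the
    sample of size n, the posterior's expected risk is at most
    train_error + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2*n))."""
    complexity = (kl_divergence + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n)
    return train_error + math.sqrt(complexity)

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL(N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2))) for diagonal Gaussians."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * torch.sum(
        var_q / var_p + (mu_q - mu_p) ** 2 / var_p - 1.0 + torch.log(var_p / var_q)
    )

# Hypothetical usage: a posterior centered at the trained weights w_hat and a
# prior localized near the same region of parameter space, so the KL term stays
# small without a separate data split for prior training (illustrative values).
w_hat = torch.randn(10_000)                      # stand-in for flattened trained weights
mu_q, sigma_q = w_hat, torch.full_like(w_hat, 0.05)
mu_p, sigma_p = 0.9 * w_hat, torch.full_like(w_hat, 0.10)

kl = gaussian_kl(mu_q, sigma_q, mu_p, sigma_p)
bound = pac_bayes_certificate(train_error=0.08, kl_divergence=kl.item(), n=50_000)
print(f"certified risk <= {bound:.3f}")
```

The key quantity is the KL term: the closer the prior sits to the region the posterior ends up in, the tighter the certificate, which is the intuition behind localizing the prior near parameters favored during standard training.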

Subject: NeurIPS.2025 - Poster