Ma_Forming_Auxiliary_High-confident_Instance-level_Loss_to_Promote_Learning_from_Label@CVPR2025@CVF


#1 Forming Auxiliary High-confident Instance-level Loss to Promote Learning from Label Proportions

Authors: Tianhao Ma, Han Chen, Juncheng Hu, Yungang Zhu, Ximing Li

Learning from label proportions (LLP) is a challenging weakly supervised learning task that aims to train a classifier from bags of instances and the class proportions within each bag, rather than from per-instance labels. Beyond the traditional bag-level loss, the mainstream LLP methodology incorporates an auxiliary instance-level loss with pseudo-labels formed from the model's predictions. Unfortunately, we empirically observe that these pseudo-labels are often inaccurate and even meaningless, especially in scenarios with large bag sizes, which hurts classifier induction. To alleviate this problem, we propose a novel LLP method, namely Learning Label Proportions with Auxiliary High-confident Instance-level Loss (L^2P-AHIL). Specifically, we propose a dual entropy-based weight (DEW) method to adaptively measure the confidence of pseudo-labels. It simultaneously emphasizes accurate predictions at the bag level and avoids overly smoothed predictions, which tend to be meaningless. We then form a high-confident instance-level loss with DEW and jointly optimize it with the bag-level loss in a self-training manner. Experimental results on benchmark datasets show that L^2P-AHIL surpasses existing baseline methods, and the performance gain becomes more significant as the bag size increases.
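The abstract describes two ingredients: a dual entropy-based weight (DEW) that scores pseudo-label confidence using both instance-level prediction entropy and bag-level proportion agreement, and an instance-level loss weighted by that score. The sketch below is a minimal illustration of that idea, not the paper's actual formulation: the specific combination of the two entropy terms (here, a product of one minus the normalized instance entropy and an exponentiated bag-level proportion cross-entropy) and all function names are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def normalized_entropy(p):
    # Shannon entropy of each row, scaled to [0, 1] by log(num_classes)
    k = p.shape[-1]
    h = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return h / np.log(k)

def dual_entropy_weights(probs, true_props):
    """Hypothetical sketch of a dual entropy-based weight (DEW).

    probs:      (B, K) predicted class probabilities for one bag's instances
    true_props: (K,)   known class proportions of the bag

    Instance term: confident (low-entropy) predictions weigh near 1,
    near-uniform ("smoothed") predictions weigh near 0.
    Bag term: the weight shrinks when the predicted proportions diverge
    from the true ones (cross-entropy between true and mean predictions).
    """
    inst_w = 1.0 - normalized_entropy(probs)                 # (B,)
    pred_props = probs.mean(axis=0)                          # (K,)
    bag_ce = -(true_props * np.log(pred_props + 1e-12)).sum()
    bag_w = np.exp(-bag_ce)                                  # in (0, 1]
    return inst_w * bag_w

def weighted_instance_loss(probs, weights):
    # Cross-entropy against hard pseudo-labels (argmax), scaled by DEW,
    # to be optimized jointly with the usual bag-level proportion loss.
    pseudo = probs.argmax(axis=-1)
    ce = -np.log(probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    return float((weights * ce).mean())
```

Under this sketch, an instance with a near-uniform prediction receives a weight close to zero and so contributes almost nothing to the auxiliary loss, which matches the abstract's goal of down-weighting meaningless pseudo-labels.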

Subject: CVPR.2025 - Poster