
#1 A Tiny Change, A Giant Leap: Long-Tailed Class-Incremental Learning via Geometric Prototype Alignment

Authors: Xinyi Lai, Luojun Lin, Weijie Chen, Yuanlong Yu

Long-Tailed Class-Incremental Learning (LT-CIL) remains a fundamental challenge due to biased gradient updates caused by highly imbalanced data distributions and the inherent stability-plasticity dilemma. These factors jointly degrade tail-class performance and exacerbate catastrophic forgetting. To tackle these issues, we propose Geometric Prototype Alignment (GPA), a model-agnostic approach that calibrates classifier learning dynamics via geometric feature-space alignment. GPA initializes classifier weights by projecting frozen class prototypes onto a unit hypersphere, thereby disentangling magnitude imbalance from angular discriminability. During incremental updates, a Dynamic Anchoring mechanism adaptively adjusts classifier weights to preserve geometric consistency, effectively balancing plasticity for new classes with stability for previously acquired knowledge. Integrated into state-of-the-art CIL frameworks such as LUCIR and DualPrompt, GPA yields substantial gains, improving average incremental accuracy by 6.11% and reducing forgetting rates by 6.38% on CIFAR100-LT. Theoretical analysis further demonstrates that GPA accelerates convergence by 2.7× and produces decision boundaries approaching Fisher-optimality. Our implementation is available at https://github.com/laixinyi023/Geometric-Prototype-Alignment.
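
The abstract only sketches GPA at a high level. Below is a minimal, hypothetical PyTorch illustration of the two ingredients it names: initializing classifier weights from class prototypes projected onto the unit hypersphere, and a dynamic-anchoring style update that pulls weights back toward their normalized prototypes during incremental training. The function names, the anchoring coefficient `alpha`, and the interpolation rule are assumptions made for illustration, not the authors' exact formulation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F


def init_classifier_from_prototypes(classifier: torch.nn.Linear,
                                    prototypes: torch.Tensor) -> None:
    """Set classifier weights to class prototypes projected onto the unit hypersphere.

    `prototypes` is a (num_classes, feat_dim) tensor of frozen class-mean features.
    L2-normalizing each prototype removes magnitude imbalance between head and tail
    classes while keeping the angular (directional) information that separates them.
    """
    with torch.no_grad():
        classifier.weight.copy_(F.normalize(prototypes, dim=1))
        if classifier.bias is not None:
            classifier.bias.zero_()


def dynamic_anchoring_step(classifier: torch.nn.Linear,
                           prototypes: torch.Tensor,
                           alpha: float = 0.1) -> None:
    """Hypothetical anchoring update (assumed form): after a gradient step, pull each
    weight vector part-way back toward its unit-norm prototype so the geometric
    structure of previously learned classes is preserved while new classes adapt.
    """
    with torch.no_grad():
        anchors = F.normalize(prototypes, dim=1)
        blended = (1.0 - alpha) * classifier.weight + alpha * anchors
        classifier.weight.copy_(F.normalize(blended, dim=1))


if __name__ == "__main__":
    # Toy example: 10 classes, 64-dim features, random stand-in prototypes.
    feat_dim, num_classes = 64, 10
    classifier = torch.nn.Linear(feat_dim, num_classes, bias=False)
    prototypes = torch.randn(num_classes, feat_dim)

    init_classifier_from_prototypes(classifier, prototypes)
    # ... train on new-class data, then periodically re-anchor:
    dynamic_anchoring_step(classifier, prototypes, alpha=0.1)
```

In this sketch the anchoring strength `alpha` trades plasticity (small values let weights follow new-class gradients) against stability (large values keep weights close to the frozen prototypes); how the paper schedules or adapts this trade-off is not specified in the abstract.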

Subject: ICCV.2025 - Poster