Spartalis_LoTUS_Large-Scale_Machine_Unlearning_with_a_Taste_of_Uncertainty@CVPR2025@CVF

Total: 1

#1 LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty [PDF]

Authors: Christoforos N. Spartalis, Theodoros Semertzidis, Efstratios Gavves, Petros Daras

This paper presents LoTUS, a novel Machine Unlearning (MU) method that eliminates the influence of training samples from pre-trained models. LoTUS smooths the model's prediction probabilities, mitigating the overconfidence that stems from data memorization, up to an information-theoretic bound. We evaluate LoTUS on Transformer and ResNet18 models, against seven baseline methods, on four public datasets. Beyond established MU benchmarks, we evaluate unlearning on a large-scale dataset (ImageNet1k) where retraining is impractical, simulating real-world conditions. Moreover, we introduce the novel Retrain-Free Jensen-Shannon Divergence (RF-JSD) metric to enable evaluation under such real-world conditions. Experimental results show that LoTUS outperforms state-of-the-art methods in terms of both efficiency and effectiveness. We will share code.
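
The abstract does not spell out how RF-JSD is computed; as a rough illustration of the Jensen-Shannon divergence underlying it, the sketch below compares two sets of prediction probabilities and averages the per-sample divergence. All names here (`jensen_shannon_divergence`, `reference_probs`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """Per-row Jensen-Shannon divergence between two discrete distributions.

    p, q: arrays of shape (n_samples, n_classes) whose rows sum to 1.
    """
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)                       # mixture distribution
    kl_pm = np.sum(p * np.log(p / m), axis=-1)
    kl_qm = np.sum(q * np.log(q / m), axis=-1)
    return 0.5 * (kl_pm + kl_qm)

# Hypothetical usage: softmax outputs of the unlearned model on the forget set
# versus a retrain-free reference distribution, averaged over samples.
unlearned_probs = np.array([[0.7, 0.2, 0.1],
                            [0.4, 0.4, 0.2]])
reference_probs = np.array([[0.5, 0.3, 0.2],
                            [0.3, 0.5, 0.2]])
print(f"mean JSD: {jensen_shannon_divergence(unlearned_probs, reference_probs).mean():.4f}")
```

A lower mean divergence would indicate that the unlearned model's predictions are closer to the reference; the key point of a retrain-free metric is that this reference does not require training a model from scratch without the forget set.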

Subject: CVPR.2025 - Poster