2025.emnlp-main.183@ACL

#1 OBLIVIATE: Robust and Practical Machine Unlearning for Large Language Models

Authors: Xiaoyu Xu, Minxin Du, Qingqing Ye, Haibo Hu

Large language models (LLMs) trained on extensive corpora risk memorizing sensitive, copyrighted, or toxic content. To address this, we propose OBLIVIATE, a robust unlearning framework that removes targeted data while preserving model utility. The framework follows a structured process: extracting target tokens, building retain sets, and fine-tuning with a tailored loss function that combines three components (masking, distillation, and world fact). Using low-rank adapters (LoRA) ensures efficiency without compromising unlearning quality. We conduct experiments on multiple datasets, including the Harry Potter series, WMDP, and TOFU, using a comprehensive suite of metrics: forget quality (via a new document-level memorization score), model utility, and fluency. The results demonstrate the framework's effectiveness in resisting membership inference attacks, minimizing the impact on retained data, and maintaining robustness across diverse scenarios.
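The abstract does not give the exact formulation, but the three-part loss it describes could be combined along the following lines. This is a minimal sketch, assuming hypothetical component definitions and illustrative weights (alpha, beta, gamma); none of the function names or hyperparameters below come from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a three-component unlearning loss in the spirit
# of the abstract (masking + distillation + world fact). All definitions
# here are assumptions, not the paper's actual formulation.

def masking_loss(logits, forget_token_ids):
    # Suppress probability mass assigned to extracted target tokens.
    log_probs = F.log_softmax(logits, dim=-1)             # (batch, seq, vocab)
    forget_probs = log_probs[..., forget_token_ids].exp() # (batch, seq, |forget|)
    return forget_probs.sum(dim=-1).mean()                # drive toward zero

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence to a frozen reference model on retain-set inputs,
    # keeping behavior on retained data close to the original model.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

def world_fact_loss(logits, labels):
    # Standard next-token cross-entropy on general world-knowledge text,
    # so unlearning does not erode broad factual ability.
    return F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))

def total_loss(student_logits, teacher_logits, fact_logits, fact_labels,
               forget_token_ids, alpha=1.0, beta=1.0, gamma=1.0):
    # Weighted sum of the three terms; alpha/beta/gamma are illustrative.
    return (alpha * masking_loss(student_logits, forget_token_ids)
            + beta * distillation_loss(student_logits, teacher_logits)
            + gamma * world_fact_loss(fact_logits, fact_labels))
```

In practice, the three terms would presumably be computed on different batches (forget data, retain sets, and world-fact text) within each fine-tuning step, with the LoRA adapters as the only trainable parameters.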

Subject: EMNLP.2025 - Main