TecJ926Vgn@OpenReview

Total: 1

#1 Differentially Private Federated Low Rank Adaptation Beyond Fixed-Matrix

Authors: Ming Wen, Jiaqi Zhu, Yuedong Xu, Yipeng Zhou, Dingding Han

Large language models (LLMs) typically require fine-tuning for domain-specific tasks, and LoRA offers a computationally efficient approach by training low-rank adaptors. LoRA is also communication-efficient for federated LLMs, where multiple users collaboratively fine-tune a global LLM without sharing their proprietary raw data. However, even the transmission of local adaptors between a server and clients risks serious privacy leakage. Applying differential privacy (DP) to federated LoRA encounters a dilemma: adding noise to both adaptors amplifies the synthetic noise injected into the model, while fixing one adaptor impairs the learnability of fine-tuning. In this paper, we propose FedASK (Differentially Private Federated Low Rank Adaptation with Double SKetching), a novel federated LoRA framework that enables effective updating of both low-rank adaptor matrices with robust differential privacy. Inspired by randomized SVD, our key idea is a two-stage sketching pipeline: it first aggregates carefully sketched, privacy-preserving local updates, and then reconstructs the global matrices on the server to facilitate effective updating of both adaptors. We theoretically prove FedASK's differential privacy guarantee and its exact aggregation property. Comprehensive experiments demonstrate that FedASK consistently outperforms baseline methods across a variety of privacy settings and data distributions.
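The following is a minimal numpy sketch of the two-stage (double-sketch) aggregation idea described in the abstract, assuming each client holds a local low-rank update dW_i = B_i @ A_i. The function names (fedask_round, clip_and_noise), the per-stage clipping and Gaussian-noising, and all parameter choices are illustrative assumptions for exposition, not the authors' implementation or their exact privacy mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)


def clip_and_noise(mat, clip_norm, noise_mult):
    """Clip to a Frobenius-norm bound and add Gaussian noise (Gaussian mechanism)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(mat) + 1e-12))
    return mat * scale + rng.normal(scale=noise_mult * clip_norm, size=mat.shape)


def fedask_round(local_updates, rank, clip_norm, noise_mult):
    """One round of two-stage (double-sketch) aggregation inspired by randomized SVD.

    local_updates: list of (d_out, d_in) local LoRA updates, each B_i @ A_i.
    Returns rank-`rank` factors (B, A) with B @ A approximating the averaged update.
    """
    d_out, d_in = local_updates[0].shape
    omega = rng.normal(size=(d_in, rank))  # shared random test matrix

    # Stage 1: clients upload privatized right-sketches dW_i @ omega;
    # the server aggregates them and orthonormalizes to get a range basis Q.
    y = sum(clip_and_noise(dw @ omega, clip_norm, noise_mult) for dw in local_updates)
    q, _ = np.linalg.qr(y)  # (d_out, rank), orthonormal columns

    # Stage 2: the server broadcasts Q; clients upload privatized projections Q^T dW_i.
    z = sum(clip_and_noise(q.T @ dw, clip_norm, noise_mult) for dw in local_updates)
    z /= len(local_updates)  # (rank, d_in), projection of the averaged update

    # Server-side reconstruction of both low-rank factors from the small matrix Z.
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    return (q @ u) * s, vt  # B: (d_out, rank), A: (rank, d_in)


# Toy usage: three clients, each holding a rank-4 update of a 64x32 weight.
clients = [rng.normal(size=(64, 4)) @ rng.normal(size=(4, 32)) for _ in range(3)]
# With no noise, loose clipping, and sketch rank >= rank of the aggregate,
# the reconstruction matches plain averaging of the local updates.
B, A = fedask_round(clients, rank=12, clip_norm=1e6, noise_mult=0.0)
target = sum(clients) / len(clients)
print(np.linalg.norm(B @ A - target) / np.linalg.norm(target))  # ~1e-15
```

In this sketch, only the small matrices dW_i @ omega and Q^T dW_i leave a client, both adaptor factors of the global model are updated on the server, and the noiseless reconstruction is exact whenever the sketch rank covers the rank of the aggregated update, which is the intuition behind the exact aggregation property claimed in the abstract.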

Subject: NeurIPS.2025 - Poster