Generative AI models, particularly diffusion models (DMs), have demonstrated exceptional capabilities in high-quality image synthesis. However, their large memorization capacity raises significant privacy concerns, especially when they are trained on sensitive datasets. This paper introduces DP-LoRA, a surprisingly simple yet effective framework for differentially private fine-tuning of latent diffusion models (LDMs) using Low-Rank Adaptation (LoRA). By fine-tuning only a small subset of parameters, DP-LoRA achieves state-of-the-art (SoTA) performance in privacy-preserving image generation while significantly improving the privacy-utility trade-off. DP-LoRA leverages pre-trained LDMs and integrates LoRA modules into attention blocks and projection layers, enabling parameter-efficient fine-tuning under differential privacy (DP) constraints. Extensive experiments on benchmarks such as CelebA-HQ demonstrate that DP-LoRA outperforms existing methods, achieving competitive Fréchet Inception Distance (FID) scores under strict privacy budgets (e.g., $\epsilon \leq 10$). Additionally, we provide a comprehensive analysis of the impact of LoRA rank, noise multiplicity, and the choice of trainable components on model performance. Our results highlight the potential of parameter-efficient techniques to scale privacy-preserving generative models to real-world applications, paving the way for safer deployment of diffusion models in sensitive domains. Anonymized code is available at \href{https://github.com/EzzzLi/DP-LORA}{https://github.com/EzzzLi/DP-LORA}.
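To make the mechanism concrete, the sketch below is a minimal, self-contained illustration (not the authors' released code) of the core idea: the pre-trained weights stay frozen, low-rank LoRA adapters are attached to linear layers, and only the adapter parameters are updated with DP-SGD (per-example gradient clipping plus Gaussian noise). The names `LoRALinear` and `dp_sgd_step` and the toy two-layer model are illustrative assumptions; per the abstract, DP-LoRA itself inserts LoRA modules into the attention blocks and projection layers of a pre-trained LDM.

```python
# Minimal sketch of LoRA fine-tuning under DP-SGD (illustrative, not the DP-LoRA release).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (B @ A)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # pre-trained weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

def dp_sgd_step(model, loss_fn, xb, yb, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: clip each per-example gradient to `clip_norm`,
    sum, add Gaussian noise, average over the batch, then update."""
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):                      # microbatches of size 1
        optimizer.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        factor = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for g, p in zip(grad_sum, params):
            g += p.grad * factor
    for g, p in zip(grad_sum, params):
        noise = torch.randn_like(g) * noise_multiplier * clip_norm
        p.grad = (g + noise) / len(xb)
    optimizer.step()

# Illustrative usage on a toy regression model; in DP-LoRA the wrapped layers
# would be attention and projection layers of a pre-trained latent diffusion model.
torch.manual_seed(0)
model = nn.Sequential(LoRALinear(nn.Linear(16, 32), rank=4), nn.ReLU(),
                      LoRALinear(nn.Linear(32, 1), rank=4))
opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
xb, yb = torch.randn(8, 16), torch.randn(8, 1)
dp_sgd_step(model, nn.MSELoss(), xb, yb, opt, clip_norm=1.0, noise_multiplier=1.0)
```

Because only the low-rank adapter parameters receive clipped, noised gradients, the noise is added in a much lower-dimensional space than full-model DP-SGD, which is the source of the improved privacy-utility trade-off discussed above.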