
Differentially Private Optimization with Sparse Gradients

Authors: Badih Ghazi, Cristóbal A Guzmán, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

Motivated by applications of large embedding models, we study differentially private (DP) optimization problems under sparsity of _individual_ gradients. We start with new near-optimal bounds for the classic mean estimation problem, restricted to sparse data, improving upon existing algorithms particularly in the high-dimensional regime. The corresponding lower bounds are based on a novel block-diagonal construction that is combined with existing DP mean estimation lower bounds. Next, we obtain pure- and approximate-DP algorithms with almost optimal rates for stochastic convex optimization with sparse gradients; the former represents the first nearly dimension-independent rates for this problem. Furthermore, by introducing novel analyses of bias reduction in mean estimation and of randomly-stopped biased SGD, we obtain nearly dimension-independent rates for finding near-stationary points of the empirical risk in nonconvex settings under approximate DP.
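As context for the mean-estimation primitive the abstract builds on, below is a minimal sketch of the standard Gaussian-mechanism baseline for DP mean estimation, whose error grows with the ambient dimension d. This is not the paper's algorithm: the function name `dp_mean_gaussian` and its parameters are illustrative, and the paper's contribution is precisely to replace this sqrt(d)-type dependence with a dependence on the sparsity of individual vectors.

```python
import numpy as np

def dp_mean_gaussian(X, clip_norm, epsilon, delta, rng=None):
    """Baseline (epsilon, delta)-DP empirical mean via the Gaussian mechanism.

    X: (n, d) array; each row is one individual's (possibly sparse) vector.
    Illustrative sketch only -- the paper's sparse-aware estimators avoid
    the sqrt(d) error that the isotropic noise below incurs.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # Clip each row to L2 norm at most clip_norm to bound sensitivity.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clipped = X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # L2 sensitivity of the mean is 2 * clip_norm / n (replace-one neighbors).
    sensitivity = 2.0 * clip_norm / n
    # Classic Gaussian-mechanism calibration for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return X_clipped.mean(axis=0) + rng.normal(0.0, sigma, size=d)

if __name__ == "__main__":
    # Demo on synthetic s-sparse data in high dimension.
    rng = np.random.default_rng(0)
    n, d, s = 1000, 10_000, 20
    X = np.zeros((n, d))
    for i in range(n):
        idx = rng.choice(d, size=s, replace=False)
        X[i, idx] = rng.normal(size=s)
    mu_hat = dp_mean_gaussian(X, clip_norm=np.sqrt(s),
                              epsilon=1.0, delta=1e-6, rng=rng)
    print("L2 error:", np.linalg.norm(mu_hat - X.mean(axis=0)))
```

Since the added noise is isotropic, the baseline's expected L2 error scales like sigma * sqrt(d) even when each row touches only s << d coordinates; the high-dimensional regime where this is wasteful is the one the paper's near-optimal sparse bounds target.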

Subject: NeurIPS.2024 - Poster