aCPFvEg22L@OpenReview

Total: 1

#1 Gaussian Approximation and Concentration of Constant Learning-Rate Stochastic Gradient Descent [PDF]

Authors: Ziyang Wei, Jiaqi Li, Zhipeng Lou, Wei Biao Wu

We establish a comprehensive finite-sample and asymptotic theory for stochastic gradient descent (SGD) with constant learning rates. First, we propose a novel linear approximation technique that yields a quenched central limit theorem (CLT) for SGD iterates with refined tail properties, showing that, regardless of the initialization, the fluctuations of the algorithm around its target point converge to a multivariate normal distribution. Our conditions are substantially milder than those required by classical CLTs for SGD, yet they deliver a stronger convergence result. Furthermore, we derive the first Berry-Esseen bound -- a Gaussian approximation error bound -- for constant learning-rate SGD, which is sharp compared with the bounds available for decaying learning-rate schemes in the literature. Beyond moment convergence, we also provide a Nagaev-type inequality for the SGD tail probabilities by adopting autoregressive approximation techniques, which yields non-asymptotic large-deviation guarantees. These results are verified via numerical simulations, paving the way for theoretically grounded uncertainty quantification, in particular with non-asymptotic validity.
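
A minimal, hypothetical simulation (not the authors' code) illustrating the phenomenon the abstract describes: for constant learning-rate SGD on a simple quadratic loss, the stationary fluctuations of the iterates around the minimizer are approximately Gaussian with covariance of order eta, irrespective of the initialization. All names and parameter values below (the Hessian A, step size eta, noise scale sigma, iteration counts) are assumptions chosen purely for illustration.

```python
# Hypothetical illustration (not from the paper): constant learning-rate SGD
# on the quadratic loss f(x) = 0.5 * x' A x, whose minimizer is x* = 0.
# The quenched CLT described above predicts that, for any initialization,
# the iterates fluctuate around x* approximately like a multivariate normal.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # positive-definite Hessian (assumed for this demo)
eta = 0.05                   # constant learning rate
sigma = 0.5                  # scale of the additive gradient noise
n_steps, burn_in = 200_000, 20_000

x = np.array([5.0, -5.0])    # deliberately far-from-target initialization
iterates = np.empty((n_steps, 2))

for t in range(n_steps):
    grad = A @ x + sigma * rng.standard_normal(2)  # noisy gradient of f at x
    x = x - eta * grad                             # constant-step SGD update
    iterates[t] = x

tail = iterates[burn_in:]
print("empirical mean:", tail.mean(axis=0))        # close to the target x* = 0
print("empirical covariance:\n", np.cov(tail.T))   # O(eta) Gaussian fluctuations

# Small-step heuristic for the stationary covariance of this linear recursion:
# approximately (eta * sigma^2 / 2) * A^{-1}.
print("small-step heuristic:\n", 0.5 * eta * sigma**2 * np.linalg.inv(A))
```

The quadratic loss is chosen because it makes the SGD recursion exactly linear, loosely mirroring the linear/autoregressive approximation idea mentioned in the abstract; the general theory in the paper is not limited to this toy setting.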

Subject: NeurIPS.2025 - Poster