#1 Dimension-adapted Momentum Outscales SGD

Authors: Damien Ferbach, Katie Everett, Gauthier Gidel, Elliot Paquette, Courtney Paquette

We investigate scaling laws for stochastic momentum algorithms on the power-law random features model, parameterized by data complexity, target complexity, and model size. Our analysis reveals four distinct loss-curve shapes, determined by the interplay of data and target complexities, when the model is trained with a stochastic momentum algorithm. While traditional stochastic gradient descent with momentum (SGD-M) yields scaling-law exponents identical to SGD's, dimension-adapted Nesterov acceleration (DANA) improves these exponents by scaling its momentum hyperparameters with model size and data complexity. DANA achieves this outscaling phenomenon, which also improves compute-optimal scaling behavior, across a broad range of data and target complexities where traditional methods fall short. Extensive experiments on high-dimensional synthetic quadratics validate our theoretical predictions, and large-scale text experiments with LSTMs show that DANA's improved loss exponents over SGD hold in a practical setting.
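To make the contrast concrete, below is a minimal NumPy sketch of a stochastic Nesterov loop on a synthetic power-law quadratic, comparing a fixed momentum coefficient (SGD-M) with a momentum schedule that adapts to the model dimension. Everything here is illustrative: the schedule Delta(t) = 1/(1 + t/d), the power-law exponents, and the helper names (population_loss, stoch_grad, run) are assumptions for this sketch, not the paper's actual DANA hyperparameters or model specification.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_steps, lr = 512, 5000, 0.5 / 512  # model size, iterations, step size

# Synthetic power-law quadratic: feature spectrum and target coefficients
# decay polynomially (exponents are illustrative stand-ins for the paper's
# data and target complexity parameters).
spectrum = np.arange(1, d + 1, dtype=float) ** -1.0
w_star = np.arange(1, d + 1, dtype=float) ** -0.5

def population_loss(w):
    # Expected loss under x ~ N(0, diag(spectrum)).
    return 0.5 * np.sum(spectrum * (w - w_star) ** 2)

def stoch_grad(w):
    # Single-sample stochastic gradient of 0.5 * E[(x^T (w - w_star))^2].
    x = rng.normal(size=d) * np.sqrt(spectrum)
    return x * (x @ (w - w_star))

def run(momentum_at):
    """Stochastic Nesterov loop; momentum_at(t) supplies the coefficient."""
    w = np.zeros(d)
    v = np.zeros(d)
    for t in range(1, n_steps + 1):
        beta = momentum_at(t)
        g = stoch_grad(w + beta * v)  # Nesterov lookahead gradient
        v = beta * v - lr * g
        w = w + v
    return population_loss(w)

# SGD-M: fixed momentum; yields the same scaling-law exponents as plain SGD.
print("SGD-M   :", run(lambda t: 0.9))

# DANA-like: momentum grows toward 1 on a dimension-dependent schedule.
# Delta(t) = 1/(1 + t/d) is an illustrative stand-in for the paper's schedule.
print("DANA-ish:", run(lambda t: 1.0 - 1.0 / (1.0 + t / d)))
```

The design point the sketch highlights is that SGD-M's momentum coefficient is a dimension-independent constant, whereas the dimension-adapted variant couples the momentum schedule to the model size d, which is the mechanism the abstract credits for the improved scaling-law exponents.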

Subject: NeurIPS.2025 - Spotlight