
Q3R: Quadratic Reweighted Rank Regularizer for Effective Low-Rank Training

Authors: Ipsita Ghosh, Ethan Nguyen, Christian Kümmerle

Parameter-efficient training based on low-rank optimization has become a highly successful tool for fine-tuning large deep-learning models. However, these methods struggle in low-rank pre-training, where maintaining both the low-rank structure and the training objective remains challenging. We propose the Quadratic Reweighted Rank Regularizer, dubbed Q3R, which leads to a novel low-rank-inducing training strategy inspired by the iteratively reweighted least squares (IRLS) framework. Q3R is based on a quadratic regularizer term that majorizes a smoothed log-determinant serving as a rank surrogate objective. Unlike other low-rank training techniques, Q3R is able to train weight matrices to prescribed low target ranks while the resulting models achieve predictive performance comparable to dense models, with small computational overhead and full compatibility with existing architectures. In experiments, we are able to truncate 60% of the parameters of a ViT-Tiny model with marginal loss in CIFAR-10 performance, and up to 80% with only a 4% accuracy drop. The efficacy of Q3R is confirmed on Transformers across both image and language tasks, including low-rank fine-tuning.
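The abstract only gestures at the construction; a minimal sketch of one standard IRLS-style quadratic majorizer of a smoothed log-determinant rank surrogate, in the spirit of the description above but not necessarily the paper's exact formulation, might look as follows. The function name q3r_penalty, the smoothing parameter eps, and the weight lam are illustrative choices, not identifiers from the paper.

```python
import torch

def q3r_penalty(W: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Quadratic majorizer of the smoothed log-det rank surrogate
    log det(W W^T + eps I): the reweighting matrix is computed at the
    current iterate and detached, so the resulting penalty is quadratic
    in W (sketch only; the paper's exact smoothing/weighting may differ)."""
    m = W.shape[0]
    gram = W @ W.T + eps * torch.eye(m, device=W.device, dtype=W.dtype)
    R = torch.linalg.inv(gram).detach()   # reweighting, frozen at the current iterate
    return torch.trace(R @ (W @ W.T))     # quadratic in W; majorizes log det up to constants

# Hypothetical usage inside a training step:
# total_loss = task_loss + lam * sum(q3r_penalty(p) for p in low_rank_weight_matrices)
```

Detaching the reweighting matrix is what makes each step a least-squares-type subproblem, mirroring the alternation between reweighting and quadratic minimization in classical IRLS.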

Subject: NeurIPS.2025 - Poster