42329@AAAI

Total: 1

#1 Scale Regularization for Stable Low-Rank Adaptation

Author: Tan Xeng Ian

Low-Rank Adaptation (LoRA) has emerged as a practical and efficient method for fine-tuning large language models under limited computational budgets. However, recent studies have shown that LoRA can suffer from training instability when applied to models with large embedding dimensions, due to the imbalance in magnitude between its two low-rank matrices. In this work, we propose a novel regularization strategy that stabilizes LoRA training by penalizing the logarithmic magnitude difference between the low-rank matrices, and we show theoretically that it should lead to efficient feature learning. We further propose evaluation methods to systematically assess the training stability and performance of our proposed solution alongside other LoRA variants.
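The abstract does not spell out the exact form of the penalty, but one natural reading is a squared difference of log matrix norms added to the task loss. The PyTorch sketch below illustrates that idea under stated assumptions: the Frobenius norm, the `LoRALinear` module, the `scale_regularizer` helper, and the weight `lam` are illustrative choices, not the paper's actual formulation. (Standard LoRA zero-initializes the up-projection; the sketch uses a small random initialization so the log-norm is well defined from the first step.)

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA adapter: y = W x + (alpha / r) * B A x, with W frozen."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                       # frozen pretrained weight
        self.A = nn.Parameter(0.01 * torch.randn(r, in_features))    # down-projection
        # NOTE: standard LoRA zero-initializes B; a small random init is used
        # here only so the log-norm penalty is well defined at step 0.
        self.B = nn.Parameter(0.01 * torch.randn(out_features, r))   # up-projection
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

def scale_regularizer(layer, eps=1e-8):
    """Squared difference of the log Frobenius norms of the two LoRA factors.
    Penalizing this gap discourages one factor from growing while the other
    shrinks, keeping their magnitudes balanced during training (an assumed
    reading of the abstract's 'logarithmic magnitude difference')."""
    log_a = torch.log(layer.A.norm() + eps)
    log_b = torch.log(layer.B.norm() + eps)
    return (log_a - log_b) ** 2

# Usage: add the penalty to the task loss with a hypothetical weight `lam`.
layer = LoRALinear(768, 768)
x = torch.randn(4, 768)
task_loss = layer(x).pow(2).mean()   # stand-in for the real fine-tuning loss
lam = 0.1
loss = task_loss + lam * scale_regularizer(layer)
loss.backward()
```

Because the penalty depends only on the ratio of the two norms, it leaves the product BA (and hence the adapted weight update) free to take any scale; only the split of magnitude between the factors is constrained.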

Subject: AAAI.2026 - Undergraduate Consortium