
#1 QR-LoRA: Efficient and Disentangled Fine-tuning via QR Decomposition for Customized Generation

Authors: Jiahui Yang, Yongjia Ma, Donglin Di, Jianxun Cui, Hao Li, Wei Chen, Yan Xie, Xun Yang, Wangmeng Zuo

Existing text-to-image models often rely on parameter fine-tuning techniques such as Low-Rank Adaptation (LoRA) to customize visual attributes. However, when combining multiple LoRA models for content-style fusion tasks, unstructured modifications of weight matrices often lead to undesired feature entanglement between content and style attributes. We propose QR-LoRA, a novel fine-tuning framework leveraging QR decomposition for structured parameter updates that effectively separate visual attributes. Our key insight is that the orthogonal Q matrix naturally minimizes interference between different visual features, while the upper triangular R matrix efficiently encodes attribute-specific transformations. Our approach fixes both Q and R matrices while only training an additional task-specific R matrix. This structured design reduces trainable parameters to half of conventional LoRA methods and supports effective merging of multiple adaptations without cross-contamination due to the strong disentanglement properties between R matrices. Experiments demonstrate that QR-LoRA achieves superior disentanglement in content-style fusion tasks, establishing a new paradigm for parameter-efficient, disentangled fine-tuning in generative models. The project page is available at: https://luna-ai-lab.github.io/QR-LoRA/
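The update scheme described in the abstract can be sketched with a toy numpy example (this is an illustration of the idea, not the authors' implementation; the matrix sizes and the `dR_content`/`dR_style` deltas are hypothetical stand-ins for learned task-specific updates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy layer size; real weight matrices are much larger

# Pretrained weight matrix of a layer.
W0 = rng.standard_normal((n, n))

# Structured decomposition: W0 = Q @ R, with Q orthogonal
# and R upper triangular.
Q, R = np.linalg.qr(W0)

# Q and R stay frozen; each task trains only an additive
# upper-triangular delta on R. An upper-triangular delta has
# n(n+1)/2 free entries, roughly half of a dense n*n update.
dR_content = np.triu(0.01 * rng.standard_normal((n, n)))
dR_style = np.triu(0.01 * rng.standard_normal((n, n)))

# Merging adaptations amounts to summing R-deltas in the
# shared orthogonal basis Q.
W_merged = Q @ (R + dR_content + dR_style)

# Sanity checks: Q is orthogonal and the base decomposition
# reconstructs the pretrained weights.
assert np.allclose(Q.T @ Q, np.eye(n))
assert np.allclose(Q @ R, W0)
```

Because Q is shared and frozen, the two deltas live in the same basis and can be added without the cross-contamination that arises when independently trained low-rank factors are merged.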

Subject: ICCV.2025 - Poster