
Total: 1

#1 Merge-Friendly Post-Training Quantization for Multi-Target Domain Adaptation

Authors: Juncheol Shin, Minsang Seok, Seonggon Kim, Eunhyeok Park

Model merging has emerged as a powerful technique for combining task-specific weights, achieving superior performance in multi-target domain adaptation. However, new challenges arise when it is applied in practical settings, for example to quantized models. Quantization is typically performed using target-specific data, which restricts the domain of interest and introduces discretization effects, making model merging highly non-trivial. In this study, we analyze the impact of quantization on model merging through the lens of error barriers. Leveraging these insights, we propose a novel post-training quantization method, HDRQ (Hessian and distant regularizing quantization), designed with model merging for multi-target domain adaptation in mind. Our approach ensures that the quantization process incurs minimal deviation from the source pre-trained model while flattening the loss surface to facilitate smooth model merging. To our knowledge, this is the first study of this challenge, and extensive experiments confirm its effectiveness.
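
The abstract describes a calibration objective that (i) keeps the quantized weights close to the source pre-trained model and (ii) flattens the loss surface so that target-specific models merge with low error barriers. The sketch below is not the authors' HDRQ implementation; it is a minimal illustration under assumed choices: a straight-through fake-quantized linear layer, an L2 distance penalty to the source weights, and a gradient-norm penalty as a simple stand-in for Hessian/flatness regularization. Names such as `merge_aware_ptq_loss`, `lam_dist`, and `lam_flat` are illustrative, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuantLinear(nn.Linear):
    """Linear layer with uniform symmetric fake quantization (straight-through estimator)."""

    def forward(self, x, n_bits=8):
        qmax = 2 ** (n_bits - 1) - 1
        scale = self.weight.abs().max() / qmax
        w_q = torch.round(self.weight / scale).clamp(-qmax - 1, qmax) * scale
        # STE: quantized values in the forward pass, identity gradient in the backward pass.
        w = self.weight + (w_q - self.weight).detach()
        return F.linear(x, w, self.bias)


def merge_aware_ptq_loss(model, src_params, x, y, lam_dist=1e-4, lam_flat=1e-3):
    """Hypothetical merge-aware PTQ objective: task loss + distance-to-source + flatness proxy."""
    task = F.cross_entropy(model(x), y)
    # (i) distance regularizer: keep quantized weights near the source pre-trained model.
    dist = sum(((p - s) ** 2).sum() for p, s in zip(model.parameters(), src_params))
    # (ii) flatness proxy: penalize the gradient norm of the task loss (a curvature surrogate).
    grads = torch.autograd.grad(task, list(model.parameters()), create_graph=True)
    flat = sum((g ** 2).sum() for g in grads)
    return task + lam_dist * dist + lam_flat * flat


if __name__ == "__main__":
    model = nn.Sequential(FakeQuantLinear(16, 4))
    src_params = [p.detach().clone() for p in model.parameters()]
    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    loss = merge_aware_ptq_loss(model, src_params, x, y)
    loss.backward()
```

In this toy form, the distance term discourages the calibration on one target domain from drifting far from the shared source initialization, while the gradient-norm term nudges the solution toward flatter regions; both properties are what the abstract credits with keeping the error barrier between per-target quantized models low at merge time.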

Subject: ICML.2025 - Poster