2025.acl-short.72@ACL

Total: 1

#1 Transferring Textual Preferences to Vision-Language Understanding through Model Merging

Authors: Chen-An Li, Tzu-Han Lin, Yun-Nung Chen, Hung-yi Lee

Large vision-language models (LVLMs) perform strongly across a wide range of multimodal tasks. However, their ability to evaluate generated content remains limited, and training vision-language reward models (VLRMs) on preference data is computationally expensive. This paper explores a training-free alternative: merging text-based reward models (RMs) with LVLMs to create VLRMs. Our results show that the merged models outperform both the LVLMs' own scoring and the text-based RMs, offering an efficient way to incorporate textual preferences into LVLMs.
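A minimal sketch of what such training-free merging can look like, assuming a simple linear interpolation of parameters shared between the text RM and the LVLM's language backbone; the abstract does not specify the exact merging recipe, so the function name, parameter selection, and coefficient choice below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch (assumption): merge a text-based reward model (RM) into
# an LVLM's language backbone by linearly interpolating parameters that the
# two checkpoints share (same name and shape). Vision-specific weights in the
# LVLM are left untouched. The paper's actual merging procedure may differ.
import torch


def merge_text_rm_into_lvlm(lvlm_state: dict, rm_state: dict, alpha: float = 0.5) -> dict:
    """Return a merged state dict: (1 - alpha) * LVLM weight + alpha * RM weight
    for every parameter present in both models with matching shape; all other
    LVLM parameters (e.g., vision encoder, projector) are copied unchanged."""
    merged = {}
    for name, lvlm_param in lvlm_state.items():
        rm_param = rm_state.get(name)
        if rm_param is not None and rm_param.shape == lvlm_param.shape:
            merged[name] = (1.0 - alpha) * lvlm_param + alpha * rm_param
        else:
            merged[name] = lvlm_param.clone()
    return merged


if __name__ == "__main__":
    # Toy example with random tensors standing in for real checkpoints.
    lvlm = {"lm.layer.weight": torch.randn(4, 4), "vision.proj.weight": torch.randn(4, 8)}
    rm = {"lm.layer.weight": torch.randn(4, 4)}  # text RM has no vision weights
    merged = merge_text_rm_into_lvlm(lvlm, rm, alpha=0.5)
    print({k: v.shape for k, v in merged.items()})
```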

Subject: ACL.2025 - Short Papers