
#1 CLIP-DPO: Vision-Language Models as a Source of Preference for Fixing Hallucinations in LVLMs

Authors: Yassine Ouali, Adrian Bulat, Brais Martinez, Georgios Tzimiropoulos

We present CLIP-DPO, a preference optimization method that leverages pretrained vision-language embedding models, such as CLIP, for DPO-based optimization of Vision LLMs. Starting from the initial pool of supervised fine-tuning data, we generate a diverse set of predictions, which are then ranked by their CLIP image-text similarity to obtain positive and negative pairs for DPO-based training. We show that this simple approach yields notable performance gains across a diverse set of benchmarks and vision-language tasks.
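The pair-construction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names `build_dpo_pairs`, `clip_score`, and `toy_score` are hypothetical, and a real setup would score candidates with an actual CLIP model rather than the toy word-overlap scorer used here so the snippet runs standalone.

```python
from typing import Callable, List, Tuple


def build_dpo_pairs(
    image: str,
    candidates: List[str],
    clip_score: Callable[[str, str], float],
) -> Tuple[str, str]:
    """Rank candidate captions by image-text similarity and return
    (chosen, rejected) = (highest-, lowest-scoring) for DPO training."""
    ranked = sorted(candidates, key=lambda t: clip_score(image, t), reverse=True)
    return ranked[0], ranked[-1]


# Toy stand-in scorer (word overlap with an image description); a real
# pipeline would compute cosine similarity between CLIP image and text
# embeddings instead.
def toy_score(image_desc: str, text: str) -> float:
    img_tokens = set(image_desc.split())
    txt_tokens = text.split()
    return len(img_tokens & set(txt_tokens)) / max(len(txt_tokens), 1)


chosen, rejected = build_dpo_pairs(
    "a dog on a beach",
    ["a dog running on a sandy beach", "a cat indoors", "a dog on a beach"],
    toy_score,
)
# chosen is the candidate most similar to the image description,
# rejected the least similar; the (chosen, rejected) pair feeds DPO.
```

The key design point is that no human labeling is needed: the pretrained CLIP scorer alone orders the model's own generations into preference pairs.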

Subject: ECCV.2024 - Poster