2025.naacl-industry.57@ACL

#1 VIT-Pro: Visual Instruction Tuning for Product Images

Authors: Vishnu Prabhakaran, Purav Aggarwal, Vishruit Kulshreshtha, Arunita Das, Sahini Venkata Sitaram Sruti, Anoop Saladi

General vision-language models (VLMs) trained on web data struggle to understand and converse about real-world e-commerce product images. We propose a cost-efficient approach for collecting training data to train a generative VLM for e-commerce product images. The key idea is to leverage large-scale, loosely-coupled image-text pairs from e-commerce stores, use a pretrained LLM to generate multimodal instruction-following data, and fine-tune a general vision-language model using LoRA. Our instruction-finetuned model, VIT-Pro, can understand and respond to queries about product images, covering diverse concepts and tasks. VIT-Pro outperforms several general-purpose VLMs on multiple vision tasks in the e-commerce domain.
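The fine-tuning step above relies on LoRA, which freezes the pretrained weights and trains only a low-rank additive update. The paper does not publish its training code, so the following is a minimal NumPy sketch of the low-rank update idea only; the layer sizes, rank, and scaling factor are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# LoRA keeps the pretrained weight W frozen and learns a low-rank
# update delta_W = B @ A with rank r << min(d_out, d_in), so only
# the small A and B matrices receive gradients during fine-tuning.
d_out, d_in, r = 1024, 1024, 8  # illustrative sizes, not from the paper

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x, alpha=16):
    # Adapted layer: W x + (alpha / r) * B A x. Because B starts at
    # zero, the adapted model initially matches the pretrained one.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identity at initialization

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} of {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

The parameter count printed at the end illustrates why LoRA makes domain adaptation cost-efficient: the trainable update is a small fraction of the frozen weight matrix, and the rank `r` trades adaptation capacity against that cost.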

Subject: NAACL.2025 - Industry Track