2025.findings-emnlp.162@ACL

Total: 1

#1 Visual Program Distillation with Template-Based Augmentation

Authors: Michal Shlapentokh-Rothman, Yu-Xiong Wang, Derek Hoiem

Adapting visual programming, in which large language models (LLMs) are prompted to generate executable code for visual tasks such as visual question answering (VQA), to specialized tasks or domains remains challenging due to high annotation and inference costs. We propose a low-cost visual program distillation method that can be used with models of at most 1 billion parameters and requires no human-generated program annotations. We achieve this through synthetic data augmentation based on decoupling programs into higher-level skills, called templates, and their corresponding arguments. Experimental results show that, with a relatively small amount of question/answer data, small language models can generate high-quality specialized visual programs, with the added benefit of much faster inference.
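To make the template/argument decoupling concrete, here is a minimal Python sketch of the general idea. It is hypothetical, not the paper's implementation: the template syntax, the image.find call, and the augment helper are illustrative assumptions. It shows only how one annotated program can be generalized into a template (a higher-level skill) and re-instantiated with new arguments to produce synthetic training programs.

    # Hypothetical sketch of decoupling a visual program into a template
    # and its arguments; names and APIs are illustrative, not from the paper.
    from string import Template

    # A template captures the program structure for one skill (here,
    # counting objects of a category); the category is left as an argument.
    COUNT_TEMPLATE = Template(
        "patches = image.find('$category')\n"
        "answer = len(patches)"
    )

    def instantiate(template: Template, **args: str) -> str:
        """Fill a template's argument slots to produce an executable program."""
        return template.substitute(**args)

    def augment(template: Template, categories: list[str]) -> list[str]:
        """Synthetic augmentation: one template yields many programs
        by swapping in new arguments."""
        return [instantiate(template, category=c) for c in categories]

    if __name__ == "__main__":
        # One annotated program for "How many dogs are there?" generalizes
        # to other categories without new human-written programs.
        for program in augment(COUNT_TEMPLATE, ["dog", "car", "traffic light"]):
            print(program + "\n---")

In this framing, distillation training pairs would come from instantiated templates rather than per-example human annotations, which is what keeps the annotation cost low.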

Subject: EMNLP.2025 - Findings