2025.findings-acl.353@ACL

Total: 1

#1 Vision-aided Unsupervised Constituency Parsing with Multi-MLLM Debating

Authors: Dong Zhang, Haiyan Tian, Qingying Sun, Shoushan Li

This paper presents a novel framework for vision-aided unsupervised constituency parsing (VUCP) that leverages multimodal large language models (MLLMs) pre-trained on diverse image-text or video-text data. Unlike previous methods, which require explicit cross-modal alignment, our approach eliminates this step by using pre-trained models such as Qwen-VL and VideoLLaVA, which natively handle multimodal inputs. We introduce two multi-agent debating mechanisms, consensus-driven (CD) and round-driven (RD), to enable cooperation between models with complementary strengths. Extensive experiments show that our approach achieves state-of-the-art VUCP performance on both image-text and video-text datasets, improving robustness and accuracy.
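
The abstract does not spell out how a consensus-driven debate is orchestrated, so the Python sketch below is only an illustration of one plausible wiring: two parsing agents exchange bracketed parses until they agree or a round budget runs out. The function names (query_agent, consensus_debate), the two-agent setup, and the stubbed agent behaviour are assumptions for illustration, not the authors' released code; a real system would prompt Qwen-VL or VideoLLaVA inside query_agent.

```python
# Illustrative sketch only: agent internals are stubbed and all names here are
# hypothetical. In practice query_agent would prompt an MLLM (e.g. Qwen-VL or
# VideoLLaVA) with the sentence, the paired image/video, and the peer's parse.

def query_agent(name, sentence, image, peer_parse=None):
    """Stub for one MLLM agent proposing a bracketed constituency parse."""
    # Dummy behaviour: agent "A" adopts the peer's parse when one is shown,
    # simulating a model revising its answer during debate.
    if name == "A" and peer_parse is not None:
        return peer_parse
    return "(S (NP the cat) (VP sat (PP on (NP the mat))))"


def consensus_debate(sentence, image, max_rounds=3):
    """Consensus-driven (CD) style loop: both agents propose parses and
    exchange them until they agree or the round budget is exhausted."""
    parse_a = query_agent("A", sentence, image)
    parse_b = query_agent("B", sentence, image)
    for _ in range(max_rounds):
        if parse_a == parse_b:  # consensus reached
            return parse_a
        # Each agent sees the other's proposal and revises its own parse.
        parse_a, parse_b = (
            query_agent("A", sentence, image, peer_parse=parse_b),
            query_agent("B", sentence, image, peer_parse=parse_a),
        )
    return parse_a  # fallback: no consensus within the budget


if __name__ == "__main__":
    print(consensus_debate("the cat sat on the mat", image=None))
```

A round-driven (RD) variant would instead fix the number of exchange rounds and take the final round's output, rather than stopping early on agreement; the sketch above covers only the consensus-style stopping rule.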

Subject: ACL.2025 - Findings