2025.emnlp-main.1255@ACL


Hidden in Plain Sight: Reasoning in Underspecified and Misspecified Scenarios for Multimodal LLMs

Authors: Qianqi Yan, Hongquan Li, Shan Jiang, Yang Zhao, Xinze Guan, Ching-Chen Kuo, Xin Eric Wang

Multimodal large language models (MLLMs) are increasingly deployed in open-ended, real-world environments where inputs are messy, underspecified, and not always trustworthy. Unlike curated benchmarks, these settings frequently involve instructions that reference missing objects or contradictory facts, rely on ambiguous cues, or request infeasible actions. In such cases, success hinges not merely on task execution, but on the model's ability to detect when something is silently wrong. This paper presents a systematic analysis of how current MLLMs handle such underspecified and misspecified scenarios: cases where flaws must be inferred from context rather than explicitly stated. Using a curated diagnostic suite spanning four categories of real-world failure modes, we evaluate nine MLLMs, including o3 and GPT-4o, and find that models often fail to surface hidden issues, even when they possess the necessary perceptual and reasoning skills. Explicit prompting reveals that the underlying capabilities exist but are frequently suppressed in favor of user compliance. We further show that simple inference-time interventions, such as cautious persona prompting and, in particular, requiring a clarifying question, can substantially recover performance. Our findings highlight a persistent gap between reasoning competence and behavioral compliance in current MLLMs, and suggest practical strategies for making these systems more trustworthy in underconstrained environments.

Subject: EMNLP.2025 - Main