Total: 1

#1 SHIFT: Smoothing Hallucinations by Information Flow Tuning for Multimodal Large Language Models

Authors: Sudong Wang, Yunjian Zhang, Yao Zhu, Enci Liu, Jianing Li, Yanwei Liu, Xiangyang Ji

Abstract: Large Language Models (LLMs) are prone to hallucinations, which pose significant risks in their applications. Most existing hallucination detection methods rely on internal probabilities or external knowledge, and they are limited to identifying hallucinations at the sentence or passage level. In this paper, we introduce the first token-level, zero-resource hallucination detection framework, built on a novel approach inspired by the Mad Libs game. The method assesses the reliability of the input text by evaluating the consistency of its information before and after the game is played. Building on this framework, we also propose an automated hallucination generation technique and introduce a high-quality hallucination dataset, HalluWiki. Extensive experiments demonstrate that our approach achieves over 90% detection accuracy across different granularity levels, establishing a new frontier in hallucination detection for LLMs.
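The abstract leaves the Mad Libs mechanism at a high level. The sketch below is one plausible reading, not the paper's actual method: mask selected tokens, ask an LLM to fill the blanks, and flag any token whose fill disagrees with the original text. Every name here (`detect_token_hallucinations`, `fill_blanks`, the `MASK` placeholder) and the simple string-match consistency test are illustrative assumptions.

```python
# Hypothetical sketch of a Mad Libs-style, token-level consistency check.
# Assumption: inconsistency between an LLM's blank-fill and the original
# token signals a possible hallucination at that token.

from typing import Callable, List

MASK = "___"  # Mad Libs-style blank


def detect_token_hallucinations(
    tokens: List[str],
    candidate_idx: List[int],
    fill_blanks: Callable[[str], List[str]],
) -> List[bool]:
    """Return one flag per candidate token: True if the LLM's fill for
    that blank is inconsistent with the original token."""
    masked = list(tokens)
    for i in candidate_idx:
        masked[i] = MASK
    fills = fill_blanks(" ".join(masked))  # one fill per blank, in order
    assert len(fills) == len(candidate_idx)
    return [
        fills[k].strip().lower() != tokens[i].strip().lower()
        for k, i in enumerate(candidate_idx)
    ]


if __name__ == "__main__":
    # Toy filler standing in for a real LLM; it "knows" both answers.
    def toy_filler(prompt: str) -> List[str]:
        return ["Paris", "1789"]

    toks = "The capital of France is Paris , founded in 1790 .".split()
    flags = detect_token_hallucinations(toks, [5, 9], toy_filler)
    print(flags)  # [False, True] -> "1790" disagrees with the fill
```

In this toy run, the fill for the first blank matches the original token, so it passes, while the mismatch at the second blank marks that token as suspect; the real framework presumably uses a more robust consistency measure than exact string matching.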

Subject: ICCV.2025 - Poster