2025.acl-long.420@ACL

SciVer: Evaluating Foundation Models for Multimodal Scientific Claim Verification

Authors: Chengye Wang, Yifei Shen, Zexi Kuang, Arman Cohan, Yilun Zhao

We introduce SciVer, the first benchmark specifically designed to evaluate the ability of foundation models to verify claims within a multimodal scientific context. SciVer consists of 3,000 expert-annotated examples over 1,113 scientific papers, covering four subsets, each representing a common reasoning type in multimodal scientific claim verification. To enable fine-grained evaluation, each example includes expert-annotated supporting evidence. We assess the performance of 21 state-of-the-art multimodal foundation models, including o4-mini, Gemini-2.5-Flash, Llama-3.2-Vision, and Qwen2.5-VL. Our experiments reveal a substantial performance gap between these models and human experts on SciVer. Through an in-depth analysis of retrieval-augmented generation (RAG) and human-conducted error evaluations, we identify critical limitations in current open-source models, offering key insights to advance models’ comprehension and reasoning in multimodal scientific literature tasks.

Subject: ACL.2025 - Long Papers