2025.findings-acl.755@ACL

#1 MMSciBench: Benchmarking Language Models on Chinese Multimodal Scientific Problems

Authors: Xinwu Ye, Chengfan Li, Siming Chen, Wei Wei, Robert Tang

Recent advances in large language models (LLMs) and large vision-language models (LVLMs) have shown promise across many tasks, yet their scientific reasoning capabilities remain largely untested, particularly in multimodal settings. We present MMSciBench, a benchmark for evaluating mathematical and physical reasoning in text-only and text-image formats, with human-annotated difficulty levels, solutions with detailed explanations, and taxonomic mappings. Evaluation of state-of-the-art models reveals significant limitations, with even the best model achieving only 63.77% accuracy and struggling particularly with visual reasoning tasks. Our analysis exposes critical gaps in complex reasoning and visual-textual integration, establishing MMSciBench as a rigorous standard for measuring progress in multimodal scientific understanding. The code for MMSciBench is open-sourced on GitHub, and the dataset is available on Hugging Face.
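Since the abstract reports exact-match accuracy over a Hugging Face release, the sketch below shows one plausible way to score a model on such a benchmark. Note that the dataset id, split name, and field names ("question", "answer") are hypothetical placeholders, not the authors' actual schema.

```python
# Minimal sketch of computing exact-match accuracy on an
# MMSciBench-style benchmark. Assumptions (not from the listing):
# the dataset id, split, and field names below are hypothetical.
from datasets import load_dataset


def evaluate(predict_fn, dataset_id="example/MMSciBench", split="test"):
    """Return exact-match accuracy of `predict_fn` over the benchmark."""
    data = load_dataset(dataset_id, split=split)  # hypothetical dataset id
    correct = 0
    for example in data:
        prediction = predict_fn(example["question"])  # hypothetical field
        correct += int(prediction.strip() == example["answer"].strip())
    return correct / len(data)


# Usage example with a trivial stand-in model:
# accuracy = evaluate(lambda q: "42")
# print(f"Exact-match accuracy: {accuracy:.2%}")
```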

Subject: ACL.2025 - Findings