
#1 RoadSocial: A Diverse VideoQA Dataset and Benchmark for Road Event Understanding from Social Video Narratives

Authors: Chirag Parikh, Deepti Rawat, Rakshitha R. T., Tathagata Ghosh, Ravi Kiran Sarvadevabhatla

We introduce **RoadSocial, a large-scale, diverse VideoQA dataset tailored for generic road event understanding from social media narratives**. Unlike existing datasets limited by regional bias, viewpoint bias and expert-driven annotations, RoadSocial captures the global complexity of road events with varied geographies, camera viewpoints (CCTV, handheld, drones) and rich social discourse. Our scalable semi-automatic annotation framework leverages Text LLMs and Video LLMs to generate comprehensive question-answer pairs across 12 challenging QA tasks, pushing the boundaries of road event understanding. RoadSocial is derived from social media videos spanning **14M frames** and **414K social comments**, resulting in **a dataset with 13.2K videos, 674 tags and 260K high-quality QA pairs**. We **evaluate 18 Video LLMs (open-source and proprietary, driving-specific and general-purpose)** on our road event understanding benchmark. We also demonstrate RoadSocial's utility in improving road event understanding capabilities of general-purpose Video LLMs.
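To make the shape of such a VideoQA benchmark concrete, below is a minimal, hypothetical sketch of how a RoadSocial-style QA record could be represented and scored against a Video LLM's prediction. The field names (`video_id`, `task`, `answer`, `tags`) and the exact-match metric are assumptions for illustration only; they are not the released data format or the paper's official evaluation protocol.

```python
# Minimal sketch of a RoadSocial-style VideoQA record and a simple scoring loop.
# All field names and the metric are hypothetical, chosen only to illustrate
# the kind of structure described in the abstract (videos, tags, QA tasks).
from dataclasses import dataclass, field
from typing import List

@dataclass
class RoadSocialQA:
    video_id: str                 # identifier of the source social-media clip
    task: str                     # one of the 12 QA task types (assumed label)
    question: str
    answer: str                   # reference answer distilled from social discourse
    tags: List[str] = field(default_factory=list)  # event tags attached to the video

def exact_match_accuracy(records: List[RoadSocialQA], predictions: List[str]) -> float:
    """Fraction of predictions that exactly match the reference answers (case-insensitive)."""
    assert len(records) == len(predictions)
    hits = sum(r.answer.strip().lower() == p.strip().lower()
               for r, p in zip(records, predictions))
    return hits / max(len(records), 1)

if __name__ == "__main__":
    sample = [RoadSocialQA("vid_0001", "event_description",
                           "What caused the near-miss at the intersection?",
                           "The SUV ran a red light.")]
    preds = ["The SUV ran a red light."]
    print(exact_match_accuracy(sample, preds))  # 1.0
```

Since the benchmark's answers are free-form and span 12 task types, the paper's actual evaluation very likely uses a more forgiving matching scheme (e.g., LLM- or similarity-based judging) rather than exact match; the sketch above only illustrates the record layout and the general evaluation loop.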

Subject: CVPR.2025 - Poster