
ASHiTA: Automatic Scene-grounded HIerarchical Task Analysis

Authors: Yun Chang, Leonor Fermoselle, Duy Ta, Bernadette Bucher, Luca Carlone, Jiuguang Wang

While recent work in scene reconstruction and understanding has made strides in grounding natural language to physical 3D environments, grounding abstract, high-level instructions to a 3D scene remains challenging. High-level instructions might not explicitly invoke semantic elements in the scene, and even the process of breaking a high-level task into a set of more concrete subtasks (a process called hierarchical task analysis) is environment-dependent. In this work, we propose ASHiTA, the first framework that generates a task hierarchy grounded to a 3D scene graph by breaking down high-level tasks into grounded subtasks. ASHiTA alternates LLM-assisted hierarchical task analysis, which generates the task breakdown, with task-driven scene graph construction, which generates a suitable representation of the environment. Our experiments show that ASHiTA performs significantly better than LLM baselines in breaking down high-level tasks into environment-dependent subtasks, and additionally achieves grounding performance comparable to state-of-the-art methods.
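The alternating scheme the abstract describes can be sketched as a simple loop: an LLM-style decomposer proposes subtasks, a grounding step tries to match each subtask to scene-graph nodes, and the scene graph is refined toward subtasks that failed to ground. The sketch below is a hypothetical illustration of that control flow only; the decomposer, detector, and scene graph are hard-coded stand-ins, not the paper's actual components.

```python
def decompose(task):
    """Stand-in for an LLM call: propose (subtask, target-object) pairs.
    A real system would query a language model here."""
    candidates = {
        "tidy the living room": [
            ("shelve the books", "bookshelf"),
            ("fold the blanket", "blanket"),
            ("water the plant", "plant"),
        ],
    }
    return candidates.get(task, [])

def ground(target_object, scene_graph):
    """Try to match a subtask's target object to a scene-graph node."""
    return scene_graph.get(target_object)

def refine_scene_graph(scene_graph, missing_objects, detector):
    """Task-driven refinement: re-query perception (the detector stand-in)
    only for objects the current graph lacks."""
    for obj in missing_objects:
        node = detector(obj)
        if node is not None:
            scene_graph[obj] = node
    return scene_graph

def build_task_hierarchy(task, scene_graph, detector, max_iters=3):
    """Alternate decomposition and grounding until subtasks ground
    or the iteration budget is exhausted."""
    grounded = {}
    for _ in range(max_iters):
        subtasks = decompose(task)
        grounded = {s: ground(obj, scene_graph) for s, obj in subtasks}
        missing = [obj for s, obj in subtasks if grounded[s] is None]
        if not missing:
            break
        scene_graph = refine_scene_graph(scene_graph, missing, detector)
    return grounded

# Toy run: the initial graph knows only the bookshelf; refinement
# recovers the blanket, while the plant never grounds.
scene = {"bookshelf": "node_7"}

def mock_detector(obj):
    detectable = {"blanket": "node_12"}  # the plant is absent from this scene
    return detectable.get(obj)

result = build_task_hierarchy("tidy the living room", scene, mock_detector)
```

Here `result` maps each proposed subtask to a scene-graph node (or `None` when grounding fails), mirroring the idea that the task breakdown and the scene representation are improved jointly rather than in a single pass.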

Subject: CVPR.2025 - Poster