IJCAI.2024 - Robotics

Total: 8

#1 Integrating Intent Understanding and Optimal Behavior Planning for Behavior Tree Generation from Human Instructions

Authors: Xinglin Chen ; Yishuai Cai ; Yunxin Mao ; Minglong Li ; Wenjing Yang ; Weixia Xu ; Ji Wang

Robots executing tasks from human instructions in domestic or industrial environments require both adaptability and reliability. Behavior Trees (BTs) emerge as an appropriate control architecture for these scenarios due to their modularity and reactivity. Existing BT generation methods, however, either do not interpret natural language or cannot theoretically guarantee the BTs' success. This paper proposes a two-stage framework for BT generation, which first employs large language models (LLMs) to interpret goals from high-level instructions, then constructs an efficient goal-specific BT through the Optimal Behavior Tree Expansion Algorithm (OBTEA). We represent goals as well-formed formulas in first-order logic, effectively bridging intent understanding and optimal behavior planning. Experiments on a service robot validate the proficiency of LLMs in producing grammatically correct and accurately interpreted goals, demonstrate OBTEA's superiority over the baseline BT Expansion algorithm on various metrics, and confirm the practical deployability of our framework. The project website is https://dids-ei.github.io/Project/LLM-OBTEA.
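A minimal sketch of the behavior-tree control structure the abstract relies on, in Python. The node classes and the toy "move to table" goal are illustrative assumptions, not the paper's OBTEA:

```python
# Minimal behavior-tree sketch (illustrative only; not the paper's OBTEA).
# Node names and the toy task below are assumptions for demonstration.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a condition-checking or acting callable."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Ticks children left to right; fails on the first failing child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Ticks children left to right; succeeds on the first succeeding child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Toy goal "RobotNear(Table)" expressed as a reactive tree:
# if already near the table, succeed; otherwise try to move there.
state = {"near_table": False}

def at_table():
    return state["near_table"]

def move_to_table():
    state["near_table"] = True   # pretend the motion succeeds
    return True

tree = Fallback(Action("AtTable", at_table),
                Sequence(Action("MoveToTable", move_to_table)))
result = tree.tick()
```

The Fallback root is what makes the tree reactive: once the goal condition holds, later ticks succeed immediately without re-running the action subtree.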

#2 Physics-Informed Trajectory Prediction for Autonomous Driving under Missing Observation

Authors: Haicheng Liao ; Chengyue Wang ; Zhenning Li ; Yongkang Li ; Bonan Wang ; Guofa Li ; Chengzhong Xu

This paper introduces a novel trajectory prediction approach for autonomous vehicles (AVs), addressing the challenges of missing observations and the need for adherence to physical laws in real-world driving environments. This study proposes a hierarchical two-stage trajectory prediction model for AVs. In the first stage, we propose the Wavelet Reconstruction Network, a tool for reconstructing missing observations that can optionally be integrated with state-of-the-art models to enhance their robustness. The second stage features the Wave Fusion Encoder, a quantum-mechanics-inspired module for sophisticated vehicle interaction modeling. By incorporating the Kinematic Bicycle Model, we ensure that our predictions align with realistic vehicular kinematics. Complementing our methodological advancements, we introduce MoCAD-missing, a comprehensive real-world traffic dataset, alongside enhanced versions of the NGSIM and HighD datasets, designed to facilitate rigorous testing in environments with missing observations. Extensive evaluations demonstrate that our approach markedly outperforms existing methods, achieving high accuracy even in scenarios with up to 75% missing observations.
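For context, the Kinematic Bicycle Model the abstract incorporates can be integrated in a few lines. This is the standard textbook model, not the paper's implementation; the wheelbase split and rollout values are assumptions for illustration:

```python
import math

# One integration step of the standard kinematic bicycle model, used here to
# illustrate how predictions can be kept kinematically feasible. The lf/lr
# wheelbase split and the rollout below are illustrative assumptions.

def bicycle_step(x, y, psi, v, accel, steer, dt, lf=1.2, lr=1.4):
    """Advance pose (x, y, heading psi) and speed v by dt.

    accel: longitudinal acceleration [m/s^2]
    steer: front-wheel steering angle [rad]
    lf/lr: distances from center of mass to front/rear axle [m]
    """
    beta = math.atan(lr / (lf + lr) * math.tan(steer))  # slip angle at CoM
    x += v * math.cos(psi + beta) * dt
    y += v * math.sin(psi + beta) * dt
    psi += (v / lr) * math.sin(beta) * dt
    v += accel * dt
    return x, y, psi, v

# Roll out 1 s of straight driving at 10 m/s: the car should cover ~10 m.
state = (0.0, 0.0, 0.0, 10.0)
for _ in range(10):
    state = bicycle_step(*state, accel=0.0, steer=0.0, dt=0.1)
```

Constraining predicted positions to satisfy such an update rule is what rules out physically impossible maneuvers (e.g. instantaneous lateral jumps) in the model's output.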

#3 HVOFusion: Incremental Mesh Reconstruction Using Hybrid Voxel Octree

Authors: Shaofan Liu ; Junbo Chen ; Jianke Zhu

Incremental scene reconstruction is essential to navigation in robotics. Most conventional methods make use of either TSDF (truncated signed distance function) volumes or neural networks to implicitly represent the surface. Due to their voxel representation or time-consuming sampling, they have difficulty balancing speed, memory storage, and surface quality. In this paper, we propose a novel hybrid voxel-octree approach that effectively fuses octree and voxel structures, so that we can take advantage of both implicit surface and explicit triangular mesh representations. Such a sparse structure preserves triangular faces in the leaf nodes and produces partial meshes sequentially for incremental reconstruction. This storage scheme allows us to naturally optimize the mesh in explicit 3D space to achieve higher surface quality. We iteratively deform the mesh towards the target and recover vertex colors by optimizing a shading model. Experimental results on several datasets show that our proposed approach is capable of quickly and accurately reconstructing a scene with realistic colors. Code is available at https://github.com/Frankuzi/HVOFusion.
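The leaf-level storage idea, explicit surface data kept in the leaves of a sparse octree, can be sketched as follows. This toy version stores point samples where HVOFusion keeps triangular faces, and all names, capacities, and coordinates are illustrative assumptions:

```python
# Toy sparse-octree sketch (an illustration of the storage idea, not the
# paper's implementation): leaves hold explicit surface samples, standing
# in for the triangular faces kept in HVOFusion's leaf nodes.

class OctreeNode:
    def __init__(self, center, half, capacity=4):
        self.center, self.half, self.capacity = center, half, capacity
        self.points = []      # explicit surface data lives in leaves
        self.children = None  # None => this node is a leaf

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity:
                self._subdivide()
            return
        self._child_for(p).insert(p)

    def _subdivide(self):
        """Split a full leaf into 8 octants and push its data down."""
        cx, cy, cz = self.center
        h = self.half / 2.0
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h, self.capacity)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        for q in self.points:
            self._child_for(q).insert(q)
        self.points = []

    def _child_for(self, p):
        """Octant index: 3 bits from the per-axis comparison with the center."""
        idx = ((p[0] >= self.center[0]) * 4
               + (p[1] >= self.center[1]) * 2
               + (p[2] >= self.center[2]))
        return self.children[idx]

root = OctreeNode((0.0, 0.0, 0.0), half=1.0, capacity=2)
for p in [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (-0.3, 0.4, 0.1), (0.5, -0.5, 0.2)]:
    root.insert(p)
```

Because the data lives explicitly in the leaves, each leaf can emit its partial surface immediately, which is what makes sequential, incremental output natural in this layout.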

#4 RealDex: Towards Human-like Grasping for Robotic Dexterous Hand

Authors: Yumeng Liu ; Yaxun Yang ; Youzhuo Wang ; Xiaofei Wu ; Jiamin Wang ; Yichen Yao ; Sören Schwertfeger ; Sibei Yang ; Wenping Wang ; Jingyi Yu ; Xuming He ; Yuexin Ma

In this paper, we introduce RealDex, a pioneering dataset capturing authentic dexterous hand grasping motions infused with human behavioral patterns, enriched by multi-view and multimodal visual data. Utilizing a teleoperation system, we seamlessly synchronize human and robot hand poses in real time. This collection of human-like motions is crucial for training dexterous hands to mimic human movements more naturally and precisely. RealDex holds immense promise for advancing humanoid robots in automated perception, cognition, and manipulation in real-world scenarios. Moreover, we introduce a cutting-edge dexterous grasping motion generation framework, which aligns with human experience and enhances real-world applicability by effectively utilizing Multimodal Large Language Models. Extensive experiments have demonstrated the superior performance of our method on RealDex and other open datasets. The dataset and associated code are available at https://4dvlab.github.io/RealDex_page/.

#5 A New Guaranteed Outlier Removal Method Based on Plane Constraints for Large-Scale LiDAR Point Cloud Registration

Authors: Gang Ma ; Hui Wei ; Runfeng Lin ; Jialiang Wu

In this paper, we present a novel registration method based on plane constraints for large-scale LiDAR point clouds, effectively decoupling rotation estimation and translation estimation. For rotation estimation, we propose an outlier removal method that combines coarse filtering based on rotation-invariant constraints with refined filtering based on computational-geometry consistency checks, effectively pruning outliers and robustly estimating accurate relative rotations from plane normals. For translation estimation, we propose a component-wise method based on plane translation constraints to efficiently estimate relative translations. The robustness and effectiveness of our proposed method are empirically validated on three popular LiDAR point cloud datasets. The experimental results convincingly demonstrate that our approach achieves state-of-the-art performance.
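The coarse-filtering stage builds on a rotation-invariant fact: a rigid rotation preserves the angle between any two plane normals. A generic sketch of that style of consistency check (not the paper's exact algorithm; the threshold, voting rule, and sample normals are assumptions):

```python
import math

# Coarse outlier pruning for putative plane-normal correspondences, using
# the rotation-invariant property that a rigid rotation preserves the angle
# between any two unit normals. Generic sketch; thresholds are assumptions.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def angle(a, b):
    """Angle in radians between two unit vectors (clamped for safety)."""
    return math.acos(max(-1.0, min(1.0, dot(a, b))))

def prune(correspondences, tol=0.05, min_partners=2):
    """Keep (source_normal, target_normal) pairs whose pairwise angles
    agree with at least `min_partners` other pairs within `tol` radians."""
    kept = []
    for i, (ni, mi) in enumerate(correspondences):
        partners = sum(
            1
            for j, (nj, mj) in enumerate(correspondences)
            if j != i and abs(angle(ni, nj) - angle(mi, mj)) < tol
        )
        if partners >= min_partners:
            kept.append((ni, mi))
    return kept

# Identity "rotation": matching normals agree; one corrupted pair does not.
good = [((1, 0, 0), (1, 0, 0)), ((0, 1, 0), (0, 1, 0)), ((0, 0, 1), (0, 0, 1))]
bad = [((1, 0, 0), (0, 0, 1))]  # outlier correspondence
inliers = prune(good + bad)
```

After a filter of this kind removes gross outliers, the relative rotation can be estimated from the surviving normal pairs alone, independently of translation.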

#6 DVPE: Divided View Position Embedding for Multi-View 3D Object Detection

Authors: Jiasen Wang ; Zhenglin Li ; Ke Sun ; Xianyuan Liu ; Yang Zhou

Sparse query-based paradigms have achieved significant success in multi-view 3D detection for autonomous vehicles. Current research faces challenges in balancing enlarged receptive fields against reduced interference when aggregating multi-view features. Moreover, differing camera poses complicate the training of global attention models. To address these problems, this paper proposes a divided view method, in which features are modeled globally via the visibility cross-attention mechanism but interact only with partial features in a divided local virtual space. This effectively reduces interference from irrelevant features and alleviates the training difficulties of the transformer by decoupling the position embedding from camera poses. Additionally, 2D historical RoI features are incorporated into the object-centric temporal modeling to utilize high-level visual semantic information. The model is trained using a one-to-many assignment strategy to facilitate stability. Our framework, named DVPE, achieves state-of-the-art performance (57.2% mAP and 64.5% NDS) on the nuScenes test set. Code will be available at https://github.com/dop0/DVPE.

#7 MAS-SAM: Segment Any Marine Animal with Aggregated Features

Authors: Tianyu Yan ; Zifu Wan ; Xinhao Deng ; Pingping Zhang ; Yang Liu ; Huchuan Lu

Recently, the Segment Anything Model (SAM) has shown exceptional performance in generating high-quality object masks and achieving zero-shot image segmentation. However, as a versatile vision model, SAM is primarily trained with large-scale natural-light images. In underwater scenes, it exhibits substantial performance degradation due to light scattering and absorption. Meanwhile, the simplicity of SAM's decoder might lead to the loss of fine-grained object details. To address the above issues, we propose a novel feature learning framework named MAS-SAM for marine animal segmentation, which integrates effective adapters into SAM's encoder and constructs a pyramidal decoder. More specifically, we first build a new SAM encoder with effective adapters for underwater scenes. Then, we introduce a Hypermap Extraction Module (HEM) to generate multi-scale features for comprehensive guidance. Finally, we propose a Progressive Prediction Decoder (PPD) to aggregate the multi-scale features and predict the final segmentation results. When combined with the Fusion Attention Module (FAM), our method can extract richer marine information, from global contextual cues to fine-grained local details. Extensive experiments on four public MAS datasets demonstrate that our MAS-SAM obtains better results than other typical segmentation methods. The source code is available at https://github.com/Drchip61/MAS-SAM.

#8 ClothPPO: A Proximal Policy Optimization Enhancing Framework for Robotic Cloth Manipulation with Observation-Aligned Action Spaces

Authors: Libing Yang ; Yang Li ; Long Chen

Vision-based robotic cloth unfolding has made great progress recently. However, prior works predominantly rely on value learning and have not fully explored policy-based techniques. Recently, the success of reinforcement learning on large language models has shown that policy gradient algorithms can enhance a policy with a huge action space. In this paper, we introduce ClothPPO, a framework that employs a policy gradient algorithm based on an actor-critic architecture to enhance a pre-trained model with a huge action space (on the order of 10^6 actions) aligned with observations in the task of unfolding clothes. To this end, we redefine the cloth manipulation problem as a partially observable Markov decision process. A supervised pre-training stage is employed to train a baseline model of our policy. In the second stage, Proximal Policy Optimization (PPO) is utilized to guide the supervised model within the observation-aligned action space. By optimizing and updating the policy, our proposed method increases the garment's surface area in this soft-body manipulation task. Experimental results show that our proposed framework can further improve the unfolding performance of other state-of-the-art methods. Our project is available at https://vpx-ecnu.github.io/ClothPPO-website/.
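The PPO objective used in the second stage is the standard clipped surrogate; a single-sample sketch with plain floats (the epsilon and sample values are illustrative assumptions):

```python
import math

# PPO's clipped surrogate loss for a single (state, action) sample:
#   L = -min(r * A, clip(r, 1 - eps, 1 + eps) * A),
# where r is the probability ratio between the new and old policies and A is
# the advantage estimate. Values below are illustrative assumptions.

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    ratio = math.exp(logp_new - logp_old)
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return -min(ratio * advantage, clipped * advantage)

# With a positive advantage, pushing the ratio past 1 + eps gains nothing:
# the clip caps the objective, discouraging destructively large policy steps
# away from the supervised pre-trained model.
moderate = ppo_clip_loss(logp_new=math.log(1.1), logp_old=0.0, advantage=1.0)
extreme = ppo_clip_loss(logp_new=math.log(2.0), logp_old=0.0, advantage=1.0)
```

This capping is what lets the second stage refine the supervised baseline policy without the large updates that would collapse it, even over a very large discrete action space.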