IJCAI 2018

Total: 1

#1 Virtual-to-Real: Learning to Control in Visual Semantic Segmentation

Authors: Zhang-Wei Hong; Yu-Ming Chen; Hsuan-Kung Yang; Shih-Yang Su; Tzu-Yun Shann; Yi-Hsiang Chang; Brian Hsi-Lin Ho; Chih-Chieh Tu; Tsu-Ching Hsiao; Hsin-Wei Hsiao; Sih-Pin Lai; Yueh-Chuan Chang; Chun-Yi Lee

Collecting training data in the physical world is usually time-consuming and can be dangerous for fragile robots, so recent work in robot learning advocates using simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prevents models trained in virtual worlds from being migrated directly to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation relating the two. The perception module translates the perceived RGB image into a semantic segmentation map. The control policy module is implemented as a deep reinforcement learning agent that selects actions based on the translated segmentation. Our architecture is evaluated on an obstacle avoidance task and a target following task. Experimental results show that it significantly outperforms all of the baseline methods in both virtual and real environments, and exhibits a faster learning curve. We also present a detailed analysis of a variety of configuration variants, and validate the transferability of our modular architecture.
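
To illustrate the modularity the abstract describes, here is a minimal PyTorch sketch. It is not the authors' code: the network shapes, the class count, and the action count are placeholder assumptions. It shows the key interface choice, where the perception module emits a segmentation map that is the sole input to the control policy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptionModule(nn.Module):
    """Translates an RGB frame into a per-pixel semantic class map.
    Stand-in for the paper's segmentation network; any off-the-shelf
    segmentation model could play this role."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) -> per-pixel class indices (B, H, W)
        return self.head(self.encoder(rgb)).argmax(dim=1)

class ControlPolicyModule(nn.Module):
    """Deep RL policy head that acts on the segmentation map, never raw RGB."""
    def __init__(self, num_classes: int = 6, num_actions: int = 4):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_actions),
        )

    def forward(self, seg: torch.Tensor) -> torch.Tensor:
        # seg: (B, H, W) class indices -> one-hot (B, C, H, W) -> action logits
        x = F.one_hot(seg, self.num_classes).permute(0, 3, 1, 2).float()
        return self.net(x)

# The segmentation map is the shared "meta representation": each module can
# be trained or swapped independently as long as this interface is preserved.
perception, policy = PerceptionModule(), ControlPolicyModule()
frame = torch.rand(1, 3, 84, 84)   # dummy RGB observation
action = policy(perception(frame)).argmax(dim=1)
```

Because the two modules communicate only through the segmentation map, a perception module retrained on real images can be paired with a policy trained entirely in simulation, which mirrors the virtual-to-real transfer the abstract describes.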