#1 Reinforcement Learning via Auxiliary Task Distillation

Authors: Abhinav Narayan Harish, Larry Heck, Josiah P Hanna, Zsolt Kira, Andrew Szot

We present Reinforcement Learning via Auxiliary Task Distillation (AuxDistill), a new method for applying reinforcement learning (RL) to long-horizon robotic control problems by distilling behaviors from auxiliary RL tasks. AuxDistill trains pixels-to-actions policies end-to-end with RL, without demonstrations, a learning curriculum, or pre-trained skills. It achieves this by concurrently performing multi-task RL on auxiliary tasks that are easier than, and relevant to, the main task. Behaviors learned in the auxiliary tasks are transferred to the main task through a weighted distillation loss. In an embodied object-rearrangement task, we show that AuxDistill achieves a 27% higher success rate than baselines.
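The abstract describes transferring auxiliary-task behaviors into the main-task policy through a weighted distillation loss. The sketch below is an illustrative interpretation only, not the authors' implementation: it assumes a KL-style distillation term in which a frozen auxiliary-task policy acts as the teacher, the main-task policy as the student, and a per-sample weight (here a hypothetical `weights` tensor) controls how strongly each state contributes.

```python
# Illustrative sketch (assumption, not the paper's code): a weighted
# distillation loss from an auxiliary-task policy to the main-task policy.
import torch
import torch.nn.functional as F


def weighted_distillation_loss(main_logits, aux_logits, weights):
    """KL-style distillation from an auxiliary-task teacher to the main-task student.

    main_logits: (B, A) action logits of the main-task policy (student).
    aux_logits:  (B, A) action logits of the auxiliary-task policy (teacher), treated as frozen.
    weights:     (B,) per-sample relevance weights (hypothetical weighting scheme).
    """
    teacher = F.softmax(aux_logits.detach(), dim=-1)      # teacher distribution, no gradient
    log_student = F.log_softmax(main_logits, dim=-1)      # student log-probabilities
    per_sample_kl = F.kl_div(log_student, teacher, reduction="none").sum(dim=-1)
    return (weights * per_sample_kl).mean()


if __name__ == "__main__":
    # Minimal usage example with random tensors.
    B, A = 32, 10
    main_logits = torch.randn(B, A, requires_grad=True)
    aux_logits = torch.randn(B, A)
    weights = torch.rand(B)
    loss = weighted_distillation_loss(main_logits, aux_logits, weights)
    loss.backward()
    print(loss.item())
```

In practice this term would be added to the standard RL objective so that gradients from both the main task's rewards and the auxiliary-task distillation shape the main-task policy; the exact weighting and combination used by AuxDistill are not specified in this abstract.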

Subject: ECCV.2024 - Poster