dadvar23a@v216@PMLR

Total: 1

#1 Conditional Abstraction Trees for Sample-Efficient Reinforcement Learning

Authors: Mehdi Dadvar, Rashmeet Kaur Nayyar, Siddharth Srivastava

In many real-world problems, the learning agent needs to learn a problem’s abstractions and its solution simultaneously. However, most such abstractions need to be designed and refined by hand for different problems and domains of application. This paper presents a novel top-down approach for constructing state abstractions while carrying out reinforcement learning (RL). Starting with state variables and a simulator, the approach dynamically computes a domain-independent abstraction based on the dispersion of Q-values in abstract states as the agent continues acting and learning. Extensive empirical evaluation on multiple domains and problems shows that this approach automatically learns abstractions that are finely tuned to the problem, yield strong sample efficiency, and result in the RL agent significantly outperforming existing approaches.
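
The sketch below illustrates the general idea of dispersion-driven abstraction refinement during tabular Q-learning; it is not the paper's Conditional Abstraction Tree algorithm. The toy gridworld, the split rule (refine an abstract state from the single variable x to the full pair (x, y) when its Q-value dispersion is high), and all thresholds are assumptions made purely for illustration.

```python
# Hypothetical sketch: refine a coarse state abstraction when the Q-values of
# the concrete states it groups together disagree too much (high dispersion).
import random
from collections import defaultdict

GRID = 5                      # assumed 5x5 gridworld, goal at (4, 4)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.2
SPLIT_THRESHOLD = 0.5         # assumed dispersion threshold for refining a state

def step(state, action):
    x, y = state
    nx = min(max(x + action[0], 0), GRID - 1)
    ny = min(max(y + action[1], 0), GRID - 1)
    nxt = (nx, ny)
    done = nxt == (GRID - 1, GRID - 1)
    return nxt, (1.0 if done else -0.01), done

# Start with a coarse abstraction (x only); refine to (x, y) where needed.
refined = set()               # abstract states that have been split

def abstract(state):
    x, y = state
    return (x, y) if (x,) in refined else (x,)

Q = defaultdict(float)        # Q-values over (abstract state, action)
members = defaultdict(set)    # concrete states observed in each abstract state
concrete_Q = defaultdict(float)  # shadow Q over concrete states, used only
                                 # to measure dispersion inside abstract states

def dispersion(abs_state, action):
    vals = [concrete_Q[(s, action)] for s in members[abs_state]]
    return max(vals) - min(vals) if vals else 0.0

for episode in range(500):
    s = (0, 0)
    for _ in range(100):
        a = random.choice(ACTIONS) if random.random() < EPS else \
            max(ACTIONS, key=lambda act: Q[(abstract(s), act)])
        s2, r, done = step(s, a)
        abs_s, abs_s2 = abstract(s), abstract(s2)
        members[abs_s].add(s)

        # Q-learning update at the abstract level and at the concrete level.
        target = r + (0.0 if done else GAMMA * max(Q[(abs_s2, act)] for act in ACTIONS))
        Q[(abs_s, a)] += ALPHA * (target - Q[(abs_s, a)])
        ct = r + (0.0 if done else GAMMA * max(concrete_Q[(s2, act)] for act in ACTIONS))
        concrete_Q[(s, a)] += ALPHA * (ct - concrete_Q[(s, a)])

        # High dispersion suggests the abstract state hides meaningful value
        # differences, so refine it (top-down, coarse to fine).
        if len(abs_s) == 1 and dispersion(abs_s, a) > SPLIT_THRESHOLD:
            refined.add(abs_s)

        s = s2
        if done:
            break

print(f"refined abstract states: {sorted(refined)}")
```

In this toy setting, only the regions whose grouped states develop conflicting Q-values get refined, so the learned abstraction stays coarse where coarseness is harmless, which is the intuition behind the sample-efficiency claim in the abstract.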

Subject: UAI.2023 - Oral