
#1 Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration

Authors: Priyank Agrawal, Jinglin Chen, Nan Jiang

This paper studies regret minimization with randomized value functions in reinforcement learning. For tabular finite-horizon Markov Decision Processes, we introduce a clipped variant of a classical Thompson Sampling (TS)-like algorithm, randomized least-squares value iteration (RLSVI). Our $\widetilde{O}(H^2\sqrt{SAT})$ high-probability worst-case regret bound improves on the previously sharpest worst-case regret bounds for RLSVI and matches the existing state-of-the-art worst-case regret bounds for TS-based algorithms.
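To make the idea concrete, here is a minimal illustrative sketch of a tabular RLSVI-style planning step with clipping: rewards are perturbed with Gaussian noise, backward induction computes a randomized Q-function, and values are clipped to a plausible range. All names, the noise scale, and the clipping threshold are assumptions for illustration, not the paper's exact algorithm or constants.

```python
import numpy as np

def rlsvi_episode(P_hat, R_hat, H, sigma, clip_max, seed=0):
    """Illustrative clipped-RLSVI planning step (a sketch, not the paper's exact method).

    P_hat: (S, A, S) estimated transition probabilities
    R_hat: (S, A) estimated mean rewards
    H: horizon; sigma: reward-perturbation scale; clip_max: value cap (e.g., H)
    Returns a randomized Q-function of shape (H, S, A) and a greedy policy (H, S).
    """
    S, A, _ = P_hat.shape
    rng = np.random.default_rng(seed)
    Q = np.zeros((H, S, A))
    V = np.zeros(S)  # value at step H is zero
    for h in reversed(range(H)):
        # Randomize: add Gaussian noise to the estimated rewards.
        noise = rng.normal(0.0, sigma, size=(S, A))
        # Least-squares value iteration backup under the empirical model.
        Q[h] = R_hat + noise + P_hat @ V
        # Clipping keeps the randomized values in a sane range,
        # which is the key modification studied in the paper.
        Q[h] = np.clip(Q[h], 0.0, clip_max)
        V = Q[h].max(axis=1)
    policy = Q.argmax(axis=2)
    return Q, policy

# Tiny usage example on a hypothetical 3-state, 2-action MDP.
S, A, H = 3, 2, 4
P = np.full((S, A, S), 1.0 / S)        # uniform transitions
R = np.zeros((S, A)); R[0, 1] = 1.0    # one rewarding state-action pair
Q, pi = rlsvi_episode(P, R, H, sigma=0.1, clip_max=float(H))
```

Acting greedily with respect to the perturbed, clipped Q-function is what drives exploration in TS-like algorithms of this kind.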

Subject: AAAI.2021 - Machine Learning