kzoNjAtwby@OpenReview

Total: 1

#1 A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents

Authors: Kaiwen Wang, Dawen Liang, Nathan Kallus, Wen Sun

We study risk-sensitive RL, where the goal is to learn a history-dependent policy that optimizes some risk measure of cumulative rewards. We consider a family of risks called the optimized certainty equivalents (OCE), which captures important risk measures such as conditional value-at-risk (CVaR), entropic risk, and Markowitz's mean-variance. In this setting, we propose two meta-algorithms: one grounded in optimism and another based on policy gradients, both of which can leverage the broad suite of risk-neutral RL algorithms in an augmented Markov Decision Process (MDP). Via a reductions approach, we leverage theory for risk-neutral RL to establish novel OCE bounds in complex, rich-observation MDPs. For the optimism-based algorithm, we prove bounds that generalize prior results in CVaR RL and that provide the first risk-sensitive bounds for exogenous block MDPs. For the gradient-based algorithm, we establish both monotone improvement and global convergence guarantees under a discrete reward assumption. Finally, we empirically show that our algorithms learn the optimal history-dependent policy in a proof-of-concept MDP, where all Markovian policies provably fail.
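For readers unfamiliar with the OCE family, a minimal Monte Carlo sketch is below. It assumes the standard definition OCE_u(Y) = sup_λ { λ + E[u(Y − λ)] } for a concave utility u, with the CVaR utility u(t) = −(1/α)·max(−t, 0) as one instance; the sample distribution, grid, and function names are illustrative placeholders, not the paper's algorithm.

```python
import numpy as np

def oce_estimate(samples, u, lambdas):
    """Monte Carlo estimate of OCE_u(Y) = sup_lambda { lambda + E[u(Y - lambda)] },
    maximized over a finite grid of candidate lambdas."""
    values = [lam + np.mean(u(samples - lam)) for lam in lambdas]
    return max(values)

# CVaR at level alpha as an OCE: u(t) = -(1/alpha) * max(-t, 0)
alpha = 0.1
u_cvar = lambda t: -np.maximum(-t, 0.0) / alpha

# Placeholder return samples (stand-in for cumulative rewards under some policy)
rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=1.0, size=100_000)

grid = np.linspace(returns.min(), returns.max(), 2001)
print(oce_estimate(returns, u_cvar, grid))
# Approximately the average of the worst alpha-fraction of returns (lower-tail CVaR)
```

Other members of the family follow by swapping the utility, e.g. an entropic-style utility u(t) = (1 − exp(−γ·t))/γ or a mean-variance-style utility u(t) = t − c·t²; the estimator itself is unchanged.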

Subject: ICML.2025 - Poster