UGaAXvav8S@OpenReview

#1 Improving Regret Approximation for Unsupervised Dynamic Environment Generation

Authors: Harry Mead, Bruno Lacerda, Jakob Nicolaus Foerster, Nick Hawes

Unsupervised Environment Design (UED) seeks to automatically generate training curricula for reinforcement learning (RL) agents, with the goal of improving generalisation and zero-shot performance. However, designing effective curricula remains a difficult problem, particularly in settings where small subsets of environment parameterisations result in significant increases in the complexity of the required policy. Current methods struggle with a difficult credit assignment problem and rely on regret approximations that fail to identify challenging levels; both issues are compounded as the size of the environment grows. We propose Dynamic Environment Generation for UED (DEGen) to provide a denser reward signal for the level generator, reducing the difficulty of credit assignment and allowing UED to scale to larger environments. We also introduce a new regret approximation, Maximised Negative Advantage (MNA), a significantly improved optimisation target that better identifies challenging levels. We show empirically that MNA outperforms current regret approximations and, when combined with DEGen, consistently outperforms existing methods, especially as the size of the environment grows. We have made all our code available at https://github.com/HarryMJMead/Dynamic-Environment-Generation-for-UED.
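
For concreteness, below is a minimal sketch contrasting a standard advantage-based regret approximation (positive value loss, as used in PLR-style methods) with a maximised-negative-advantage style score. The exact definition of MNA is given in the paper; the `mna_score` form here, and both function names, are illustrative assumptions only.

```python
import numpy as np

def positive_value_loss(advantages: np.ndarray) -> float:
    """PLR-style regret score: mean of the clipped-positive per-step advantages."""
    return float(np.mean(np.maximum(advantages, 0.0)))

def mna_score(advantages: np.ndarray) -> float:
    """Assumed MNA-style score: the largest negative advantage over a rollout.

    This reads "Maximised Negative Advantage" as max_t(-A_t) clipped at zero;
    the paper's precise definition may differ.
    """
    return float(np.maximum(-advantages, 0.0).max())

# Toy rollout of per-step GAE advantage estimates.
adv = np.array([0.3, -0.8, 0.1, -0.2])
print(positive_value_loss(adv))  # 0.1 -> scores levels where the agent under-predicts value
print(mna_score(adv))            # 0.8 -> scores levels with a single large failure step
```

Under this reading, the two scores differ in what they surface: positive value loss averages optimistic errors across the rollout, while a maximised-negative-advantage score isolates the worst single step, which may better flag levels containing a sharp, localised challenge.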

Subject: NeurIPS.2025 - Poster