The process of learning good policies from expert demonstrations, known as imitation learning (IL), has proven effective in many applications. Adversarial imitation learning (AIL), a prominent family of IL methods, is particularly promising, but its theoretical foundation in the presence of unknown transitions has yet to be fully developed. This paper investigates the theoretical underpinnings of AIL in this setting, where the primary challenge is the stochastic and uncertain nature of environment transitions. We study the expert sample complexity and interaction complexity required to recover good policies, both of which are of great practical interest. To this end, we establish a framework connecting reward-free exploration and AIL, and propose an algorithm, MB-TAIL, that achieves the minimax-optimal expert sample complexity of $\widetilde{\mathcal{O}} (H^{3/2} |\mathcal{S}|/\varepsilon)$ and interaction complexity of $\widetilde{\mathcal{O}} (H^{3} |\mathcal{S}|^2 |\mathcal{A}|/\varepsilon^2)$. Here, $H$ is the planning horizon, $|\mathcal{S}|$ is the state space size, $|\mathcal{A}|$ is the action space size, and $\varepsilon$ is the desired imitation gap. MB-TAIL is the first algorithm to achieve this expert sample complexity in the unknown transition setting, and it improves upon the interaction complexity of the best-known algorithm, OAL, by a factor of $\mathcal{O} (H)$. Finally, we demonstrate the generalization ability of MB-TAIL by extending it to the function approximation setting, where we prove that it achieves expert sample and interaction complexity independent of $|\mathcal{S}|$.