The World Is Bigger: A Computationally-Embedded Perspective on the Big World Hypothesis

Authors: Alex Lewandowski, Aditya A. Ramesh, Edan Meyer, Dale Schuurmans, Marlos C. Machado

Continual learning is often motivated by the idea, known as the big world hypothesis, that the "world is bigger" than the agent. Recent problem formulations capture this idea by explicitly constraining an agent relative to the environment. These constraints lead to solutions in which the agent continually adapts to make the best use of its limited capacity, rather than converging to a fixed solution. However, explicit constraints can be ad hoc, difficult to incorporate, and can limit the effectiveness of scaling up the agent's capacity. In this paper, we characterize a problem setting in which an agent, regardless of its capacity, is implicitly constrained by being embedded in the environment. In particular, we introduce a computationally-embedded perspective that represents an embedded agent as an automaton simulated within a universal (formal) computer. We prove that such an automaton is implicitly constrained and that it is equivalent to an agent that interacts with a partially observable Markov decision process over a countably infinite state space. We then propose an objective for this setting, which we call interactivity, that measures an agent's ability to continually adapt its behaviour and to continually learn new predictions. We develop a reinforcement learning algorithm for maximizing interactivity and a synthetic benchmark for experimentation on continual learning. Our results indicate that deep nonlinear networks struggle to sustain interactivity, whereas deep linear networks can achieve higher interactivity as capacity increases.
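The abstract does not give the formal definition of interactivity, but a toy sketch can make the setting concrete: a capacity-limited agent receiving partial observations of an environment with a countably infinite state space, scored by how much its predictions keep changing. The Python sketch below is a hypothetical illustration, not the paper's construction; the names `environment_step`, `observe`, `TabularAgent`, and `interactivity_proxy` are all assumptions introduced here for exposition.

```python
from collections import defaultdict

def environment_step(state: int, action: int) -> int:
    """The environment's state is an unbounded counter: a countably
    infinite state space that no bounded agent can represent exactly."""
    return state + 1 + action

def observe(state: int, num_obs: int = 8) -> int:
    """Partial observability: the agent sees only the state modulo num_obs,
    so infinitely many environment states alias to each observation."""
    return state % num_obs

class TabularAgent:
    """A capacity-limited agent: a finite table of predictions over observations."""

    def __init__(self, step_size: float = 0.1):
        self.predictions = defaultdict(float)
        self.step_size = step_size

    def act(self, obs: int) -> int:
        # Toy behaviour: act on whether the current prediction is high.
        return 1 if self.predictions[obs] > 0.5 else 0

    def update(self, obs: int, target: float) -> float:
        """Move the prediction toward the observed target; return |error|."""
        delta = target - self.predictions[obs]
        self.predictions[obs] += self.step_size * delta
        return abs(delta)

def interactivity_proxy(errors: list, window: int = 100) -> float:
    """Hypothetical stand-in for the paper's interactivity objective:
    the average recent prediction error. A value bounded away from zero
    suggests the agent is still adapting rather than having converged."""
    recent = errors[-window:]
    return sum(recent) / max(len(recent), 1)

if __name__ == "__main__":
    agent, state, errors = TabularAgent(), 0, []
    for _ in range(1000):
        obs = observe(state)
        action = agent.act(obs)
        state = environment_step(state, action)
        target = 1.0 if observe(state) == 0 else 0.0  # toy prediction target
        errors.append(agent.update(obs, target))
    print(f"interactivity proxy after 1000 steps: {interactivity_proxy(errors):.4f}")
```

In this sketch the agent's own actions alter the stream of observations it must predict, so the error signal need not vanish; that self-induced nonstationarity is a crude analogue of the implicit constraint the paper derives for embedded agents. The paper's actual objective and reinforcement learning algorithm are more involved than this proxy.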

Subject: NeurIPS.2025 - Spotlight