Recent work in Bayesian Experiment Design (BED) has shown the value of using Deep Learning (DL) to obtain highly efficient adaptive experiment designs. In this paper, we argue that a central bottleneck of DL training for BED is belief explosion. Specifically, as an agent progresses deeper into an experiment, the effective number of realizable beliefs grows enormously, placing a significant sampling burden on offline training schemes that attempt to gather experience from all regions of belief space. We argue that choosing an appropriate inductive bias for actor/critic networks is a critical component in mitigating the effects of belief explosion, one that has so far been overlooked in the BED literature. We show that Graph Neural Networks are particularly well-suited to DL training for BED due to their domain permutation equivariance properties, yielding improvements in sample efficiency of multiple orders of magnitude over naive parameterizations.