
#1 Exploiting Inferential Structure in Neural Processes

Authors: Dharmesh Tailor, Mohammad Emtiyaz Khan, Eric Nalisnick

Neural Processes (NPs) are appealing due to their ability to perform fast adaptation based on a context set. This set is encoded by a latent variable, which is often assumed to follow a simple distribution. However, in real-world settings, the context set may be drawn from richer distributions having multiple modes, heavy tails, etc. In this work, we provide a framework that allows NPs’ latent variable to be given a rich prior defined by a graphical model. These distributional assumptions directly translate into an appropriate aggregation strategy for the context set. Moreover, we describe a message-passing procedure that still allows for end-to-end optimization with stochastic gradients. We demonstrate the generality of our framework by using mixture and Student-t assumptions that yield improvements in function modelling and test-time robustness.
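To illustrate the core idea that a prior assumption changes the aggregation step, here is a minimal sketch, not the paper's actual algorithm: under a Gaussian latent assumption, per-point context encodings are typically mean-pooled, whereas a Student-t assumption (viewed as a Gaussian scale mixture) leads to an EM-style iteratively reweighted mean that down-weights outlying encodings. The function names, the fixed degrees of freedom `nu`, the identity-scale simplification, and the toy data are all assumptions made for illustration.

```python
import numpy as np

def mean_aggregate(r):
    """Standard NP aggregation: average the per-point context encodings."""
    return r.mean(axis=0)

def student_t_aggregate(r, nu=4.0, n_iter=10):
    """Illustrative robust aggregation under a Student-t assumption.

    Treats the t-distribution as a Gaussian scale mixture and runs
    EM-style updates for the location: outlying encodings get smaller
    weights, so they influence the latent summary less. `nu` and
    `n_iter` are arbitrary illustrative choices, not the paper's.
    """
    n, d = r.shape
    mu = r.mean(axis=0)                       # initialise at the Gaussian answer
    for _ in range(n_iter):
        sq = ((r - mu) ** 2).sum(axis=1)      # squared distance to current mean
        w = (nu + d) / (nu + sq)              # E-step: per-point weights
        mu = (w[:, None] * r).sum(axis=0) / w.sum()  # M-step: reweighted mean
    return mu

# Toy usage: encodings of nine context points plus one corrupted outlier.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.1, size=(10, 3))
r[-1] += 5.0                                  # heavy-tailed contamination
print("mean pooling      :", mean_aggregate(r))
print("student-t pooling :", student_t_aggregate(r))
```

Running this, the Student-t aggregate stays near the bulk of the encodings while the plain mean is pulled toward the contaminated point, which is the kind of test-time robustness the heavy-tailed assumption is meant to buy.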

Subject: UAI.2023 - Accept