Brain-like Variational Inference

Authors: Hadi Vafaii, Dekel Galor, Jacob L. Yates

Inference in both brains and machines can be formalized by optimizing a shared objective: maximizing the evidence lower bound (ELBO) in machine learning, or minimizing variational free energy ($\mathcal{F}$) in neuroscience (ELBO = $-\mathcal{F}$). While this equivalence suggests a unifying framework, it leaves open how inference is implemented in neural systems. Here, we introduce FOND (*Free energy Online Natural-gradient Dynamics*), a framework that derives neural inference dynamics from three principles: (1) natural gradients on $\mathcal{F}$, (2) online belief updating, and (3) iterative refinement. We apply FOND to derive iP-VAE (*iterative Poisson variational autoencoder*), a recurrent spiking neural network that performs variational inference through membrane potential dynamics, replacing amortized encoders with iterative inference updates. Theoretically, iP-VAE yields several desirable features, such as emergent normalization via lateral competition and hardware-efficient integer spike-count representations. Empirically, iP-VAE outperforms both standard VAEs and Gaussian-based predictive coding models in sparsity, reconstruction, and biological plausibility, and scales to complex color image datasets such as CelebA. iP-VAE also exhibits strong generalization to out-of-distribution inputs, exceeding that of hybrid iterative-amortized VAEs. These results demonstrate how deriving inference algorithms from first principles can yield concrete architectures that are simultaneously biologically plausible and empirically effective.
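The ELBO/free-energy identity the abstract leans on can be stated in one line. For a generative model $p(x, z)$ and approximate posterior $q(z)$:

$$\mathcal{F}(q) = \mathbb{E}_{q(z)}\!\left[\log q(z) - \log p(x, z)\right] = -\mathrm{ELBO}(q),$$

so maximizing the ELBO and minimizing variational free energy are the same optimization.

The sketch below illustrates the flavor of inference the abstract describes: membrane-potential-like variables are refined by gradient descent on $\mathcal{F}$ for a Poisson latent model, with no amortized encoder. All names, the linear decoder, the step size, and the $\exp$ link are illustrative assumptions, not the paper's exact architecture or update rule. One relevant fact: for exponential families, an ordinary gradient step in the natural parameter ($u = \log r$) corresponds to a natural-gradient step in the mean parameter, which is one way to realize principle (1).

```python
# Minimal sketch: iterative variational inference for a Poisson latent model.
# Hypothetical setup; not the authors' iP-VAE implementation.
import torch

def free_energy(u, x, W, r0, sigma2=1.0):
    """Variational free energy F = -E_q[log p(x|z)] + KL(q || p), up to constants."""
    r = torch.exp(u)                     # posterior Poisson rates (u plays the role of a membrane potential)
    x_hat = W @ r                        # mean-field reconstruction: E_q[z] = r for Poisson latents
    recon = 0.5 * ((x - x_hat) ** 2).sum() / sigma2   # Gaussian -log p(x|z), up to a constant
    kl = (r * torch.log(r / r0) - r + r0).sum()       # KL(Pois(r) || Pois(r0)), closed form
    return recon + kl

torch.manual_seed(0)
D, K = 16, 8                             # observation dim, number of latent units
W = 0.1 * torch.randn(D, K)              # fixed (hypothetical) decoder weights
x = torch.randn(D)                       # one input frame
r0 = torch.full((K,), 0.5)               # Poisson prior rates
u = torch.log(r0).clone().requires_grad_(True)   # start the beliefs at the prior

opt = torch.optim.SGD([u], lr=0.05)
for step in range(200):                  # iterative refinement: no amortized encoder
    opt.zero_grad()
    F = free_energy(u, x, W, r0)
    F.backward()                         # gradient in natural params ~ natural gradient in rates
    opt.step()

spikes = torch.poisson(torch.exp(u.detach()))    # integer spike-count readout
print(float(free_energy(u, x, W, r0)), spikes.tolist())
```

Online belief updating (principle 2) would then amount to carrying $u$ forward as the initialization for the next input frame rather than resetting it to the prior.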

Subject: NeurIPS.2025 - Poster