hwEhsFLPh1@OpenReview

Total: 1

#1 Memory-Integrated Reconfigurable Adapters: A Unified Framework for Settings with Multiple Tasks

Authors: Susmit Agrawal, Krishn Vishwas Kher, Saksham Mittal, Swarnim Maheshwari, Vineeth N. Balasubramanian

Organisms constantly pivot between tasks such as evading predators, foraging, traversing rugged terrain, and socializing, often within milliseconds. Remarkably, they preserve knowledge of once-learned environments without catastrophic forgetting, a phenomenon neuroscientists hypothesize is due to a single neural circuit dynamically overlaid by neuromodulatory agents such as dopamine and acetylcholine. In parallel, deep learning research addresses analogous challenges via domain generalization ($\textbf{DG}$) and continual learning ($\textbf{CL}$), yet these methods remain siloed despite the brain’s ability to perform them seamlessly. In particular, prior work has not explored architectures involving associative memories ($\textbf{AM}$s), an integral part of biological systems, to jointly address these tasks. We propose Memory-Integrated Reconfigurable Adapters ($\textbf{MIRA}$), a unified framework that integrates Hopfield-style associative memory modules atop a shared backbone. These memory modules store adapter-weight updates as values and retrieve them via learned keys. The keys are learned post-hoc to index and retrieve an affine combination of stored adapter updates for any given task or domain on a per-sample basis. By varying only the task-specific objectives, we demonstrate that $\textbf{MIRA}$ seamlessly accommodates domain shifts and sequential task exposures under one roof. Empirical evaluations on standard benchmarks confirm that our $\textbf{AM}$-augmented architecture significantly enhances adaptability and retention: in $\textbf{DG}$, $\textbf{MIRA}$ achieves state-of-the-art out-of-distribution accuracy, and in incremental-learning settings it outperforms architectures explicitly designed to handle catastrophic forgetting using generic $\textbf{CL}$ algorithms. Extensive ablation studies validate the necessity of both associative memory storage and post-hoc key learning for robust interpolated retrieval of adapters.
By unifying adapter-based modulation with biologically inspired associative memory, $\textbf{MIRA}$ delivers rapid task switching and enduring knowledge retention in a single extensible architecture, charting a path toward more versatile and memory-augmented AI systems.
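To make the key-value retrieval concrete, here is a minimal NumPy sketch of the mechanism the abstract describes: learned keys index stored adapter-weight updates, and a per-sample similarity score blends them into a single combined update. All names, dimensions, and the softmax weighting (a convex special case of an affine combination) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper:
d_feat, n_slots, d_adapter = 16, 4, 8

# Associative memory: keys (learned post-hoc in MIRA) index stored
# adapter-weight updates, which play the role of values.
keys = rng.normal(size=(n_slots, d_feat))
values = rng.normal(size=(n_slots, d_adapter))

def retrieve_adapter_update(x, beta=1.0):
    """Per-sample retrieval: softmax over key similarities yields
    weights summing to 1, i.e. a convex combination of stored
    adapter updates (a special case of an affine combination)."""
    scores = beta * (keys @ x)          # similarity of sample to each key
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ values                   # blended adapter-weight update

x = rng.normal(size=d_feat)             # features of one input sample
delta = retrieve_adapter_update(x)      # shape (d_adapter,)
```

The retrieved `delta` would then be applied on top of the shared backbone's adapter parameters, so different samples, tasks, or domains dynamically reconfigure the same network.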

Subject: NeurIPS.2025 - Poster