Language models (LMs) require effective episodic grounding (the ability to learn from and apply past experiences) to perform well on physical planning tasks. Current approaches struggle to scale and to integrate episodic memory, a limitation that is especially pronounced for medium-sized LMs (7B parameters), whereas larger LMs (70B-405B parameters) offer untapped potential through their hierarchical representations and extensive pre-trained knowledge. To unlock this potential for grounding, we present a scalable weak-to-strong episodic learning framework that efficiently transfers episodic behaviors from smaller to larger LMs. The framework combines Monte Carlo tree search for structured experience collection with a novel distillation method that preserves core LM capabilities while incorporating episodic memory, enabling larger LMs to leverage their inherent advantages for improved physical planning. Experiments show that our approach outperforms top proprietary LMs by 3.45% across diverse planning and question-answering tasks. Layer-wise probing reveals systematic improvements in task alignment, particularly in later LM layers, and our method generalizes stably to unseen scenarios as planning steps increase, whereas baselines deteriorate sharply beyond a complexity threshold of four planning steps.
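
To make the weak-to-strong transfer concrete, the sketch below shows one possible form of the distillation step in PyTorch, assuming Hugging Face-style causal LM interfaces: the strong student is fine-tuned on episodic trajectories collected by the weaker teacher, while a KL penalty against a frozen copy of the base model discourages drift from pre-trained capabilities. The function name `weak_to_strong_step`, the `kl_weight` coefficient, and the batch layout are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (assumptions, not the paper's exact method) of a
# weak-to-strong distillation objective: imitate teacher-collected episodic
# trajectories while staying close to the frozen base model's distribution.
import torch
import torch.nn.functional as F


def weak_to_strong_step(student, frozen_base, episodic_batch, kl_weight=0.1):
    """One training step on a batch of teacher-collected episodic trajectories.

    episodic_batch: dict with
      input_ids      - (B, T) token ids of state/action trajectories
      attention_mask - (B, T) attention mask
      labels         - (B, T) targets, -100 on positions to ignore
    """
    out = student(
        input_ids=episodic_batch["input_ids"],
        attention_mask=episodic_batch["attention_mask"],
        labels=episodic_batch["labels"],
    )
    ce_loss = out.loss  # imitate the weak teacher's episodic behavior

    with torch.no_grad():
        base_logits = frozen_base(
            input_ids=episodic_batch["input_ids"],
            attention_mask=episodic_batch["attention_mask"],
        ).logits

    # KL(student || base) keeps the strong model near its pre-trained
    # distribution, preserving general capabilities while it absorbs
    # episodic memory from the trajectories.
    kl_loss = F.kl_div(
        F.log_softmax(out.logits, dim=-1),
        F.softmax(base_logits, dim=-1),
        reduction="batchmean",
    )
    return ce_loss + kl_weight * kl_loss
```

In this reading, the balance between the cross-entropy term and the KL anchor is what lets the larger LM incorporate episodic memory without sacrificing its existing knowledge; the actual objective used in the paper may weight or structure these terms differently.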