W0GrWqqTJo@OpenReview


#1 Extractive Structures Learned in Pretraining Enable Generalization on Finetuned Facts

Authors: Jiahai Feng, Stuart Russell, Jacob Steinhardt

Pretrained language models (LMs) can generalize to implications of facts that they are finetuned on. For example, if finetuned on "John Doe lives in Tokyo," LMs correctly answer "What language do the people in John Doe's city speak?" with "Japanese". However, little is known about the mechanisms that enable this generalization or how they are learned during pretraining.

We introduce extractive structures as a framework for describing how components in LMs (e.g., MLPs or attention heads) coordinate to enable this generalization. The structures consist of informative components that store training facts as weight changes, and of upstream and downstream extractive components that query and process the stored information to produce the correct implication. We hypothesize that extractive structures are learned during pretraining when the model encounters implications of previously known facts. This yields two predictions: a data-ordering effect, where extractive structures can be learned only if facts precede their implications, and a weight-grafting effect, where extractive structures can be grafted to predict counterfactual implications. We empirically demonstrate both effects in the OLMo-7B, Llama 3-8B, Gemma 2-9B, and Qwen 2-7B models.

Of independent interest, our results also indicate that fact learning can occur at both early and late layers, and that these two sites lead to different forms of generalization.
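The weight-grafting prediction can be illustrated with a minimal toy sketch: if finetuning on a (counterfactual) fact stores it as a weight change in specific informative components, then copying just those components' deltas onto the base model should transfer the fact. The dict-of-scalars "model" and the component names below are illustrative assumptions, not the paper's implementation.

```python
def finetune(base, deltas):
    """Return a finetuned copy of `base` with per-component weight deltas applied."""
    model = dict(base)
    for comp, delta in deltas.items():
        model[comp] = model[comp] + delta
    return model

def graft(base, finetuned, components):
    """Copy the weight deltas of the selected components onto the base model,
    leaving all other components at their base values."""
    grafted = dict(base)
    for comp in components:
        grafted[comp] = base[comp] + (finetuned[comp] - base[comp])
    return grafted

# Toy "model": one weight per component (hypothetical component names).
base = {"mlp.layer3": 1.0, "attn.head5": 2.0, "mlp.layer20": 3.0}

# Suppose finetuning on a counterfactual fact changes only an early-layer MLP,
# the hypothesized informative component.
ft = finetune(base, {"mlp.layer3": 0.5})

# Grafting that component's delta transfers the stored fact; everything else
# stays at the base values.
grafted = graft(base, ft, ["mlp.layer3"])
```

In the actual experiments this operation would act on full weight tensors (e.g., entries of a model `state_dict`) rather than scalars, but the bookkeeping is the same: grafting is defined per component, so it isolates which components carry the stored fact.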

Subject: ICML.2025 - Poster