
When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration

Authors: Quan Shi, Carlos E Jimenez, Shunyu Yao, Nick Haber, Diyi Yang, Karthik R Narasimhan

As large language models (LLMs) increasingly serve as close collaborators for humans, it is crucial that they express their reasoning in ways that humans can understand and learn from. However, this capability remains poorly understood and under-evaluated. To address this, we introduce a conceptual framework for Human-AI knowledge transfer and conduct the first large-scale user study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an LLM on problem-solving strategies, then independently implement solutions, isolating the influence of model reasoning on human understanding. Our findings reveal that while model benchmark performance correlates with collaborative outcomes, this relationship is notably inconsistent, with significant outliers, highlighting that knowledge transfer is a distinct capability requiring dedicated optimization. Our analysis uncovers behavioral and strategic factors that mediate successful knowledge transfer, and we release our code, dataset, and evaluation framework to support future work on communicatively aligned models.

Subject: NeurIPS.2025 - Poster