c8AjdgdHnD@OpenReview

Total: 1

#1 DISCO: Disentangled Communication Steering for Large Language Models

Authors: Max Torop, Aria Masoomi, Masih Eskandar, Jennifer Dy

A variety of recent methods guide large language model outputs via the inference-time addition of *steering vectors* to residual-stream or attention-head representations. In contrast, we propose injecting steering vectors directly into the query and value representation spaces within attention heads. We provide evidence that a greater portion of these spaces exhibits high linear discriminability of concepts (a key property motivating the use of steering vectors) than attention-head outputs do. We analytically characterize the effect of our method, which we term *DISentangled COmmunication (DISCO) Steering*, on attention-head outputs. Our analysis reveals that DISCO disentangles a strong but underutilized baseline, steering attention-head inputs, which implicitly modifies queries and values in a rigid manner; DISCO's direct modulation of these components instead enables more granular control. We find that DISCO outperforms a number of steering-vector baselines across multiple datasets on LLaMA 3.1 8B and Gemma 2 9B, with steering efficacy scoring up to $19.1\%$ higher than the runner-up. Our results support the conclusion that the query and value spaces are powerful building blocks for steering-vector methods. Our code is publicly available at https://github.com/MaxTorop/DISCO.
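To make the idea concrete, below is a minimal sketch of steering the query and value spaces of a single attention head via PyTorch forward hooks. Everything here is illustrative rather than the authors' released implementation: the `ToyAttention` module, the `steering_hook` helper, the choice of head index, the `alpha` coefficients, and the random unit steering vectors are all assumptions (in practice, steering directions would be derived from data, e.g. contrastive examples of a concept).

```python
# Illustrative sketch only: module/helper names, dimensions, head index, and
# alpha values are assumptions, not the DISCO paper's actual implementation.
import math
import torch
import torch.nn as nn

class ToyAttention(nn.Module):
    """Multi-head self-attention with separate q/k/v projections, so each
    head's query and value representations can be steered via hooks."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        split = lambda z: z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        scores = (q @ k.transpose(-2, -1)) / math.sqrt(self.d_head)
        out = torch.softmax(scores, dim=-1) @ v        # (b, heads, t, d_head)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, -1))

def steering_hook(vec: torch.Tensor, head: int, d_head: int, alpha: float):
    """Add alpha * vec to the given head's slice of a q/v projection output."""
    def hook(_module, _inputs, output):
        out = output.clone()                           # (b, t, d_model)
        out[..., head * d_head:(head + 1) * d_head] += alpha * vec
        return out                                     # replaces module output
    return hook

torch.manual_seed(0)
attn = ToyAttention(d_model=64, n_heads=4)
x = torch.randn(1, 5, 64)

# Hypothetical concept directions; in practice these would be learned/estimated.
steer_q = torch.randn(attn.d_head); steer_q /= steer_q.norm()
steer_v = torch.randn(attn.d_head); steer_v /= steer_v.norm()

# Separate coefficients for queries and values: the granular control the
# abstract contrasts with rigidly steering the attention-head *input*.
hq = attn.q_proj.register_forward_hook(
    steering_hook(steer_q, head=2, d_head=attn.d_head, alpha=4.0))
hv = attn.v_proj.register_forward_hook(
    steering_hook(steer_v, head=2, d_head=attn.d_head, alpha=2.0))

steered = attn(x)
hq.remove(); hv.remove()
unsteered = attn(x)
print("max |delta| from q/v steering:", (steered - unsteered).abs().max().item())
```

Note the design point the abstract emphasizes: steering the head's *input* would shift queries and values through their (fixed) projection matrices in a coupled way, whereas hooking `q_proj` and `v_proj` separately allows independent directions and strengths for each component.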

Subject: NeurIPS.2025 - Poster