Tf9eoTIIjh@OpenReview


#1 Preserving LLM Capabilities through Calibration Data Curation: From Analysis to Optimization

Authors: Bowei He, Lihao Yin, Huiling Zhen, Shuqi LIU, Han Wu, Xiaokun Zhang, Mingxuan Yuan, Chen Ma

Post-training compression is a widely employed approach to scaling down large language models (LLMs) and facilitating efficient inference. Across the various proposed compression methods, including pruning and quantization, calibration data plays a vital role by informing weight importance and activation dynamic ranges. However, how calibration data impacts LLM capabilities after compression remains underexplored. The few existing works that recognize the significance of this question investigate only language modeling or commonsense reasoning degradation, and from limited angles such as data sources or sample amounts. More systematic research is still needed to examine the impact on different LLM capabilities in terms of the compositional properties and domain correspondence of calibration data. In this work, we aim to bridge this gap and further analyze the underlying influencing mechanisms from the activation pattern perspective. In particular, we explore calibration data's impact on high-level complex reasoning capabilities, such as math problem solving and code generation. Delving into the underlying mechanism, we find that representativeness and diversity in activation space more fundamentally determine the quality of calibration data. Finally, we propose a calibration data curation framework based on these observations and analyses, enhancing how well existing post-training compression methods preserve critical LLM capabilities. Our code is provided at [Link](https://github.com/BokwaiHo/COLA.git).
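The abstract describes the curation framework only at a conceptual level. As one possible reading of "representativeness and diversity in activation space", the minimal sketch below greedily selects calibration samples whose activations are both close to the pool's activation centroid (representativeness) and far from already-selected samples (diversity). The function name `select_calibration_set`, the cosine/Euclidean scoring rule, and the `alpha` weighting are illustrative assumptions, not the authors' COLA implementation; see the linked repository for the actual method.

```python
# Hedged sketch: greedy calibration-sample selection that trades off
# representativeness and diversity in activation space. Illustrative only.
import numpy as np


def select_calibration_set(acts: np.ndarray, k: int, alpha: float = 0.5) -> list[int]:
    """Pick k sample indices from `acts` (n_samples x d activation features).

    Score = alpha * representativeness + (1 - alpha) * diversity, where
    representativeness is cosine similarity to the pool's mean activation and
    diversity is the distance to the nearest already-selected sample.
    """
    n = acts.shape[0]
    # Normalize rows so cosine similarity reduces to a dot product.
    acts = acts / (np.linalg.norm(acts, axis=1, keepdims=True) + 1e-8)
    center = acts.mean(axis=0)
    center /= np.linalg.norm(center) + 1e-8
    repr_score = acts @ center                    # similarity to the pool center
    min_dist = np.full(n, np.inf)                 # distance to nearest selected sample
    selected: list[int] = []
    for _ in range(k):
        # No diversity signal before the first pick; use a neutral constant.
        div_score = min_dist if selected else np.ones(n)
        score = alpha * repr_score + (1 - alpha) * div_score
        score[selected] = -np.inf                 # never pick the same sample twice
        idx = int(np.argmax(score))
        selected.append(idx)
        # Update each candidate's distance to its nearest selected sample.
        d = np.linalg.norm(acts - acts[idx], axis=1)
        min_dist = np.minimum(min_dist, d)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_acts = rng.normal(size=(512, 64))        # stand-in for hidden-layer activations
    print(select_calibration_set(fake_acts, k=8))
```

In practice the activation features would come from forward passes of the uncompressed model over a candidate text pool, and the selected subset would then be fed to the pruning or quantization routine as its calibration set.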

Subject: NeurIPS.2025 - Poster