o0HgWRmyY1@OpenReview

Total: 1

#1 GraSS: Scalable Data Attribution with Gradient Sparsification and Sparse Projection

Authors: Pingbang Hu, Joseph Melkonian, Weijing Tang, Han Zhao, Jiaqi W. Ma

Gradient-based data attribution methods, such as influence functions, are critical for understanding the impact of individual training samples without requiring repeated model retraining. However, their scalability is often limited by the high computational and memory costs associated with per-sample gradient computation. In this work, we propose **GraSS**, a novel gradient compression algorithm, and its variant **FactGraSS**, specialized for linear layers, which explicitly leverage the inherent sparsity of per-sample gradients to achieve sub-linear space and time complexity. Extensive experiments demonstrate the effectiveness of our approach, achieving substantial speedups while preserving data influence fidelity. In particular, **FactGraSS** achieves up to 165% faster throughput on billion-scale models compared to previous state-of-the-art baselines. Our code is publicly available at https://github.com/TRAIS-Lab/GraSS.
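To make the two ingredients in the abstract concrete, the sketch below illustrates (not the paper's actual GraSS/FactGraSS algorithm) how a per-sample gradient vector might be compressed by first sparsifying it (keeping the largest-magnitude entries) and then applying a sparse random projection in the style of Achlioptas. All function names, dimensions, and the choice of projection here are illustrative assumptions, not taken from the paper or its codebase.

```python
import numpy as np


def sparsify_topm(g: np.ndarray, m: int) -> np.ndarray:
    """Illustrative sparsification: keep the m largest-magnitude
    entries of the gradient g and zero out the rest."""
    idx = np.argpartition(np.abs(g), -m)[-m:]
    out = np.zeros_like(g)
    out[idx] = g[idx]
    return out


def sparse_project(g: np.ndarray, k: int, s: int = 3, seed: int = 0) -> np.ndarray:
    """Illustrative sparse random projection to dimension k.
    Entries of the projection matrix are drawn from {+1, 0, -1}
    with P(+1) = P(-1) = 1/(2s), so most entries are zero."""
    rng = np.random.default_rng(seed)
    d = g.shape[0]
    P = rng.choice(
        [1.0, 0.0, -1.0],
        size=(k, d),
        p=[1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)],
    )
    # Scale so that inner products are preserved in expectation.
    return np.sqrt(s / k) * (P @ g)


# Toy usage: compress a 10,000-dim "per-sample gradient" to 64 dims.
d, k, m = 10_000, 64, 100
g = np.random.default_rng(1).standard_normal(d)
z = sparse_project(sparsify_topm(g, m), k)
```

Because the sparsified gradient has only `m` nonzeros, the matrix-vector product can in principle touch only `m` columns of the projection, which is the kind of structure a sparsity-aware method can exploit for sub-linear cost; the dense `P @ g` above is kept purely for clarity.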

Subject: NeurIPS.2025 - Poster