2025.findings-emnlp.717@ACL


Do We Really Need All Those Dimensions? An Intrinsic Evaluation Framework for Compressed Embeddings

Authors: Nathan Inkiriwang, Necva Bölücü, Garth Tarr, Maciej Rybinski

High-dimensional text embeddings are foundational to modern NLP but costly to store and use. While embedding compression addresses these challenges, selecting the best compression method remains difficult. Existing evaluation methods for compressed embeddings are either expensive or too simplistic. We introduce a comprehensive intrinsic evaluation framework featuring a suite of task-agnostic metrics that together provide a reliable proxy for downstream performance. A key contribution is EOS_k, a novel spectral fidelity measure specifically designed to be robust to embedding anisotropy. Through extensive experiments on diverse embeddings across four downstream tasks, we demonstrate that our intrinsic metrics reliably predict extrinsic performance and reveal how different embedding architectures depend on distinct geometric properties. Our framework provides a practical, efficient, and interpretable alternative to standard evaluations for compressed embeddings.
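To make the idea of a "spectral fidelity measure" concrete, the sketch below compares the top-k singular-value spectra of an embedding matrix before and after compression. This is a generic illustration only, not the paper's EOS_k definition: the function name `spectral_fidelity`, the normalization step, and the overlap score are all assumptions introduced here for exposition.

```python
import numpy as np

def spectral_fidelity(X, X_c, k=10):
    """Illustrative top-k spectral overlap between original embeddings X
    and compressed embeddings X_c. NOT the paper's EOS_k; a generic sketch.
    Returns a score in [0, 1], where 1 means identical normalized spectra."""
    # Top-k singular values of the mean-centered embedding matrices
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)[:k]
    s_c = np.linalg.svd(X_c - X_c.mean(axis=0), compute_uv=False)[:k]
    # Normalize each spectrum to sum to 1 so the score reflects spectral
    # *shape* rather than absolute scale, reducing the influence of a single
    # dominant (anisotropic) direction
    p, q = s / s.sum(), s_c / s_c.sum()
    # 1 minus total variation distance between the normalized spectra
    return 1.0 - 0.5 * np.abs(p - q).sum()

# Toy usage: compare random embeddings with a crude truncation "compression"
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
X_trunc = X[:, :64]  # naive dimension truncation as a stand-in compressor
score = spectral_fidelity(X, X_trunc, k=10)
```

A measure of this shape is cheap to compute (two truncated SVDs) and task-agnostic, which matches the paper's stated goal of intrinsic metrics that proxy downstream performance without running full extrinsic evaluations.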

Subject: EMNLP.2025 - Findings