2025.emnlp-main.1612@ACL

#1 3R: Enhancing Sentence Representation Learning via Redundant Representation Reduction

Authors: Longxuan Ma, Xiao Wu, Yuxin Huang, Shengxiang Gao, Zhengtao Yu

Sentence representation learning (SRL) aims to learn sentence embeddings that capture the semantic content of sentences. In recent years, fine-tuning methods based on pre-trained models and contrastive learning frameworks have significantly improved the quality of sentence representations. However, within the semantic space of SRL models, both word embeddings and the sentence representations derived from them carry substantial redundant information, which can degrade the precision of sentence representations. Existing approaches predominantly optimize training strategies to alleviate this redundancy and lack fine-grained guidance on reducing redundant representations. This paper proposes a novel approach that dynamically identifies and reduces redundant information from a dimensional perspective, training the SRL model to redistribute semantics across dimensions and thereby yield better sentence representations. Extensive experiments on seven semantic textual similarity benchmarks demonstrate the effectiveness and generality of the proposed method. A comprehensive analysis of the experimental results is provided, and the code/data will be released.
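As a rough illustration of dimension-level redundancy reduction: the abstract does not specify the exact 3R procedure, so the penalty below, its name, and the 0.01 loss weight are all assumptions, not the authors' method. The sketch penalizes correlation between embedding dimensions across a batch of sentence embeddings, which nudges a model to spread semantics over different dimensions rather than encode the same information redundantly.

```python
import torch

def redundancy_penalty(embeddings: torch.Tensor) -> torch.Tensor:
    """Illustrative dimension-redundancy penalty (hypothetical, not the paper's 3R method).

    Treats high correlation between embedding dimensions as redundancy and
    penalizes the off-diagonal entries of the cross-dimension correlation
    matrix, encouraging semantics to redistribute across dimensions.

    Args:
        embeddings: (batch_size, dim) batch of sentence embeddings.
    Returns:
        Scalar penalty to add to the training loss.
    """
    # Standardize each dimension across the batch.
    z = (embeddings - embeddings.mean(dim=0)) / (embeddings.std(dim=0) + 1e-6)
    n = z.shape[0]
    # (dim, dim) correlation matrix between embedding dimensions.
    corr = (z.T @ z) / (n - 1)
    # Off-diagonal mass = information duplicated across dimensions.
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum() / corr.numel()

# Hypothetical usage alongside a standard contrastive objective:
# total_loss = contrastive_loss + 0.01 * redundancy_penalty(sent_embs)
```

A decorrelation term of this kind is one generic way to discourage redundant dimensions; the paper's contribution, per the abstract, is a dynamic, fine-grained identification and reduction mechanism rather than a fixed penalty like this one.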

Subject: EMNLP.2025 - Main