2025.findings-acl.275@ACL

#1 TriEmbed: Bridge the Gap between Text and Token Indices with Embedding Reparameterization

Authors: Baizhou Huang, Xiaojun Wan

The current paradigm of language modeling is a two-stage pipeline that first transforms raw text into token indices, over which the distribution is then estimated. This tokenization step inherently discards the linguistic relations between tokens, creating a fundamental gap. To address this, we propose TriEmbed, an embedding reparameterization method that incorporates the morphological relationships inherent in subword tokenization algorithms. Specifically, by organizing the vocabulary into a Trie structure, we can encode these relations and reparameterize the embeddings, facilitating the recovery of other linguistic relationships during training. Empirical results across various settings demonstrate that TriEmbed outperforms conventional embeddings in terms of scaling behavior while yielding more linguistically informative token embeddings.
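To illustrate how a vocabulary trie could drive such a reparameterization, below is a minimal sketch. It assumes (the abstract does not specify the exact scheme) that each token's embedding is the sum of learnable vectors attached to the nodes on its character path in the trie, so tokens that share morphological prefixes share parameters. The `TrieEmbedding` class and its details are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: trie-based embedding reparameterization in which a
# token's embedding is the sum of learnable vectors along its prefix path.
import torch
import torch.nn as nn


class TrieEmbedding(nn.Module):
    def __init__(self, vocab, dim):
        super().__init__()
        # Build a character trie over the vocabulary; assign one node id per
        # distinct prefix (the empty prefix / root gets id 0).
        self.node_paths = []          # one list of node ids per token
        children = {(): 0}            # prefix tuple -> node id
        next_id = 1
        for token in vocab:
            path, prefix = [], ()
            for ch in token:
                prefix = prefix + (ch,)
                if prefix not in children:
                    children[prefix] = next_id
                    next_id += 1
                path.append(children[prefix])
            self.node_paths.append(path)
        # One learnable vector per trie node.
        self.node_vectors = nn.Embedding(next_id, dim)

    def forward(self, token_ids):
        # Embedding of a token = sum of the vectors on its trie path, so
        # tokens with shared prefixes share part of their parameterization.
        embs = []
        for idx in token_ids.reshape(-1).tolist():
            path = torch.tensor(self.node_paths[idx])
            embs.append(self.node_vectors(path).sum(dim=0))
        return torch.stack(embs).reshape(*token_ids.shape, -1)


# Usage: "un", "unhappy", and "unkind" share the node vectors for the "un" prefix.
vocab = ["un", "unhappy", "unkind", "happy", "kind"]
emb = TrieEmbedding(vocab, dim=16)
out = emb(torch.tensor([[1, 2], [3, 4]]))
print(out.shape)  # torch.Size([2, 2, 16])
```

Under this assumed scheme, the morphological relations encoded by the subword tokenizer are reflected directly in shared parameters, which is the kind of structure the abstract describes recovering during training.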

Subject: ACL.2025 - Findings