2025.emnlp-main.1170@ACL

Total: 1

#1 Language Models Can be Efficiently Steered via Minimal Embedding Layer Transformations

Authors: Diogo Tavares, David Semedo, Alexander Rudnicky, Joao Magalhaes

Large Language Models (LLMs) are increasingly costly to fine-tune due to their size, with embedding layers alone accounting for up to 20% of model parameters. While Parameter-Efficient Fine-Tuning (PEFT) methods exist, they largely overlook the embedding layer. In this paper, we introduce TinyTE, a novel PEFT approach that steers model behavior via minimal translational transformations in the embedding space. TinyTE modifies input embeddings without altering hidden layers, achieving competitive performance while requiring approximately 0.0001% of the parameters needed for full fine-tuning. Experiments across architectures provide a new lens for understanding the relationship between input representations and model behavior, revealing models to be more flexible at their foundation than previously thought.
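The abstract describes the mechanism only at a high level: a learned translational shift is applied to the input embeddings while the hidden layers stay frozen. Below is a minimal PyTorch sketch of that idea, assuming a single shared translation vector of dimension d_model; the class name EmbeddingShift, the shift parameter, and the Hugging Face usage lines are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a "translate the input embeddings" adapter,
# based on the abstract's description of steering via a minimal
# translational transformation. Names are hypothetical.
import torch
import torch.nn as nn


class EmbeddingShift(nn.Module):
    """Adds one learned translation vector to every input token embedding.

    Only `shift` is trainable (embedding_dim parameters in total), which is
    in the spirit of the abstract's ~0.0001% of full fine-tuning parameters.
    """

    def __init__(self, embedding: nn.Embedding):
        super().__init__()
        self.embedding = embedding                       # original table, kept frozen
        self.embedding.weight.requires_grad_(False)
        self.shift = nn.Parameter(torch.zeros(embedding.embedding_dim))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Same learned offset for every token; downstream hidden layers
        # are left untouched.
        return self.embedding(input_ids) + self.shift


# Usage sketch with a Hugging Face causal LM (assumed workflow, not from the paper):
# model = AutoModelForCausalLM.from_pretrained("gpt2")
# for p in model.parameters():
#     p.requires_grad_(False)
# model.set_input_embeddings(EmbeddingShift(model.get_input_embeddings()))
# ...then optimize only the `shift` vector on the steering objective.
```

Wrapping the existing embedding layer, rather than copying its weights, keeps the base model intact and makes the added parameter count exactly the embedding dimension.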

Subject: EMNLP.2025 - Main