H4UMsoQrdI@OpenReview

Total: 1

#1 Benign Overfitting in Token Selection of Attention Mechanism

Authors: Keitaro Sakamoto, Issei Sato

The attention mechanism is a fundamental component of the transformer model and plays a significant role in its success. However, the theoretical understanding of how attention learns to select tokens is still an emerging area of research. In this work, we study the training dynamics and generalization ability of the attention mechanism in classification problems with label noise. We show that, under a characterization based on the signal-to-noise ratio (SNR), the token selection of the attention mechanism achieves "benign overfitting", i.e., it maintains high generalization performance despite fitting label noise. Our work also demonstrates an interesting delayed acquisition of generalization after an initial phase of overfitting. Finally, we provide experiments that support our theoretical analysis using both synthetic and real-world datasets.
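As a minimal toy sketch (not the paper's actual model or analysis), the "token selection" behavior described in the abstract can be illustrated with a single softmax attention step: when training has aligned the query with a high-SNR signal direction, the softmax concentrates nearly all attention mass on the signal token, even in the presence of noise tokens. All names and values below (`snr`, dimensions, the query scaling) are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over attention scores.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

d = 16                              # embedding dimension (illustrative)
signal = np.ones(d) / np.sqrt(d)    # unit-norm signal direction
snr = 4.0                           # illustrative signal-to-noise ratio

# A sequence of one signal token plus three noise tokens.
noise = rng.normal(scale=1.0 / np.sqrt(d), size=(3, d))
tokens = np.vstack([snr * signal, noise])

# A query that has aligned with the signal direction,
# as if produced by training the attention parameters.
query = 5.0 * signal

# Attention scores and the resulting token-selection weights:
# the signal token receives almost all of the attention mass.
weights = softmax(tokens @ query)
```

With a high SNR the score of the signal token dominates the noise tokens' scores, so `weights[0]` is close to 1; shrinking `snr` toward zero spreads the attention mass and degrades token selection.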

Subject: ICML.2025 - Poster