2025.findings-emnlp.967@ACL


Layer Duplication in LLMs

Authors: Neo Eyal, Nachum Dershowitz, Kfir Bar

We investigate the effect of duplicating multi-head self-attention layers in large language models (LLMs) across a range of language tasks, with and without fine-tuning. The results demonstrate that duplicating the initial layers once or twice often yields a significant performance boost. Attention analysis reveals the underlying mechanisms that drive the improvement when layers are duplicated. The method enhances LLM capabilities with or without additional training or labeled data.
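The abstract describes duplicating the initial transformer layers of a pretrained LLM. As a rough illustration only, the sketch below shows one way such duplication could be wired up with Hugging Face transformers; the choice of GPT-2, the duplicate_initial_layers helper, the use of deep copies for the duplicated blocks, and disabling the KV cache are all assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): duplicate the first blocks of a
# pretrained decoder-only LM. Assumes GPT-2 via Hugging Face transformers,
# whose decoder blocks live in model.transformer.h.
import copy
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer


def duplicate_initial_layers(model, n_layers=1, times=1):
    """Insert copies of the first `n_layers` blocks right after the originals."""
    blocks = list(model.transformer.h)        # original decoder blocks
    prefix = blocks[:n_layers]
    duplicated = []
    for _ in range(times):
        # deepcopy gives the duplicates identical weights at insertion time;
        # reusing the same module object instead would tie them permanently.
        duplicated.extend(copy.deepcopy(b) for b in prefix)
    model.transformer.h = torch.nn.ModuleList(prefix + duplicated + blocks[n_layers:])
    model.config.n_layer = len(model.transformer.h)   # keep config consistent
    return model


if __name__ == "__main__":
    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = duplicate_initial_layers(
        GPT2LMHeadModel.from_pretrained("gpt2"), n_layers=2, times=1
    )
    inputs = tok("Layer duplication is", return_tensors="pt")
    with torch.no_grad():
        # use_cache=False: duplicated blocks keep their original cache indices,
        # so the KV cache is disabled to avoid index clashes in this sketch.
        out = model(**inputs, use_cache=False)
    print(out.logits.shape)
```

The duplicated blocks start from the original weights, so the enlarged model can be evaluated directly (the "without fine-tuning" setting in the abstract) or further trained.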

Subject: EMNLP.2025 - Findings