2025.emnlp-main.1651@ACL

Total: 1

#1 PruneCD: Contrasting Pruned Self Model to Improve Decoding Factuality

Authors: Byeongho Yu, Changhun Lee, Jun-gyu Jin, Eunhyeok Park

To mitigate the hallucination problem in large language models, DoLa exploits early-exit logits from the same model as a contrastive prior. However, we found that these early-exit logits tend to be flat, low in magnitude, and fail to reflect meaningful contrasts. To address this, we propose PruneCD, a novel contrastive decoding method that constructs the amateur model via layer pruning rather than early exit. This design leads to more informative and well-aligned logits, enabling more effective contrastive decoding. Through qualitative and quantitative analyses, we demonstrate that PruneCD consistently improves factuality with minimal inference overhead, offering a robust and practical approach to mitigating hallucinations in LLMs.
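To make the idea concrete, below is a minimal sketch of a contrastive decoding step that uses a layer-pruned copy of the expert model as the amateur, in the spirit of the abstract. The helper `prune_layers`, the `keep_every`, `alpha`, and `beta` parameters, and the assumption of a HuggingFace-style model with `model.model.layers` are all illustrative choices, not the authors' implementation; the contrastive score with a plausibility cutoff follows the standard contrastive-decoding recipe rather than anything stated in the paper.

```python
# Sketch: contrastive decoding with a layer-pruned "amateur" model.
# All names and hyperparameters here are assumptions for illustration.

import copy
import math
import torch
import torch.nn.functional as F


def prune_layers(model, keep_every=2):
    """Build an amateur model by dropping transformer layers from a deep copy
    of the expert (here, keeping every second layer). Assumes a
    HuggingFace-style causal LM whose blocks live in `model.model.layers`."""
    amateur = copy.deepcopy(model)
    layers = amateur.model.layers
    amateur.model.layers = torch.nn.ModuleList(
        layer for i, layer in enumerate(layers) if i % keep_every == 0
    )
    return amateur


@torch.no_grad()
def contrastive_next_token(expert, amateur, input_ids, alpha=0.1, beta=1.0):
    """One greedy decoding step that contrasts expert and amateur logits.

    Tokens whose expert probability falls below `alpha * max_prob` are masked
    out (the usual plausibility constraint); the amateur log-probs are then
    subtracted from the expert log-probs before taking the argmax."""
    expert_logp = F.log_softmax(expert(input_ids).logits[:, -1, :], dim=-1)
    amateur_logp = F.log_softmax(amateur(input_ids).logits[:, -1, :], dim=-1)

    # Plausibility constraint: only keep tokens the expert itself finds likely.
    cutoff = expert_logp.max(dim=-1, keepdim=True).values + math.log(alpha)
    scores = expert_logp - beta * amateur_logp
    scores = scores.masked_fill(expert_logp < cutoff, float("-inf"))
    return scores.argmax(dim=-1)
```

Because the amateur is a pruned copy of the same model rather than an early-exit head, its logits pass through the full output projection, which is one plausible reading of why the abstract describes them as better aligned with the expert's logits.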

Subject: EMNLP.2025 - Main