2025.emnlp-main.1581@ACL


#1 Cacheback: Speculative Decoding With Nothing But Cache

Authors: Zhiyao Ma, In Gim, Lin Zhong

We present Cacheback Decoding, a training-free and model-agnostic speculative decoding method that exploits the locality in language to accelerate Large Language Model (LLM) inference. Cacheback leverages only Least Recently Used (LRU) cache tables of token n-grams to generate draft sequences. Despite its minimalist design, Cacheback achieves state-of-the-art performance among comparable methods, and its simplicity allows easy integration into existing systems. Cacheback also shows potential for fast adaptation to new domains.
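To make the idea concrete, here is a minimal sketch of an n-gram LRU draft table of the kind the abstract describes: it records which token followed each (n-1)-gram and chains lookups to propose a draft continuation, which a target LLM would then verify as in standard speculative decoding. This is an illustrative assumption, not the authors' implementation; the class and method names (NGramLRUDraft, update, draft) and all parameters are hypothetical.

```python
from collections import OrderedDict


class NGramLRUDraft:
    """Toy n-gram LRU draft table (hypothetical sketch, not the paper's code).

    Maps the last (n-1) tokens seen to the token that followed them,
    evicting the least recently used entries beyond `capacity`.
    """

    def __init__(self, n: int = 3, capacity: int = 4096):
        self.n = n
        self.capacity = capacity
        self.table = OrderedDict()  # (n-1)-gram tuple -> next token id

    def update(self, tokens: list[int]) -> None:
        """Record every (n-1)-gram -> next-token pair observed in `tokens`."""
        for i in range(len(tokens) - self.n + 1):
            key = tuple(tokens[i : i + self.n - 1])
            self.table[key] = tokens[i + self.n - 1]
            self.table.move_to_end(key)          # mark as most recently used
            if len(self.table) > self.capacity:
                self.table.popitem(last=False)   # evict the LRU entry

    def draft(self, context: list[int], max_draft: int = 8) -> list[int]:
        """Chain table lookups to propose a draft continuation of `context`."""
        out: list[int] = []
        ctx = list(context)
        for _ in range(max_draft):
            key = tuple(ctx[-(self.n - 1):])
            if key not in self.table:
                break                            # no cached continuation
            nxt = self.table[key]
            self.table.move_to_end(key)          # lookups also refresh recency
            out.append(nxt)
            ctx.append(nxt)
        return out


# Usage: the target LLM verifies the drafted tokens in one forward pass and
# keeps the longest matching prefix, as in standard speculative decoding.
cache = NGramLRUDraft(n=3)
cache.update([1, 2, 3, 4, 2, 3, 5])
print(cache.draft([1, 2]))  # -> [3, 5] via chained 2-gram lookups
```

Because the table is populated only from tokens already seen, drafting costs a few dictionary lookups and requires no extra model or training, which is consistent with the training-free, model-agnostic framing above.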

Subject: EMNLP.2025 - Main