2025.findings-emnlp.212@ACL

Total: 1

#1 Learning What to Remember: Adaptive Probabilistic Memory Retention for Memory-Efficient Language Models

Authors: S M Rafiuddin, Muntaha Nujat Khan

Transformer attention scales quadratically with sequence length, O(n²), limiting long-context use. We propose Adaptive Retention, a probabilistic, layer-wise token selection mechanism that learns which representations to keep under a strict global budget M. Retention is modeled with Bernoulli gates trained via a Hard-Concrete/variational relaxation and enforced with a simple top-M rule at inference, making the method differentiable and drop-in for standard encoders. Across classification, extractive QA, and long-document summarization, keeping only 30–50% of tokens preserves ≥ 95% of full-model performance while cutting peak memory by ∼ 35–45% and improving throughput by up to ∼ 1.8×. This architecture-agnostic approach delivers practical long-context efficiency without modifying base attention or task heads.
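
To make the described mechanism concrete, below is a minimal, hypothetical PyTorch sketch of a per-token retention gate of the kind the abstract outlines: a Hard-Concrete relaxation of Bernoulli gates during training and a deterministic top-M rule under a global budget at inference. The class name, the linear scorer, and the stretch parameters (beta, gamma, zeta) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RetentionGate(nn.Module):
    """Per-token Bernoulli retention gate with a Hard-Concrete relaxation (sketch)."""

    def __init__(self, hidden_size: int, beta: float = 0.5,
                 gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)  # produces a log-alpha score per token
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, hidden: torch.Tensor, budget_m: int):
        # hidden: (batch, seq_len, hidden_size)
        log_alpha = self.scorer(hidden).squeeze(-1)              # (batch, seq_len)
        if self.training:
            # Hard-Concrete sample: a differentiable surrogate for Bernoulli gates.
            u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / self.beta)
            s = s * (self.zeta - self.gamma) + self.gamma        # stretch beyond [0, 1]
            gates = s.clamp(0.0, 1.0)                            # hard clip back to [0, 1]
        else:
            # Inference: keep the top-M tokens by gate score under the global budget.
            scores = torch.sigmoid(log_alpha)
            keep = scores.topk(min(budget_m, scores.size(-1)), dim=-1).indices
            gates = torch.zeros_like(scores).scatter_(-1, keep, 1.0)
        # Dropped tokens are zeroed here for simplicity; a memory-saving deployment
        # would instead gather the kept tokens to shrink the sequence.
        return hidden * gates.unsqueeze(-1), gates
```

In this sketch the memory saving is only notional (masked tokens are zeroed, not removed); the abstract's reported peak-memory and throughput gains presumably come from actually discarding unselected representations at each layer.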

Subject: EMNLP.2025 - Findings