2025.acl-long.856@ACL


#1 Hierarchical Attention Generates Better Proofs

Authors: Jianlong Chen, Chao Li, Yang Yuan, Andrew C Yao

Large language models (LLMs) have shown promise in formal theorem proving, but their token-level processing often fails to capture the inherent hierarchical nature of mathematical proofs. We introduce Hierarchical Attention, a regularization method that aligns LLMs' attention mechanisms with mathematical reasoning structures. Our approach establishes a five-level hierarchy from foundational elements to high-level concepts, ensuring structured information flow in proof generation. Experiments demonstrate that our method improves proof success rates by 2.05% on miniF2F and 1.69% on ProofNet, while reducing proof complexity by 23.81% and 16.50%, respectively. The code and models will be made available.
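The abstract does not specify how the hierarchy constrains attention, so the following is only a minimal, hypothetical sketch of one way such a regularizer could look: it assumes per-token hierarchy-level labels (`token_levels`) and penalizes attention mass flowing from lower-level (foundational) query tokens to higher-level (abstract) key tokens. The function name, shapes, and penalty form are illustrative assumptions, not the authors' implementation.

```python
import torch

def hierarchical_attention_penalty(attn_weights, token_levels):
    """Illustrative regularizer (assumption, not the paper's method).

    Penalizes attention from lower-level query tokens to strictly
    higher-level key tokens, encouraging bottom-up information flow.

    attn_weights: (batch, heads, seq, seq) softmax-normalized attention.
    token_levels: (batch, seq) integer hierarchy level per token, e.g. 0-4.
    """
    level_q = token_levels.unsqueeze(1).unsqueeze(-1)  # (batch, 1, seq, 1)
    level_k = token_levels.unsqueeze(1).unsqueeze(-2)  # (batch, 1, 1, seq)

    # 1 where a query attends "upward" to a strictly higher level, else 0.
    violation = (level_k > level_q).float()            # (batch, 1, seq, seq)

    # Average attention mass spent on violating positions.
    return (attn_weights * violation).sum(dim=-1).mean()

# Hypothetical usage: add the penalty to the language-modeling loss
# with a weighting coefficient (lambda_reg is an assumed hyperparameter).
# total_loss = lm_loss + lambda_reg * hierarchical_attention_penalty(attn, levels)
```

In this sketch the penalty is soft (a weighted loss term) rather than a hard mask, so the model can still attend upward when useful; whether the paper uses a soft penalty, a hard mask, or another alignment objective is not stated in the abstract.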

Subject: ACL.2025 - Long Papers