2021.emnlp-main.13@ACL

Total: 1

#1 HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization

Authors: Ye Liu; Jianguo Zhang; Yao Wan; Congying Xia; Lifang He; Philip Yu

To capture the semantic graph structure of raw text, most existing summarization approaches are built on GNNs with a pre-trained model. However, these methods suffer from cumbersome procedures and inefficient computation on long documents. To mitigate these issues, this paper proposes HetFormer, a Transformer-based pre-trained model with multi-granularity sparse attention for long-text extractive summarization. Specifically, we model different types of semantic nodes in raw text as a potential heterogeneous graph and directly learn the heterogeneous relationships (edges) among nodes with the Transformer. Extensive experiments on both single- and multi-document summarization tasks show that HetFormer achieves state-of-the-art ROUGE F1 performance while using less memory and fewer parameters.
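
As an illustration only (not the authors' released implementation), the sketch below shows one way a multi-granularity sparse attention pattern of this kind can be expressed as a boolean mask: a token-level sliding window, sentence-node edges linking each sentence's tokens to a representative position, and a few global positions that attend to the whole document. The function name, window size, and the convention that the first token of each sentence serves as its sentence node are assumptions made for the example.

```python
import numpy as np

def sparse_attention_mask(num_tokens, window=4,
                          sentence_boundaries=(0,), num_global=1):
    """Hypothetical multi-granularity sparse attention mask (illustrative only).

    Combines three granularities:
      1) a token-to-token sliding window,
      2) token-to-sentence-node edges (first token of each sentence), and
      3) a few global positions that attend to, and are attended by, everything.
    """
    n = num_tokens
    mask = np.zeros((n, n), dtype=bool)

    # 1) Local sliding window: each token attends to neighbours within `window`.
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True

    # 2) Sentence-level edges: the first token of each sentence acts as a
    #    sentence node connected bidirectionally to all tokens in that sentence.
    bounds = list(sentence_boundaries) + [n]
    for start, end in zip(bounds[:-1], bounds[1:]):
        mask[start, start:end] = True   # sentence node -> its tokens
        mask[start:end, start] = True   # tokens -> their sentence node

    # 3) Global positions (e.g. a [CLS]-like document node).
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    return mask

if __name__ == "__main__":
    m = sparse_attention_mask(num_tokens=16, window=2,
                              sentence_boundaries=(0, 6, 12))
    print(m.astype(int))
```

In practice, a mask like this is applied inside scaled dot-product attention by setting disallowed positions to a large negative value before the softmax, which is what keeps memory and compute sub-quadratic for long inputs.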