2021.naacl-industry.15@ACL


#1 LightSeq: A High Performance Inference Library for Transformers

Authors: Xiaohui Wang, Ying Xiong, Yang Wei, Mingxuan Wang, Lei Li

Transformer and its variants have achieved great success in natural language processing. Since Transformer models are huge in size, serving them is a challenge for real industrial applications. In this paper, we propose LightSeq, a highly efficient inference library for models in the Transformer family. LightSeq includes a series of GPU optimization techniques to both streamline the computation of Transformer layers and reduce memory footprint. LightSeq supports models trained using PyTorch and TensorFlow. Experimental results on standard machine translation benchmarks show that LightSeq achieves up to 14x speedup compared with TensorFlow and 1.4x speedup compared with a concurrent CUDA implementation. The code will be released publicly after the review.