2021.acl-srw.3@ACL


Transformer-Based Direct Hidden Markov Model for Machine Translation

Authors: Weiyue Wang; Zijian Yang; Yingbo Gao; Hermann Ney

The neural hidden Markov model has been proposed as an alternative to the attention mechanism in recurrent neural network based machine translation. However, since the introduction of the transformer architecture, its performance has been surpassed. This work introduces the hidden Markov model concept into the transformer architecture, and the resulting model outperforms the transformer baseline. Interestingly, we find that the zero-order model already provides promising performance, giving it an edge over the first-order model, which performs similarly but is significantly slower in both training and decoding.
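
For intuition, here is a minimal sketch of the direct HMM factorization the abstract refers to; the notation is assumed for illustration and not quoted from the paper. Let f_1^J be the source sentence, e_1^I the target sentence, and b_i a latent source position aligned to target position i:

```latex
% Hedged sketch of a direct HMM factorization for translation
% (notation assumed, not taken verbatim from the paper).
% First-order model: the alignment distribution depends on b_{i-1},
% so a sum over full alignment paths is required.
p(e_1^I \mid f_1^J)
  = \sum_{b_1^I} \prod_{i=1}^{I}
      p(b_i \mid b_{i-1}, e_1^{i-1}, f_1^J)\,
      p(e_i \mid b_i, e_1^{i-1}, f_1^J)

% Zero-order model: drop the dependence on b_{i-1}; the sum over
% alignments then factorizes over target positions, so no dynamic
% programming over alignment paths is needed.
p(e_1^I \mid f_1^J)
  = \prod_{i=1}^{I} \sum_{b_i=1}^{J}
      p(b_i \mid e_1^{i-1}, f_1^J)\,
      p(e_i \mid b_i, e_1^{i-1}, f_1^J)
```

Under this reading, the first-order model must marginalize over alignment paths with a forward-style recursion, whereas the zero-order sum reduces to an independent mixture over source positions at each target step, which is consistent with the abstract's observation that the zero-order variant is significantly faster in training and decoding.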