
#1 Regularizing Transformer-based Acoustic Models by Penalizing Attention Weights

Authors: Munhak Lee; Joon-Hyuk Chang; Sang-Eon Lee; Ju-Seok Seong; Chanhee Park; Haeyoung Kwon

The application of deep learning has significantly advanced the performance of automatic speech recognition (ASR) systems. An ASR system comprises various components, such as the acoustic model (AM), language model, and lexicon; generally, the AM has benefited the most from deep learning. Numerous types of neural network-based AMs have been studied, but the structure that has received the most attention in recent years is the Transformer. In this study, we demonstrate that the Transformer model is more vulnerable to input sparsity than the convolutional neural network (CNN) and analyze the cause of the performance degradation through the structural characteristics of the Transformer. Moreover, we propose a novel regularization method that makes the Transformer model robust to input sparsity. The proposed sparsity regularization method directly regulates attention weights using silence label information from forced alignment, and it requires neither additional module training nor excessive computation. We tested the proposed method on five benchmarks and observed an average relative error rate reduction (RERR) of 4.7%.
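
The abstract does not spell out the exact form of the penalty; the following is a minimal PyTorch sketch of one plausible way to penalize attention mass placed on silence frames, assuming per-frame silence labels from forced alignment are available as a binary mask. The function name `silence_attention_penalty`, the tensor shapes, and the scaling factor `lambda_reg` are illustrative assumptions, not the authors' implementation.

```python
import torch

def silence_attention_penalty(attn_weights: torch.Tensor,
                              silence_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical sparsity regularizer sketch.

    attn_weights: (batch, heads, T_q, T_k) softmax attention weights.
    silence_mask: (batch, T_k) float mask, 1.0 where forced alignment
                  labels the key frame as silence, 0.0 otherwise.
    Returns a scalar penalty: the mean attention mass assigned to silence keys.
    """
    # Broadcast the mask over heads and query positions: (batch, 1, 1, T_k)
    mask = silence_mask[:, None, None, :]
    # Attention mass each query places on silence frames: (batch, heads, T_q)
    silence_mass = (attn_weights * mask).sum(dim=-1)
    # Average over queries, heads, and batch to obtain a scalar penalty
    return silence_mass.mean()

# Usage inside a training step (lambda_reg is a hypothetical scaling factor):
# loss = asr_loss + lambda_reg * silence_attention_penalty(attn, sil_mask)
```

Because the penalty is a differentiable function of the attention weights already produced in the forward pass, adding it to the training loss introduces no extra modules and only negligible computation, which is consistent with the advantage claimed in the abstract.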