huang24f — INTERSPEECH 2024 (ISCA)


#1 Improving Neural Biasing for Contextual Speech Recognition by Early Context Injection and Text Perturbation

Authors: Ruizhe Huang; Mahsa Yarmohammadi; Sanjeev Khudanpur; Daniel Povey

Existing research suggests that automatic speech recognition (ASR) models can benefit from additional contexts (e.g., contact lists, user-specified vocabulary): rare words and named entities are better recognized when such contexts are available. In this work, we propose two simple yet effective techniques to improve context-aware ASR models. First, we inject contexts into the encoder at an early stage instead of merely at its last layer. Second, to force the model to leverage the contexts during training, we perturb the reference transcription with alternative spellings so that the model learns to rely on the contexts to make correct predictions. On LibriSpeech, our techniques together reduce the rare word error rate by 60% and 25% relative, compared to no biasing and shallow fusion respectively, establishing a new state of the art. On SPGISpeech and the real-world dataset ConEC, our techniques also yield good improvements over the baselines.
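The text-perturbation idea can be illustrated with a small sketch: during training, rare words in the reference transcript are occasionally replaced by plausible alternative spellings, so the model can no longer recover the correct surface form from the audio-text pair alone and must consult the biasing context. The function name, the alternative-spelling table, and the perturbation probability below are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical alternative-spelling table; in practice such a table
# could be built from pronunciation-similar word pairs (an assumption).
ALT_SPELLINGS = {
    "kaldi": ["caldy", "kaldie"],
    "povey": ["povy", "povee"],
}

def perturb_transcript(words, alt_spellings, prob=0.5, rng=random):
    """Replace each word that has known alternative spellings with one of
    them with probability `prob`; all other words pass through unchanged."""
    out = []
    for w in words:
        alts = alt_spellings.get(w.lower())
        if alts and rng.random() < prob:
            out.append(rng.choice(alts))  # substitute an alternative spelling
        else:
            out.append(w)
    return out

# Usage: with prob=1.0 every listed rare word is perturbed, forcing the
# model to depend on the biasing list to emit "povey" and "kaldi".
perturbed = perturb_transcript("daniel povey wrote kaldi".split(),
                               ALT_SPELLINGS, prob=1.0)
```

The training pair would then be (audio, perturbed-or-clean transcript, biasing list), with the clean transcript still used as the prediction target, so the only reliable source of the correct spelling is the context.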