karafiat17@interspeech_2017@ISCA


#1 2016 BUT Babel System: Multilingual BLSTM Acoustic Model with i-Vector Based Adaptation

Authors: Martin Karafiát; Murali Karthick Baskar; Pavel Matějka; Karel Veselý; František Grézl; Lukáš Burget; Jan Černocký

The paper provides an analysis of the BUT automatic speech recognition (ASR) systems built for the 2016 IARPA Babel evaluation. The IARPA Babel program concentrates on building ASR systems for many low-resource languages, where only a limited amount of transcribed speech is available for each language. In such a scenario, we found it essential to train the ASR systems in a multilingual fashion. In this work, we report superior results obtained with pre-trained multilingual BLSTM acoustic models, where we used multi-task training with a separate classification layer for each language. The results reported on three Babel Year 4 languages show over 3% absolute WER reductions obtained from such multilingual pre-training. Experiments with different input features show that the multilingual BLSTM performs best with simple log-Mel-filter-bank outputs, which makes our previously successful multilingual stack bottleneck features with CMLLR adaptation obsolete. Finally, we experiment with different configurations of i-vector based speaker adaptation in the mono- and multilingual BLSTM architectures. This results in additional WER reductions of over 1% absolute.
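The multi-task setup described in the abstract (a shared multilingual encoder feeding a separate classification layer per language) can be sketched as follows. This is a minimal, hypothetical NumPy illustration of the idea only: the shared projection stands in for the BLSTM stack, and all names, layer sizes, and target counts are assumptions, not the authors' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 40      # e.g. log-Mel filter-bank features per frame
HIDDEN_DIM = 64    # stand-in for the shared BLSTM output size
# Hypothetical per-language target inventories (illustrative numbers).
PHONE_SETS = {"lang_a": 50, "lang_b": 45, "lang_c": 48}

# Shared parameters standing in for the multilingual BLSTM stack.
W_shared = rng.normal(scale=0.1, size=(FEAT_DIM, HIDDEN_DIM))

# One classification layer per language: the multi-task output heads.
heads = {lang: rng.normal(scale=0.1, size=(HIDDEN_DIM, n))
         for lang, n in PHONE_SETS.items()}

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(features, lang):
    """Shared encoding followed by the language-specific softmax layer."""
    hidden = np.tanh(features @ W_shared)   # shared across all languages
    return softmax(hidden @ heads[lang])    # per-language posteriors

frames = rng.normal(size=(10, FEAT_DIM))    # 10 frames of input features
post = forward(frames, "lang_b")
print(post.shape)                           # (10, 45)
```

During multilingual pre-training, each mini-batch updates the shared parameters through whichever language head matches its data; for a new low-resource language, the shared part is kept and a fresh head is attached.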