gupta13b@interspeech_2013@ISCA


#1 Comparing computation in Gaussian mixture and neural network based large-vocabulary speech recognition

Authors: Vishwa Gupta; Gilles Boulianne

In this paper we examine real-time computing issues in large-vocabulary speech recognition, using the French broadcast audio transcription task from ETAPE 2011 for evaluation. We compare word error rate (WER) versus overall computing time for hidden Markov models with Gaussian mixtures (GMM-HMM) and with deep neural networks (DNN-HMM). We show that, for similar computing cost during recognition, the DNN-HMM combination is superior to the GMM-HMM. In a real-time computing scenario, the error rate on the ETAPE dev set is 23.5% for the DNN-HMM versus 27.9% for the GMM-HMM: a significant difference in accuracy at comparable computing cost. Rescoring the lattices generated by the DNN-HMM acoustic model with a quadgram language model (LM), and then with a neural network LM, reduces the WER to 22.0% while still running in real time.
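
Here "real time" is usually taken to mean a real-time factor (RTF) of at most 1, i.e. total decoding time does not exceed total audio duration. The sketch below is a minimal illustration of how such a WER-versus-computing comparison can be tabulated; it is not code from the paper, and the timing numbers are placeholders (only the WER figures are taken from the abstract).

```python
# Illustrative sketch (not from the paper): compare word error rate (WER)
# against real-time factor (RTF) for two acoustic models.
# RTF = decoding time / audio duration; RTF <= 1.0 means real-time decoding.

from dataclasses import dataclass


@dataclass
class RunResult:
    system: str            # e.g. "GMM-HMM" or "DNN-HMM"
    wer: float             # word error rate, in percent
    decode_seconds: float  # total decoding time for the test set
    audio_seconds: float   # total audio duration of the test set

    @property
    def rtf(self) -> float:
        return self.decode_seconds / self.audio_seconds


# Hypothetical timings chosen only to mirror the real-time scenario
# described in the abstract; WER values are those reported there.
runs = [
    RunResult("GMM-HMM", wer=27.9, decode_seconds=3600.0, audio_seconds=3600.0),
    RunResult("DNN-HMM", wer=23.5, decode_seconds=3600.0, audio_seconds=3600.0),
]

for r in runs:
    tag = "real-time" if r.rtf <= 1.0 else "slower than real-time"
    print(f"{r.system}: WER {r.wer:.1f}%  RTF {r.rtf:.2f} ({tag})")
```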