chen14b@interspeech_2014@ISCA

Total: 1

#1 Efficient GPU-based training of recurrent neural network language models using spliced sentence bunch

Authors: X. Chen; Y. Wang; X. Liu; Mark J. F. Gales; Philip C. Woodland

Recurrent neural network language models (RNNLMs) are becoming increasingly popular for a range of applications including speech recognition. However, an important issue that limits the quantity of data, and hence their possible application areas, is the computational cost in training. A standard approach to handle this problem is to use class-based outputs, allowing systems to be trained on CPUs. This paper describes an alternative approach that allows RNNLMs to be efficiently trained on GPUs. This enables larger quantities of data to be used, and networks with an unclustered, full output layer to be trained. To improve efficiency on GPUs, multiple sentences are “spliced” together for each mini-batch or “bunch” in training. On a large vocabulary conversational telephone speech recognition task, the training time was reduced by a factor of 27 over the standard CPU-based RNNLM toolkit. The use of an unclustered, full output layer also improves perplexity and recognition performance over class-based RNNLMs.
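To make the "spliced sentence bunch" idea concrete, the sketch below shows one plausible way to prepare such bunches: sentences are concatenated end-to-end into a fixed number of parallel streams, and each training step takes one word position from every stream as the mini-batch (bunch). This is a minimal illustrative assumption in Python, not the authors' GPU toolkit code; the function name `spliced_bunches`, the round-robin stream assignment, and the `eos_id` parameter are all hypothetical.

```python
# Minimal sketch of spliced-sentence-bunch data preparation (an assumption,
# not the paper's actual implementation): sentences are spliced into
# `bunch_size` parallel streams, and each time step yields one word id per
# stream, forming the bunch processed in parallel on the GPU.

from typing import Iterator, List


def spliced_bunches(sentences: List[List[int]],
                    bunch_size: int,
                    eos_id: int = 0) -> Iterator[List[int]]:
    """Yield one bunch (a list of `bunch_size` word ids) per time step."""
    # Splice sentences, each terminated by a sentence-end marker, into
    # `bunch_size` streams, assigned round-robin to keep stream lengths similar.
    streams: List[List[int]] = [[] for _ in range(bunch_size)]
    for i, sent in enumerate(sentences):
        streams[i % bunch_size].extend(sent + [eos_id])

    # Truncate to the shortest stream so every bunch has exactly one word
    # from each stream at the same position.
    steps = min(len(s) for s in streams)
    for t in range(steps):
        yield [stream[t] for stream in streams]


if __name__ == "__main__":
    # Toy usage: four short "sentences" of word ids, bunch size 2.
    toy = [[5, 6], [7], [8, 9, 10], [11]]
    for bunch in spliced_bunches(toy, bunch_size=2):
        print(bunch)
```

In this framing, a larger bunch size increases the amount of parallel work per step (which is where the GPU speed-up comes from), at the cost of truncating streams to a common length; the padding or carry-over strategy at stream boundaries is left out here for brevity.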