xu05@interspeech_2005@ISCA

Total: 1

#1 Using random forest language models in the IBM RT-04 CTS system

Authors: Peng Xu; Lidia Mangu

One of the challenges in large-vocabulary speech recognition is the availability of large amounts of data for training language models. In most state-of-the-art speech recognition systems, n-gram models with Kneser-Ney smoothing still prevail due to their simplicity and effectiveness. In this paper, we study the performance of a new language model, the random forest language model, in the IBM conversational telephone speech (CTS) recognition system. We show that although random forest language models are designed to address the data sparseness problem, they also achieve statistically significant improvements over n-gram models even when the training data contains more than 500 million words.
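For context (the abstract does not spell this out): the random forest language model of Xu and Jelinek averages the predictions of many randomized decision trees, each of which maps an n-gram history to an equivalence class at one of its leaves. A minimal sketch of the estimate, with M trees and history-clustering functions Φ_j (symbols assumed here for illustration, not taken from the paper):

```latex
% Random forest LM sketch: average over M randomized decision trees.
% \Phi_j(h) maps the n-gram history h = w_{i-n+1}^{i-1} to a leaf
% (a history equivalence class) of the j-th tree; the leaf
% distributions are themselves smoothed, e.g. with Kneser-Ney,
% just like the baseline n-gram models.
\[
P_{\mathrm{RF}}\bigl(w_i \mid w_{i-n+1}^{\,i-1}\bigr)
  = \frac{1}{M} \sum_{j=1}^{M}
    P\bigl(w_i \mid \Phi_j\bigl(w_{i-n+1}^{\,i-1}\bigr)\bigr)
\]
```

Averaging over many independently grown trees is what makes the forest robust: a single decision-tree LM historically struggled to outperform Kneser-Ney n-grams, while the ensemble effectively shares data across many different history clusterings.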