karakos12@interspeech_2012@ISCA

Total: 1

#1 Deriving conversation-based features from unlabeled speech for discriminative language modeling

Authors: Damianos Karakos ; Brian Roark ; Izhak Shafran ; Kenji Sagae ; Maider Lehr ; Emily Prud'hommeaux ; Puyang Xu ; Nathan Glenn ; Sanjeev Khudanpur ; Murat Saraclar ; Dan Bikel ; Mark Dredze ; Chris Callison-Burch ; Yuan Cao ; Keith Hall ; Eva Hasler ; Philip Koehn ; Adam Lopez ; Matt Post ; Darcey Riley

The perceptron algorithm was used in [1] to estimate discriminative language models that correct errors in the output of ASR systems. In its simplest version, the algorithm increases the weight of n-gram features that appear in the correct (oracle) hypothesis and decreases the weight of n-gram features that appear in the 1-best hypothesis. In this paper, we show that the perceptron algorithm can be successfully used in a semi-supervised learning (SSL) framework, where only limited amounts of labeled data are available. Our framework has some similarities to graph-based label propagation in the sense that a graph is built based on the proximity of unlabeled conversations and is then used to propagate confidences (in the form of features) to the labeled data, on which the perceptron trains a discriminative model. The novelty of our approach lies in the fact that the confidence "flows" from the unlabeled data to the labeled data, and not vice versa, as is traditionally done in SSL. Experiments conducted at the 2011 CLSP Summer Workshop on the conversational telephone speech corpora Dev04f and Eval04f demonstrate the effectiveness of the proposed approach.
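The basic update described in the abstract (raise the weights of n-gram features seen in the oracle hypothesis, lower the weights of those seen in the 1-best hypothesis) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation; the helper names (extract_ngrams, perceptron_update) and the fixed step size are assumptions made for the example.

```python
# Minimal sketch of the simple perceptron update for discriminative
# language modeling, as summarized in the abstract. Hypotheses are
# assumed to be lists of word strings; feature names and step size
# are illustrative, not taken from the paper.
from collections import Counter

def extract_ngrams(words, max_order=3):
    """Count all n-grams up to max_order in a word sequence."""
    feats = Counter()
    for n in range(1, max_order + 1):
        for i in range(len(words) - n + 1):
            feats[tuple(words[i:i + n])] += 1
    return feats

def perceptron_update(weights, oracle_hyp, one_best_hyp, step=1.0):
    """Increase weights of n-grams in the oracle hypothesis and
    decrease weights of n-grams in the 1-best hypothesis."""
    for feat, count in extract_ngrams(oracle_hyp).items():
        weights[feat] = weights.get(feat, 0.0) + step * count
    for feat, count in extract_ngrams(one_best_hyp).items():
        weights[feat] = weights.get(feat, 0.0) - step * count
    return weights
```

In the paper's semi-supervised setting, additional conversation-based features derived from the graph over unlabeled data would be added to the feature set on which this update operates.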