lee13b@interspeech_2013@ISCA

Total: 1

#1 Ensemble of machine learning and acoustic segment model techniques for speech emotion and autism spectrum disorders recognition

Authors: Hung-yi Lee; Ting-yao Hu; How Jing; Yun-Fan Chang; Yu Tsao; Yu-Cheng Kao; Tsang-Long Pao

This study investigates the classification of emotion and autism spectrum disorders from speech utterances using ensemble classification techniques. We first explore the performance of three well-known machine learning techniques, namely support vector machines (SVM), deep neural networks (DNN), and k-nearest neighbours (KNN), with acoustic features extracted by the openSMILE feature extractor. In addition, we propose an acoustic segment model (ASM) technique, which incorporates the temporal information of speech signals into the classification. A set of ASMs is automatically learned for each category of emotion and autism spectrum disorders; the ASM sets then decode an input utterance into a series of acoustic patterns, from which the system determines the category of that utterance. Our ensemble system combines the machine learning and ASM techniques. The evaluations are conducted on the data sets provided by the organizer of the INTERSPEECH 2013 Computational Paralinguistics Challenge.
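To illustrate the score-level fusion of the three machine learning classifiers described above, the following is a minimal sketch assuming scikit-learn and pre-extracted openSMILE-style feature vectors. The feature dimensionality, class count, and synthetic data are illustrative assumptions, and the MLPClassifier merely stands in for a DNN; the ASM decoding component of the paper is not reproduced here.

```python
# Minimal sketch: score-level ensemble of SVM, DNN (MLP stand-in), and KNN
# on fixed-length acoustic feature vectors (e.g., openSMILE functionals).
# All data below is synthetic and only for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 384))   # hypothetical 384-dim feature vectors
y_train = rng.integers(0, 4, size=200)  # hypothetical four emotion categories
X_test = rng.normal(size=(20, 384))

# Standardize features before training.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Train the three classifiers on the same acoustic features.
classifiers = [
    SVC(probability=True),                      # support vector machine
    KNeighborsClassifier(n_neighbors=5),        # k-nearest neighbours
    MLPClassifier(hidden_layer_sizes=(64, 64),  # stand-in for a deep neural network
                  max_iter=500),
]
for clf in classifiers:
    clf.fit(X_train, y_train)

# Fuse by averaging per-class posterior scores and taking the argmax.
scores = np.mean([clf.predict_proba(X_test) for clf in classifiers], axis=0)
predictions = scores.argmax(axis=1)
print(predictions)
```

Averaging posterior scores is only one possible fusion rule; majority voting or weighted combination of the individual classifiers (including ASM likelihoods) could be substituted in the same loop.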