shen19@interspeech_2019@ISCA

Total: 1

#1 Interpreting and Improving Deep Neural SLU Models via Vocabulary Importance

Authors: Yilin Shen; Wenhu Chen; Hongxia Jin

Spoken language understanding (SLU) is a crucial component of virtual personal assistants. It consists of two main tasks: intent detection and slot filling. State-of-the-art deep neural SLU models have demonstrated good performance on benchmark datasets. However, these models suffer a significant performance drop in practice after deployment due to the discrepancy between the training data distribution and real user utterances. In this paper, we first propose four research questions that help to understand what state-of-the-art deep neural SLU models actually learn. To answer them, we study vocabulary importance using a novel Embedding Sparse Structure Learning (SparseEmb) approach. It can be applied to various existing deep SLU models to efficiently prune useless words without any additional manual hyperparameter tuning. We evaluate SparseEmb on benchmark datasets using two existing SLU models and answer the proposed research questions. We then use SparseEmb to sanitize the training data based on the selected useless words, together with model re-validation during training. Using both benchmark data and our collected test data, we show that the sanitized training data significantly improves SLU model performance. Both the SparseEmb and training data sanitization approaches can be applied to any deep learning-based SLU model.
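
The paper does not spell out SparseEmb's formulation in this abstract, so the following is only a minimal illustrative sketch of the general idea it gestures at: learning a per-vocabulary-word importance structure on the embedding layer via a sparsity penalty, then treating words whose learned importance collapses to (near) zero as prunable. The class name `GatedEmbedding`, the gate parameterization, and the threshold are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: per-word importance gates on an embedding layer,
# pushed toward sparsity with an L1 penalty. NOT the authors' SparseEmb code.
import torch
import torch.nn as nn


class GatedEmbedding(nn.Module):
    """Embedding layer with one learnable scalar gate per vocabulary word."""

    def __init__(self, vocab_size: int, embed_dim: int):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One gate per word; the L1 penalty drives gates of unimportant
        # words toward zero so they can be identified and pruned.
        self.word_gates = nn.Parameter(torch.ones(vocab_size))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        gates = self.word_gates[token_ids].unsqueeze(-1)  # (batch, seq, 1)
        return gates * self.embedding(token_ids)

    def sparsity_penalty(self) -> torch.Tensor:
        return self.word_gates.abs().sum()

    def useless_words(self, threshold: float = 1e-3) -> torch.Tensor:
        """Indices of words whose learned importance is negligible."""
        return (self.word_gates.abs() < threshold).nonzero(as_tuple=True)[0]


# Hypothetical use inside an SLU training loop (task_loss from the SLU model):
#   loss = task_loss + sparsity_weight * embed.sparsity_penalty()
```

Under this reading, the words returned by `useless_words` would be candidates for pruning from the model and for sanitizing training utterances, with re-validation of the model during training deciding whether each removal is kept.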