N19-2024@ACL

Total: 1

#1 Neural Text Normalization with Subword Units [PDF]

Authors: Courtney Mansfield; Ming Sun; Yuzong Liu; Ankur Gandhe; Björn Hoffmeister

Text normalization (TN) is an important step in conversational systems. It converts written text to its spoken form to facilitate speech recognition, natural language understanding, and text-to-speech synthesis. Finite state transducers (FSTs) are commonly used to build grammars that handle text normalization. However, translating linguistic knowledge into grammars requires extensive effort. In this paper, we frame TN as a machine translation task and tackle it with sequence-to-sequence (seq2seq) models. Previous research focuses on normalizing a word (or phrase) with the help of limited word-level context, while our approach directly normalizes full sentences. We find that subword models with additional linguistic features yield the best performance (with a word error rate of 0.17%).
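The subword units the abstract refers to are typically learned with byte-pair encoding (BPE): the most frequent adjacent symbol pair in the training corpus is repeatedly merged into a new unit, so related spoken forms ("hundred", "hundreds", "hundredth") come to share pieces. The sketch below is a minimal illustration of that idea, not the authors' pipeline; the toy corpus, the merge count, and the `learn_bpe`/`segment` helpers are assumptions for the example.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent pair."""
    vocab = Counter(tuple(w) for w in words)  # each word as a tuple of chars
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

def segment(word, merges):
    """Split an unseen word into subword units by replaying the merges."""
    pieces = list(word)
    for a, b in merges:
        out, i = [], 0
        while i < len(pieces):
            if i + 1 < len(pieces) and pieces[i] == a and pieces[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(pieces[i])
                i += 1
        pieces = out
    return pieces

# Toy spoken-form corpus (illustrative only).
corpus = ["hundred", "hundreds", "thousand", "thousands", "hundredth"]
merges = learn_bpe(corpus, num_merges=12)
print(segment("hundredths", merges))
```

In a seq2seq normalizer, both the written input sentence and its spoken-form output would be segmented this way, letting the model generalize to rare written forms from a compact subword vocabulary.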