Interspeech 2022 (ISCA)


#1 DocLayoutTTS: Dataset and Baselines for Layout-informed Document-level Neural Speech Synthesis

Authors: Puneet Mathur, Franck Dernoncourt, Quan Hung Tran, Jiuxiang Gu, Ani Nenkova, Vlad Morariu, Rajiv Jain, Dinesh Manocha

We propose a new task of synthesizing speech directly from semi-structured documents, where the text tokens extracted by OCR systems may not be in the correct reading order due to complex document layouts. We refer to this task as layout-informed document-level TTS and present the DocSpeech dataset, which consists of 10K audio clips of a single speaker reading layout-enriched Word documents. For each document, we provide the natural reading order of text tokens, their corresponding bounding boxes, and the audio clips synthesized in the correct reading order. We also introduce DocLayoutTTS, a Transformer encoder-decoder architecture that generates speech end-to-end from a document image and its OCR-extracted text. Our architecture simultaneously learns text reordering and mel-spectrogram prediction in a multi-task setup. Moreover, we take advantage of curriculum learning to progressively train on longer, more challenging document-level text, utilizing both the DocSpeech and LJSpeech datasets. Our empirical results show that the underlying task is challenging. Our proposed architecture performs slightly better than competitive baseline TTS models that use a pre-trained model to provide reading-order priors. We release samples of the DocSpeech dataset.
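The curriculum-learning idea in the abstract, training first on short utterances and gradually admitting longer document-level text, can be sketched as a simple length-based schedule. The function names, growth rate, and length cap below are hypothetical; the abstract does not specify the exact curriculum used.

```python
def curriculum_max_len(epoch, start_len=50, growth=50, cap=500):
    """Hypothetical schedule: linearly grow the maximum allowed
    token length per epoch, saturating at a fixed cap."""
    return min(start_len + epoch * growth, cap)

def select_batchable(samples, epoch):
    """Keep only samples short enough for the current curriculum stage.
    `samples` is a list of (token_count, source_dataset) pairs."""
    limit = curriculum_max_len(epoch)
    return [s for s in samples if s[0] <= limit]

# Illustrative pool mixing short LJSpeech clips and long DocSpeech documents.
samples = [(30, "LJSpeech"), (120, "DocSpeech"), (400, "DocSpeech")]
print(select_batchable(samples, 0))  # early epochs: only short utterances
print(select_batchable(samples, 8))  # later epochs admit full documents
```

Under this sketch, early epochs train only on short LJSpeech-length utterances; as the length limit grows, full DocSpeech documents enter the training pool.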