elias21@interspeech_2021@ISCA

Total: 1

#1 Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling [PDF]

Authors: Isaac Elias; Heiga Zen; Jonathan Shen; Yu Zhang; Ye Jia; R.J. Skerry-Ryan; Yonghui Wu

This paper introduces Parallel Tacotron 2, a non-autoregressive neural text-to-speech model with a fully differentiable duration model that does not require supervised duration signals. By combining a novel attention mechanism with an iterative reconstruction loss based on Soft Dynamic Time Warping (Soft-DTW), the model learns token-frame alignments and token durations automatically. Experimental results show that Parallel Tacotron 2 outperforms baselines in subjective naturalness across several diverse multi-speaker evaluations.