jain24 — Interspeech 2024 (ISCA)


#1 Multimodal Segmentation for Vocal Tract Modeling

Authors: Rishi Jain, Bohan Yu, Peter Wu, Tejas Prabhune, Gopala Anumanchipalli

Accurate modeling of the vocal tract is necessary to construct articulatory representations for interpretable speech processing and linguistics. However, vocal tract modeling is challenging because many internal articulators are occluded from external motion-capture technologies. Real-time magnetic resonance imaging (RT-MRI) allows precise measurement of internal articulator movements during speech, but annotated RT-MRI datasets are limited in size because labeling is time-consuming and computationally expensive. We first present a deep labeling strategy for RT-MRI video using a vision-only segmentation approach. We then introduce a multimodal algorithm that uses audio to improve segmentation of vocal articulators. Together, these set a new benchmark for vocal tract modeling in MRI video segmentation, which we use to release labels for a 75-speaker RT-MRI dataset, increasing the amount of labeled public RT-MRI data of the vocal tract by over a factor of 9. The code and dataset labels can be found at rishiraij.github.io/multimodal-mri-avatar/.
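The abstract does not specify how audio is combined with video features; one common way to condition a segmenter on audio is late fusion, where a per-frame audio embedding is broadcast over the spatial grid and concatenated with per-pixel visual features before the segmentation head. The sketch below illustrates that generic pattern with toy NumPy tensors; the dimensions, the linear per-pixel head, and all variable names are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Illustrative late-fusion sketch (an assumption, NOT the paper's method):
# tile one audio embedding per video frame across the spatial grid, then
# concatenate it with per-pixel visual features before classification.

rng = np.random.default_rng(0)

H, W = 8, 8            # toy spatial resolution of an RT-MRI frame
C_vis, C_aud = 4, 3    # visual / audio feature dimensions (arbitrary)
n_classes = 2          # e.g. articulator vs. background

vis_feat = rng.normal(size=(H, W, C_vis))   # per-pixel visual features
aud_feat = rng.normal(size=(C_aud,))        # one audio embedding per frame

# Broadcast the audio embedding over the spatial grid and concatenate.
aud_map = np.broadcast_to(aud_feat, (H, W, C_aud))
fused = np.concatenate([vis_feat, aud_map], axis=-1)   # (H, W, C_vis + C_aud)

# A per-pixel linear classifier stands in for a real segmentation head.
W_head = rng.normal(size=(C_vis + C_aud, n_classes))
logits = fused @ W_head                                # (H, W, n_classes)
seg_mask = logits.argmax(axis=-1)                      # hard label per pixel

print(fused.shape, seg_mask.shape)
```

In a trained model the broadcast-and-concatenate step would sit inside a segmentation network, letting acoustic cues disambiguate articulators that look similar in a single MRI frame.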