vaissiere04@interspeech_2004@ISCA

Total: 1

#1 From X-ray or MRI data to sounds through articulatory synthesis: towards an integrated view of the speech communication process

Author: Jacqueline Vaissière

This tutorial presents an integrated method for simulating the transfer from X-ray (or MRI) data to acoustics and finally to sounds. It illustrates the need for an articulatory model (here, Maeda's model) in order to:

- Construct realistic stimuli (sounds that human beings could actually produce) for psychoacoustic experiments.
- "Hear" what kinds of sounds the vocal tract of a man or a woman, of a newborn or a monkey, could produce, and, conversely, determine what vocal tract shapes could produce a sound with given acoustic characteristics.
- Study the correlation between observed subtle articulatory and acoustic differences and the prototypes preferred by native speakers of different languages when realising and perceiving the same IPA symbol.
- Model vowels and consonants in context, and distinguish transitional gestures, which arise from coarticulation, from the gestures that are essential for differentiating phonemes.
- Simulate the acoustic and perceptual consequences of the articulatory deformations produced by singers (e.g. the singing formant) or found in pathological voices.
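As a rough illustration of the articulatory-to-acoustic step described above, the sketch below computes the transfer function of a vocal tract approximated as a chain of lossless cylindrical tube sections and reads formant estimates from its peaks. This is a generic concatenated-tube (chain-matrix) approximation, not Maeda's model: the sound speed, section areas, and the uniform-tube example are illustrative assumptions, and losses, wall vibration, and lip radiation are ignored.

```python
import numpy as np

C = 350.0      # speed of sound in warm, humid air (m/s); assumed value
RHO = 1.14     # air density (kg/m^3); assumed value

def tube_chain_matrix(area, length, freq):
    """Chain (ABCD) matrix of one lossless cylindrical tube section."""
    k = 2 * np.pi * freq / C          # wavenumber
    z0 = RHO * C / area               # characteristic acoustic impedance
    kl = k * length
    return np.array([[np.cos(kl), 1j * z0 * np.sin(kl)],
                     [1j * np.sin(kl) / z0, np.cos(kl)]])

def transfer_function(areas, lengths, freqs):
    """|U_lips / U_glottis| for tube sections given in glottis-to-lips
    order, assuming a closed glottis and an ideal open (zero-impedance)
    termination at the lips."""
    gains = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        m = np.eye(2, dtype=complex)
        for a, l in zip(areas, lengths):
            m = m @ tube_chain_matrix(a, l, f)
        # With P_lips = 0: U_glottis = D * U_lips, so the gain is 1/|D|.
        gains[i] = 1.0 / abs(m[1, 1])
    return gains

def formant_estimates(freqs, gains):
    """Local maxima of the gain curve, taken as formant frequency estimates."""
    idx = np.where((gains[1:-1] > gains[:-2]) & (gains[1:-1] > gains[2:]))[0] + 1
    return freqs[idx]

if __name__ == "__main__":
    # Illustrative neutral (schwa-like) tract: a 17.5 cm uniform tube
    # split into 35 sections of 5 mm each, 4 cm^2 cross-section.
    n = 35
    areas = np.full(n, 4.0e-4)        # m^2
    lengths = np.full(n, 0.005)       # m
    freqs = np.linspace(50.0, 4000.0, 800)
    gains = transfer_function(areas, lengths, freqs)
    print("Estimated formants (Hz):", np.round(formant_estimates(freqs, gains)))
    # For a uniform tube, the peaks fall near 500, 1500, 2500, 3500 Hz.
```

Replacing the uniform area function with one derived from X-ray or MRI tracings (or generated by an articulatory model such as Maeda's) would, under the same simplifying assumptions, give formant patterns specific to that vocal tract shape; this is the kind of shape-to-sound mapping the tutorial builds on.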