holler25@interspeech_2025@ISCA


#1 Using and comprehending language in face-to-face conversation

Author: Judith Holler

Abstract: Face-to-face conversational interaction is at the very heart of human sociality and the natural ecological niche in which language has evolved and is acquired. Yet we still know rather little about how utterances are produced and comprehended in this environment. In this talk, I will focus on how hand gestures, facial signals, and head movements are organised to convey semantic and pragmatic meaning in conversation, as well as on how the presence and timing of these signals impact utterance comprehension and responding. Specifically, I will present studies based on complementary approaches that feed into and inform one another: qualitative and quantitative multimodal corpus studies showing that visual signals indeed often occur early, and experimental comprehension studies, based on and inspired by the corpus results, that implement controlled manipulations to test for causal effects of visual bodily signals on comprehension processes and mechanisms. These experiments include behavioural and EEG studies, most of them using multimodally animated virtual characters. Together, the findings support the hypothesis that visual bodily signals form an integral part of semantic and pragmatic meaning in conversational interaction, and that they facilitate language processing, especially through their timing and the predictive potential they gain from their temporal orchestration.

Biography: Judith Holler is Associate Professor at the Donders Institute for Brain, Cognition & Behaviour, Radboud University, where she leads the research group Communication in Social Interaction, and a senior investigator at the Max Planck Institute for Psycholinguistics. Her research programme investigates human language in the very environment in which it has evolved, is acquired, and is used most: face-to-face interaction.
Within this context, Judith focuses on the semantics and pragmatics of human communication from a multimodal perspective, considering spoken language within the rich visual infrastructure that embeds it, such as manual gestures, head movements, facial signals, and gaze. She uses a combination of methods from different fields to investigate human multimodal communication, including quantitative conversational corpus analyses, in-situ eye-tracking, and behavioural and neurocognitive experimentation using multimodal language stimuli involving virtual animations. Her research has been supported by a range of prestigious grants from funders including the European Research Council (EU), the Dutch Research Council (NWO), Marie Curie Fellowships (EU), the Economic & Social Research Council (UK), Parkinson's UK, the Leverhulme Trust (UK), the British Academy (UK), the Volkswagen Stiftung (Germany), and the German Science Foundation (DFG, Mercator Fellowships).

Subject: INTERSPEECH.2025 - Keynote