Come and meet him in person!
(Please note that the lecture will be in English)
Tuesday, December 15, 2021, from 3:00 to 4:00 p.m., followed by a cocktail reception.
- Université de Montréal, Pavillon Marie-Victorin, Room D-427: Please register via a Doodle link. Due to the room capacity, only the first 22 people who register will be able to attend the conference in person.
- The lecture will also be available via Zoom. No registration required.
Meeting ID: 834 9574 4471 / Passcode: 138827
- The lecture will also be streamed live on Facebook.
On the role of voice acoustics in speech comprehension and person-identity perception
In this talk, I will present data from two recent projects investigating how different groups of listeners use acoustic voice features in everyday listening tasks.
In the first part of the talk, I will show how voice acoustics are used to enhance speech comprehension under adverse listening conditions—specifically when the auditory scene comprises a multitude of sounds heard at once. These “cocktail-party”-like situations pose a difficult problem for normal-hearing (NH) listeners and are particularly challenging for cochlear-implant (CI) users. We have previously shown (Kreitewolf et al., 2018) that NH listeners are better at comprehending target speech when they can group sounds based on continuity in two prominent voice features: glottal-pulse rate (GPR) and vocal-tract length (VTL). Here, I will present data demonstrating that CI users show a similar benefit from voice feature continuity when solving the cocktail-party problem.
In the second part of the talk, I will focus on the use of voice acoustics in person-identity perception. Theoretical work has proposed that listeners use acoustic information differently depending on whether they perceive identities from familiar or unfamiliar voices: unfamiliar voices are thought to be processed through close comparisons of acoustic properties, whereas familiar voices are thought to be processed via diagnostic acoustic features that activate a stored person-specific representation. Here, I will present data from two experiments that challenge this theoretical claim by linking listeners’ voice-identity judgements to complex acoustic representations of voice recordings.
Jens Kreitewolf is a Faculty Lecturer in the Departments of Psychology and of Mathematics and Statistics at McGill University, where he teaches various courses in statistics, research methodology, and psychophysics. Jens received his M.Sc. in Psychology from Ruhr University Bochum (Germany) in 2009. His sustained interest in the auditory system was sparked during this time, when he investigated the electrophysiological correlates of attention to moving sounds. During his Leipzig years at the Max Planck Institute for Human Cognitive and Brain Sciences (2009-2016), Jens studied the sensory aspects of auditory speech and voice processing using psychophysics and functional neuroimaging. In 2014, he received his Ph.D. (Dr. rer. nat.) in Psychology from the Humboldt University of Berlin. In 2016, he was awarded an ACN Erasmus Mundus stipend to conduct research in auditory scene analysis at BRAMS. After four years as a postdoctoral fellow at the University of Lübeck (Germany), where he extended his research program to topics as diverse as audiology, endocrinology, and social psychology, Jens is now back in Montreal and at BRAMS.