BRAMS-CRBLM Lecture Series – A talk by Dr. Etienne Thoret, Institut de Neurosciences de la Timone, Marseille, France
Come and meet him in person!
- Université de Montréal, Pavillon Marie-Victorin, Room D-427: Please register via the Doodle link.
Rethinking Hearing Without Fourier
Abstract: Our understanding of hearing has long been shaped by the legacy of Fourier analysis, with auditory models typically relying on fixed filter banks that decompose sounds into a sum of predetermined sinusoidal components. This framework has been central to modeling a wide range of auditory phenomena—from pitch perception and speech intelligibility to musical timbre recognition. However, despite their historical importance, such models exhibit limitations: they often struggle to account jointly for the perception of tonal signals, such as speech and music, and the perception of broadband noisy textures. More importantly, they struggle to account for the inherently efficient, dynamic, and adaptive nature of real-world hearing. To address these limitations, we propose moving beyond static, frequency-based representations toward a fully temporal, signal-driven approach that better reflects the adaptive characteristics of hearing. Our framework is grounded in Empirical Mode Decomposition (EMD), a signal-driven technique that decomposes signals into intrinsic modes based solely on local extrema, without assuming any predefined basis functions. We adapt this method to model peripheral auditory processing in a way that captures key aspects of hearing behavior while remaining inherently flexible and responsive to the structure of the input signal. This approach exhibits behavior that aligns with many known properties of human hearing, such as frequency masking, roughness perception, efficient coding, and cochlear selectivity, offering a unified computational account of diverse psychoacoustic phenomena. By embracing an intrinsically signal-driven strategy, our model invites a rethinking of auditory processing—from the cochlea to higher cortical areas—beyond the constraints of traditional Fourier-based paradigms.
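For readers unfamiliar with EMD, the following is a minimal Python sketch of the classic sifting procedure the abstract alludes to: envelopes are fit through local extrema (cubic splines here), their mean is subtracted until an intrinsic mode function (IMF) emerges, and the process repeats on the residual. This is a textbook illustration of generic EMD, not Dr. Thoret's auditory model; the function names, iteration counts, and stopping rules are illustrative choices.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_iter=10):
    """One sifting pass: repeatedly subtract the mean of the upper and
    lower envelopes (cubic splines through local maxima/minima)."""
    h = x.copy()
    for _ in range(n_iter):
        d = np.diff(h)
        maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
        minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break  # too few extrema: h is essentially a residual trend
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0  # remove the local mean
    return h

def emd(x, t, max_imfs=5):
    """Decompose x into intrinsic mode functions plus a residual.
    No predefined basis: each IMF is dictated by the signal's extrema."""
    imfs, residual = [], x.copy()
    for _ in range(max_imfs):
        imf = sift(residual, t)
        imfs.append(imf)
        residual = residual - imf
        # stop when the residual has fewer than two extrema (monotonic trend)
        if np.count_nonzero(np.diff(np.sign(np.diff(residual)))) < 2:
            break
    return imfs, residual
```

By construction the decomposition is exact: summing the IMFs and the residual recovers the input signal, while each mode adapts to the local time scales of the data rather than to fixed sinusoidal channels.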
Bio: Etienne Thoret is a CNRS researcher at the Institut de Neurosciences de la Timone in Marseille, France. Trained in physics and acoustics, he earned his Ph.D. in Acoustics and Signal Processing from Aix-Marseille University in 2014, focusing on gesture–sound relationships. Before joining CNRS in 2023, he held postdoctoral positions at McGill University, École Normale Supérieure, and the Institute of Language, Communication and the Brain, working at the interface of audio signal processing, neuroscience, and explainable AI. His current research develops interpretable machine learning and adaptive signal processing methods to investigate cochlear and cortical representations of voices, music, and natural soundscapes.
