BRAMS – CRBLM Lecture Series: Researcher Lecture by Dr. Claire Pelofi
Cortical tracking of high-level properties of music
Abstract: Music, like language, is a highly complex stream of information, containing structural layers that range from low-level acoustic features (pitch, timbre) to high-level syntactic or semantic information. How these features are intertwined and encoded in the brain to ultimately convey musical meaning and emotion remains largely unknown. Electrophysiological studies have traditionally focused on ERP analysis, in which a specific feature is probed through numerous repetitions and the time-locked neural response is averaged to shed light on how that information is encoded. Although informative, this technique poorly reflects the ecological conditions of music listening. Recent research has developed new tools specifically designed to track ongoing information in the neural signal, using canonical correlation analysis (CCA) and the multivariate temporal response function (mTRF). Both methods are based on ridge regression and linear matrix transforms, which allow measuring the mapping of stimulus features onto the neural signal. My research is dedicated to studying the encoding of high-level musical properties, taking advantage of these new signal analysis techniques and working with neurophysiological signals collected with EEG, MEG, or ECoG. This line of research tackles questions such as the encoding of regularities in music, the neural trace of tension-and-release dynamics, and the effect of cultural learning.
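The core idea behind the TRF approach mentioned above can be illustrated in a few lines: a stimulus feature (for example, an amplitude envelope) is mapped onto a neural signal via time-lagged linear regression with a ridge penalty. The sketch below is a minimal illustration under assumed parameters (lag count, regularization strength, synthetic data), not the actual analysis pipeline from the talk.

```python
import numpy as np

# Minimal sketch of a temporal response function (TRF) estimated with
# ridge regression. All names and parameter values are illustrative.

def lagged_design(stim, n_lags):
    """Build a design matrix whose columns are delayed copies of stim."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:n - lag]
    return X

def fit_trf(stim, neural, n_lags=32, ridge=1.0):
    """Solve argmin_w ||X w - neural||^2 + ridge * ||w||^2 in closed form."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ neural)

# Synthetic check: the "neural" signal is the stimulus convolved with a
# known decaying impulse response, plus noise; the fitted TRF should
# recover that impulse response.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
true_trf = np.exp(-np.arange(32) / 8.0)
neural = lagged_design(stim, 32) @ true_trf + 0.1 * rng.standard_normal(2000)
w = fit_trf(stim, neural)
```

In practice, toolboxes such as the mTRF toolbox or MNE-Python's receptive-field estimators implement this with proper cross-validation over the ridge parameter; the closed-form solve above is only meant to show the underlying linear mapping.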
Short bio: Dr. Claire Pelofi is a research scientist in David Poeppel’s lab at the Department of Psychology, New York University. She received a B.A. in Philosophy (2009) and an M.A. in Neuroscience (2012) from the École Normale Supérieure, Paris, and her Ph.D. from the same institution in 2016. She then joined Shihab Shamma’s lab at the University of Maryland as a postdoctoral associate and worked in close collaboration with Mary Farbood at MARL, NYU. She is particularly interested in the link between music and speech acquisition and in extending speech-based methodologies to music to illuminate broader cognitive processes such as selective attention and memory. Her methodology combines psychophysics and electrophysiology data (EEG, MEG, and ECoG), which she analyzes and decodes using various models of musical structure and neural response.