Simon, Ding publish auditory cortex research in PNAS

ISR-affiliated Associate Professor Jonathan Simon (ECE/Biology) and his Ph.D. student Nai Ding are co-authors of a paper published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS). The online version of the article was published July 2, 2012.
“Emergence of neural encoding of auditory objects while listening to competing speakers” addresses the neural underpinnings of auditory scene analysis.
A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Ding and Simon address this question by using magnetoencephalography (MEG) to record from subjects selectively listening to one of two competing speakers, of either the same or a different sex.
Distinct neural representations are observed for the speech of each of the two speakers: each representation is selectively phase-locked to the rhythm of its corresponding speech stream, and the temporal envelope of that stream can be reconstructed exclusively from it. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensities of the attended and background speakers are separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker, not to the intensity of the background speaker, suggesting an object-level intensity gain control.
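The "temporal envelope" that the cortical responses track is the slow amplitude modulation of the speech waveform. As a rough illustration (not the paper's actual MEG decoding method, which uses linear stimulus reconstruction from neural responses), the envelope of a speech-like signal can be sketched with a Hilbert transform followed by a low-pass filter; the function name and cutoff here are illustrative choices:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def temporal_envelope(signal, fs, cutoff_hz=10.0):
    """Extract the slow temporal envelope of a speech-like signal.

    Takes the magnitude of the analytic signal, then low-pass
    filters it below ~10 Hz -- the slow-rhythm range that the
    cortical responses in the study phase-lock to.
    """
    env = np.abs(hilbert(signal))
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)

# Toy example: a tone amplitude-modulated at a 4 Hz syllabic rate.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 100 * t)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
env = temporal_envelope(carrier * modulation, fs)
```

On this toy signal, the recovered envelope closely follows the 4 Hz modulation that was imposed on the carrier.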
Their results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation.