Shamma, Simon, Fritz receive five-year, $1.57 million NIH grant

Co-PI Jonathan Simon with computer representations of auditory responses.
The project combines psychoacoustic and physiological investigations of a fundamental perceptual component of auditory scene analysis known as auditory streaming. Auditory streaming is the everyday ability of humans and animals to parse complex acoustic input from multiple sound sources into meaningful auditory "streams." Listening to one speaker at a crowded cocktail party, or following the violins in an orchestra, both seem to rely on forming such streams.
The researchers believe that streaming in complex environments is based on extracting stimulus regularities at various levels of representation, ranging from simple (peripheral) tonotopy to higher-level temporal and spectral features. They also hypothesize that streaming is reflected in, and can be predicted by, the response properties of neurons in the auditory cortex.
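The standard laboratory probe of auditory streaming is the two-tone "ABA_" triplet sequence: a tone A, a higher tone B, tone A again, then a silent gap. When the A–B frequency separation is small the sequence is heard as a single galloping rhythm; when it is large it splits into two separate streams. The sketch below (not from the article; the frequencies, durations, and sample rate are arbitrary illustrative choices) generates such a sequence as raw samples.

```python
import math

def tone(freq_hz, dur_s, sr=8000, amp=0.5):
    """Pure sine tone as a list of float samples."""
    n = int(sr * dur_s)
    return [amp * math.sin(2 * math.pi * freq_hz * t / sr) for t in range(n)]

def aba_triplets(f_a=400.0, df_semitones=6, n_triplets=5, tone_s=0.05, sr=8000):
    """ABA_ 'galloping' sequence used in streaming experiments.

    Each triplet is: tone A, tone B (df_semitones higher), tone A,
    then a silent slot of the same duration. A small frequency
    separation tends to be heard as one stream; a large one as two.
    """
    f_b = f_a * 2 ** (df_semitones / 12)   # B is df semitones above A
    gap = [0.0] * int(sr * tone_s)         # silent fourth slot
    seq = []
    for _ in range(n_triplets):
        seq += tone(f_a, tone_s, sr) + tone(f_b, tone_s, sr) \
             + tone(f_a, tone_s, sr) + gap
    return seq

samples = aba_triplets()
print(len(samples))  # 5 triplets x 4 slots x 400 samples = 8000
```

Varying `df_semitones` while holding timing fixed is one simple way such experiments manipulate whether listeners (or cortical neurons) treat the tones as one stream or two.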
The results will be important for developing sensory prostheses and for acoustics-based human-computer interaction.
The Clark School team is collaborating with Andrew Oxenham, formerly of MIT, now with the University of Minnesota.
Published June 8, 2006