The research is fundamentally interdisciplinary, at the interface of cognitive science, neurophysiology, and computational neuroscience. It advances a theoretical framework that spells out how perception arises from interactions between cognitive influences and the stimulus via a rapid-adaptive process at the neuronal level. This framework guides all modelling and experimental data interpretation, and has clear implications for sensory perception and multimodal interactions. The key challenges lie in three fields: neuroscience, psychoacoustics, and computational neuroscience. The fundamental hypothesis of this research is that a rapid-adaptive process alters neuronal circuits and selectivity during the perception of sound, and that this plasticity is enhanced by the presence of a temporally coherent structure in the acoustic stimuli. This hypothesis will be thoroughly investigated and incorporated into the architecture of current computational models of auditory processing and sound streaming. Applications of these models, especially to speech segregation and enhancement, will be addressed to demonstrate the validity of these ideas for mimicking human abilities.
This project is organized around three specific aims, each comprising several projects. AIM I: Using ambiguous percepts to explore rapid plasticity. AIM II: The role of stimulus coherence in pattern formation. AIM III: Tying it all together: computational models of streaming with complex signals.
This is a five-year, $163K grant.