
ISR News Story

Espy-Wilson receives NSF grant for robust speech recognition


Professor Carol Espy-Wilson (ECE/ISR) is one of the principal investigators of a three-year, $1.8 million National Science Foundation collaborative research grant for "Landmark-based Robust Speech Recognition Using Prosody-Guided Models of Speech Variability." Mary Harper, an affiliate research professor in the University of Maryland Computer Science Department and professor at Purdue University, is co-principal investigator of the University of Maryland portion of the grant.

This collaborative project includes research at four other locations: UCLA (Abeer Alwan, PI); University of Illinois Urbana-Champaign (Jennifer Cole, PI and Mark Hasegawa-Johnson, co-PI); Yale University (Louis Goldstein, PI) and Boston University (Elliot Saltzman, PI).

The research aims to develop a system whose performance is comparable to that of humans in automatically transcribing unrestricted conversational speech, spanning many speakers and dialects and recorded in adverse acoustic environments.

Espy-Wilson's approach will apply new high-dimensional machine learning techniques, constrained by empirical and theoretical studies of speech production and perception, to learn from data the information structures that human listeners extract from speech. She will develop large-vocabulary, psychologically realistic models of speech acoustics, pronunciation variability, prosody, and syntax by deriving knowledge representations that reflect those proposed for human speech production and perception. Machine learning techniques will adjust the parameters of all knowledge representations simultaneously in order to minimize the structural risk of the recognizer.

The team will develop nonlinear acoustic landmark detectors and pattern classifiers that integrate auditory-based signal processing and acoustic phonetic processing, are invariant to noise, changes in speaker characteristics, and reverberation, and can be learned in a semi-supervised fashion from labeled and unlabeled data. They will also use variable frame rate analysis, which allows for multi-resolution analysis, and will implement gesture-based lexical access using a variety of training data.
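The article does not detail how variable frame rate analysis works; as an illustrative sketch only (the function name and threshold are hypothetical, not the project's actual code), one common formulation keeps a new analysis frame only when its spectral distance from the last retained frame exceeds a threshold, so rapidly changing regions such as consonant transitions are sampled densely while steady vowels are sampled sparsely:

```python
import numpy as np

def variable_frame_rate_select(features, threshold=0.5):
    """Select frame indices adaptively: retain a frame only when it
    differs enough (Euclidean distance) from the last retained frame.

    features  : 2-D array, one feature vector (e.g., cepstral) per row
    threshold : minimum spectral change required to keep a frame
    Returns a list of retained frame indices (always includes frame 0).
    """
    kept = [0]  # the first frame is always retained
    for i in range(1, len(features)):
        # Distance from the most recently retained frame, not the
        # immediately preceding one, so slow drift still accumulates.
        dist = np.linalg.norm(features[i] - features[kept[-1]])
        if dist > threshold:
            kept.append(i)
    return kept
```

Under this sketch, a steady stretch of nearly identical frames collapses to a single retained frame, while an abrupt acoustic change forces a new frame to be kept, giving the multi-resolution behavior the paragraph describes.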

The work will improve communication and collaboration between people and machines and also improve understanding of how humans produce and perceive speech. It brings together a team of experts in speech processing, acoustic phonetics, prosody, gestural phonology, statistical pattern matching, language modeling, and speech perception, with faculty across engineering, computer science, and linguistics.

May 17, 2007

