October 10–11, 2019

Brendan Iribe Center for Computer Science and Engineering
University of Maryland
College Park, Md.

Sponsored by:
The National Science Foundation
The Department of Electrical and Computer Engineering and the Institute for Systems Research, University of Maryland

Download the workshop final report

Spoken Language Interaction with Robots: Research Issues and Recommendations

Report from the 2019 NSF Speech for Robotics Workshop


Thursday, October 10

8:15–8:45 am Breakfast
8:45–9:45 am Welcome from Tanya Korelsky (NSF), the Chair of ECE, and the Chair of CS; Introduction of Participants
9:45–10:15 am Presentation on Prosody by Nigel Ward
10:15–10:30 am Coffee Break
10:30–11:30 am Breakout Session
11:30 am–12:15 pm Summaries and Discussion
12:15–1:15 pm Lunch
1:15–2:15 pm Breakout Session
2:15–3:00 pm Summaries and Discussion
3:00–3:15 pm Coffee Break
3:15–4:15 pm Session
4:15–5:00 pm Summaries and Discussion
6:30 pm Working Dinner

Friday, October 11

8:30–9:00 am Continental Breakfast
9:00–11:00 am Discussion of report outline, summaries and recommendations from Day 1
11:00–11:15 am Coffee break
11:15 am–12:30 pm Work in smaller groups to write drafts of sections
12:30–1:30 pm Lunch
1:30–3:00 pm Finalize draft of report for NSF

1. Given that situatedness and embodiment are key characteristics of human-robot communication, what new challenges and opportunities arise for speech processing in this unique setting?

2. Can we identify applications that could benefit from combining speech and robotics in the next 5–10 years? Some application areas (e.g., health care, education, smart homes) may pose unique sets of challenges. What are the unique challenges associated with each application area?

3. What lessons have been learned from conversational agents, and how do they apply to embodied social robots?

4. How should ASR systems be specialized for the specific domains of social robots? Which open-source speech tools are available?

5. How can we use robotic context to resolve ambiguity and improve speech recognition?

6. What role should emotions (along with other speech artifacts like pitch, timing, tone, etc.) play in speech-based HRI?

7. What is currently being done to take advantage of gesture and non-verbal cues in terms of understanding and generation?

8. How do we establish and maintain shared infrastructure that allows roboticists to apply speech/language tools and gives speech researchers access to robots? What about data? Does it make sense to encourage shared tasks?

9. Does it make sense to provide support, e.g., a special training program, that allows students to rotate among different labs? What about internship opportunities in industrial labs?

10. What roles can industry play in all of the above? Are there funding mechanisms to support such collaboration?

Carol Espy-Wilson (Chair), University of Maryland

Abeer Alwan, UCLA

Joyce Chai, University of Michigan

Dinesh Manocha, University of Maryland

Matthew Marge, U.S. Army Research Laboratory

Raymond Mooney, University of Texas at Austin

Speech

Carol Espy-Wilson, University of Maryland

Abeer Alwan, UCLA

Jonathan Fiscus, National Institute of Standards and Technology

Mary Harper, retired

Roger Moore, University of Sheffield

Mari Ostendorf, University of Washington

Nia Peters, Air Force Research Laboratory

Alex Rudnicky, Carnegie Mellon University

Nigel Ward, University of Texas at El Paso

NLP

Hal Daumé, University of Maryland, Microsoft

Tanya Korelsky, National Science Foundation

Tong Sun, Adobe Research

Clare Voss, U.S. Army Research Laboratory

Zhou Yu, University of California, Davis

Robotics

Debadeepta Dey, Microsoft Research

Susan Hill, U.S. Army Research Laboratory

Thomas Howard, University of Rochester

Dinesh Manocha, University of Maryland

Ross Mead, Semio

Erion Plaku, National Science Foundation

Chris Reardon, U.S. Army Research Laboratory

Robert St. Amant, U.S. Army Research Laboratory

Stefanie Tellex, Brown University

RoboNLP

Yoav Artzi, Cornell University

Mohit Bansal, University of North Carolina

Joyce Chai, University of Michigan

Casey Kennington, Boise State University

Ivana Kruijff-Korbayova, German Research Center for Artificial Intelligence

Matthew Marge, U.S. Army Research Laboratory

Cynthia Matuszek, University of Maryland Baltimore County

Raymond Mooney, University of Texas at Austin

Heather Pon-Barry, Mount Holyoke College

Matthias Scheutz, Tufts University

David Traum, University of Southern California

