Shneiderman: Faulty machine learning algorithms endanger safety, introduce bias

The University of Maryland's Maryland Today reports that ISR-affiliated Distinguished University Professor Emeritus Ben Shneiderman (CS) has a new op-ed column on machine learning in The Hill.

Shneiderman writes that the twin crashes of Boeing 737 MAX airliners in 2018 and 2019 illustrate how bad algorithms (the step-by-step processes computers use to solve problems) can lead to deadly outcomes.

Shneiderman continues that malfunctioning algorithms can cause someone to lose a job, fail to qualify for a loan, or be falsely accused of a crime, in part because machine-learned algorithms can absorb biases from the data they are trained on. National action is needed, Shneiderman argues. Read his essay at The Hill.

Shneiderman is the founding director of the University of Maryland's Human-Computer Interaction Laboratory. He has made significant contributions to the fields of human-computer interaction, user interface design, information visualization, social media, and human-centered AI. He is a Fellow of the Association for Computing Machinery (1997), the American Association for the Advancement of Science (2001), the Institute of Electrical and Electronics Engineers (2011), the National Academy of Inventors (2015), and the IEEE Visualization Academy (2019), and a member of the National Academy of Engineering (2010).

Shneiderman is a prolific author; his most recent book is Human-Centered AI (2022). His influential textbook, Designing the User Interface: Strategies for Effective Human-Computer Interaction, is now in its sixth edition. Read more about Shneiderman on his Wikipedia page.

Published March 2, 2023