ML Seminar: "Interpretable Machine Learning: Theory and Practice"

Wednesday, March 24, 2021
10:00 a.m.
Online
Kara Stamets
301 405 4471
stametsk@umd.edu

Title: Interpretable Machine Learning: Theory and Practice

Speaker: Dr. Rajiv Khanna, University of California, Berkeley

Abstract: The continued, remarkable empirical success of increasingly complicated machine learning models such as neural networks, without a sound theoretical understanding of their success and failure conditions, can leave a practitioner blindsided and vulnerable, especially in critical applications such as self-driving cars and medical diagnosis. As such, there has been growing recent interest in research on building interpretable models as well as interpreting model predictions. In this talk, I will discuss theoretical and practical aspects of interpretability in machine learning along both of these directions, through the lenses of feature attribution and example-based learning. In the first part of the talk, I will present novel theoretical results that bridge the gap between theory and practice for interpretable dimensionality reduction, i.e., feature selection. Specifically, I will show that feature selection satisfies a weaker form of submodularity. Because of this connection, one can provide constant-factor approximation guarantees for any function, dependent solely on the condition number of the function. Moreover, I will argue that the cost of interpretability incurred by selecting features, as opposed to principal components, is not as high as was previously thought. In the second part of the talk, I will discuss the development of a probabilistic framework for example-based machine learning to address the question "which training data points are responsible for making given test predictions?" This framework generalizes classical influence functions. I will also present an application of this framework to understanding the transfer of adversarially trained neural network models.
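The greedy forward feature selection that weak submodularity analyzes can be sketched as follows. This is a generic illustration of the standard algorithm (the `greedy_feature_selection` helper and the least-squares objective are assumptions for illustration, not the speaker's code); weak submodularity of the error-reduction gain is what yields the constant-factor guarantees mentioned in the abstract.

```python
import numpy as np

def greedy_feature_selection(X, y, k):
    """Greedy forward selection: at each step, add the feature that most
    reduces the least-squares error on the selected subset. Weak
    submodularity of the error-reduction gain gives an approximation
    guarantee depending on the conditioning of X."""
    n, d = X.shape
    selected = []
    remaining = set(range(d))
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in remaining:
            cols = selected + [j]
            # Least-squares fit restricted to the candidate feature subset
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            err = np.linalg.norm(y - X[:, cols] @ beta) ** 2
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

On data where the response depends on only a few columns, the greedy procedure recovers those columns, at the price of interpretability the talk argues is smaller than previously thought relative to PCA.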

Bio: Rajiv Khanna is currently a postdoc in the Department of Statistics at UC Berkeley. He is also associated with the Foundations of Data Analytics Institute (FODA) at UC Berkeley. Previously, he was a Research Fellow in the Foundations of Data Science program at the Simons Institute for the Theory of Computing, also at UC Berkeley, and before that he earned his PhD in Electrical and Computer Engineering at UT Austin. His research focuses on elucidating mechanisms of machine learning through optimization, learning theory, and interpretability. His work on beyond-worst-case analysis of Column Subset Selection won the Best Paper Award at NeurIPS 2020.
----------
Topic: ML Seminar: Interpretable Machine Learning: Theory and Practice
Time: Mar 24, 2021 11:00 AM Eastern Time (US and Canada)
Join Zoom Meeting
https://umd.zoom.us/j/95206729828?pwd=Zm9WZVhOcTkxNVp4V09ld2VWdjdrUT09
Meeting ID: 952 0672 9828
Passcode: 788375

Audience: Graduate, Faculty, Post-Docs
