Interpretable Machine Learning: What it means, How we're getting there
Wednesday, September 20, 2017, 11:00 am-12:00 pm
Abstract
As machine learning systems become ubiquitous, there is a growing interest in interpretable machine learning -- that is, systems that can provide human-interpretable rationale for their predictions and decisions.  However, our current desire for "interpretability" is as vague as asking for "good predictions" -- a desire that, while entirely reasonable, must be formalized into concrete objectives such as high average test performance (perhaps held-out likelihood is a good metric) or some kind of robust performance (perhaps sensitivity or specificity are more appropriate metrics).  In this talk, I'll discuss how we might think about formalizing interpretability, including insights from our collaborations with cognitive scientists to establish what kinds of explanation humans can process and with legal scholars on what a "right to explanation from AI systems" might practically mean.  I will also discuss ways in which we are optimizing machine learning models for human interpretability and correct explanations.
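To make the abstract's contrast concrete, here is a minimal sketch of how the vague goal of "good predictions" gets formalized into the specific metrics the abstract names: held-out log-likelihood, sensitivity, and specificity. This is an illustrative assumption, not material from the talk; the scikit-learn model and synthetic data are placeholders for whatever task is at hand.

```python
# A minimal sketch, assuming scikit-learn; the model and data are stand-ins.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, confusion_matrix

# Toy binary-classification data; any model and dataset could appear here.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# "Good predictions" formalized as average held-out log-likelihood
# (log_loss is the average negative log-likelihood, so we negate it):
held_out_ll = -log_loss(y_test, clf.predict_proba(X_test))

# ...or formalized as robust performance via sensitivity and specificity:
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate

print(f"held-out log-likelihood: {held_out_ll:.3f}")
print(f"sensitivity: {sensitivity:.3f}, specificity: {specificity:.3f}")
```

The talk's point is that "interpretability" currently lacks even this degree of formalization: there is no agreed-upon analogue of these metrics for measuring how interpretable a model's explanations are.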

Joint work with Been Kim, Andrew Ross, Mike Wu, Michael Hughes, Menaka Narayanan, Sam Gershman, and the Berkman Klein Center; the product of discussions with countless collaborators and colleagues.

Bio

Finale Doshi-Velez is an Assistant Professor of Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. Her interests lie at the intersection of machine learning and healthcare. She completed her PhD at MIT and her postdoc at Harvard Medical School, and is a Marshall Scholar.

This talk is organized by Marine Carpuat.