Fighting Black Boxes, Adversaries, and Bugs in Deep Learning
Friday, December 1, 2017, 1:00-2:00 pm
Abstract

While deep learning has been hugely successful in producing highly accurate models, the resulting models are sometimes (i) difficult to interpret, (ii) susceptible to adversaries, and (iii) prone to subtle implementation bugs due to their stochastic nature. In this talk, I will take initial steps towards addressing these problems of interpretability, robustness, and correctness using classic mathematical tools. First, influence functions from robust statistics can help us understand the predictions of deep networks by answering the question: which training examples are most influential on a particular prediction? Second, semidefinite relaxations can be used to provide guaranteed upper bounds on the amount of damage an adversary can do, for restricted classes of models. Third, we use the Lean proof assistant to produce a working implementation of stochastic computation graphs that is guaranteed to be bug-free.
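
For background on the first of these tools, a minimal sketch (the notation below is introduced here for illustration and is not part of the abstract): suppose \hat\theta denotes parameters fit by empirical risk minimization over training points z_1, ..., z_n with per-example loss L. The classical influence-function approximation to the effect of upweighting a training point z on the loss at a test point z_test is

    \mathcal{I}(z, z_{\mathrm{test}}) = -\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^\top \, H_{\hat\theta}^{-1} \, \nabla_\theta L(z, \hat\theta),
    \qquad H_{\hat\theta} = \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta^2 L(z_i, \hat\theta).

Ranking training points by this quantity answers "which training examples are most influential on a particular prediction" without retraining the model once per example; in practice the Hessian-inverse-vector product is typically approximated rather than computed exactly.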

Bio

Percy Liang is an Assistant Professor in the Computer Science and Statistics departments at Stanford. His research focuses on developing trustworthy agents that can communicate effectively with people and improve over time through interaction. He identifies with the machine learning (ICML, NIPS) and natural language processing (ACL, NAACL, EMNLP) communities.

More details can be found on his webpage: https://cs.stanford.edu/~pliang/

This talk is organized by Octavian Suciu.