PhD Proposal: Rethinking Evaluation and Interpretability in NLP
Shi Feng
Monday, May 27, 2019, 10:00 am-12:00 pm
Abstract
Recognizing issues in existing approaches, as with the discovery of adversarial examples, has been crucial to developing better problem formulations and more robust models. Despite the recent interest in adversarial evaluation and interpretation for NLP models, there is still substantial room for improvement. For example, it is not clear how to formulate an NLP counterpart to certifiable robustness from computer vision. In this talk, I will first take a critical look at how we currently do interpretation and evaluation in NLP. Specifically, we examine how methods that are supposed to help us understand models can instead mislead us. Then I will present some preliminary results on how we might move toward fixing these issues.

Examining Committee:
  Chair:    Dr. Jordan Boyd-Graber
  Dept rep: Dr. John Dickerson
  Members:  Dr. Hal Daumé III
            Dr. Marine Carpuat
            Dr. Alexander Rush
This talk is organized by Tom Hurst