Sentence Understanding with Neural Networks and Natural Language Inference
Wednesday, April 5, 2017, 11:00 am-12:00 pm
Artificial neural network models for language understanding problems represent an increasingly large and increasingly successful thread of research within natural language processing. When developing these models in typical settings, though, it can be difficult to identify the degree to which they capture the meanings of natural language sentences, and correspondingly difficult to identify research directions that are likely to yield progress on the underlying language understanding problem.
In this talk, I introduce natural language inference, the task of judging whether one sentence is true or false given that some other sentence is true, and argue that this task is distinctly effective as a means of developing and evaluating sentence understanding models in NLP. In three sections, I'll first introduce the task and the Stanford NLI corpus (SNLI, EMNLP '15), then present the Stack-Augmented Parser-Interpreter Neural Network (SPINN, ACL '16), a model developed on that corpus, and finally introduce a new data collection effort and shared task called MultiNLI.
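The task described above can be made concrete with a small sketch. This is an illustrative example only, not material from the talk: the premise/hypothesis sentences are invented, and the three-way label set (entailment, contradiction, neutral) follows SNLI's convention.

```python
# Illustrative sketch: natural language inference frames sentence
# understanding as three-way classification over (premise, hypothesis)
# pairs. Label names follow SNLI's convention; the sentences and
# labels below are hand-written examples for illustration.

LABELS = ("entailment", "contradiction", "neutral")

examples = [
    ("A man is playing a guitar on stage.",
     "A man is performing music.", "entailment"),
    ("A man is playing a guitar on stage.",
     "The man is asleep in bed.", "contradiction"),
    ("A man is playing a guitar on stage.",
     "The man is a professional musician.", "neutral"),
]

def is_valid_example(premise, hypothesis, label):
    """Check that an example is well-formed: two non-empty
    sentences and a label drawn from the SNLI label set."""
    return bool(premise) and bool(hypothesis) and label in LABELS

assert all(is_valid_example(*ex) for ex in examples)
```

A model for this task, such as SPINN, maps each (premise, hypothesis) pair to a distribution over these three labels.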

Sam Bowman recently started as an assistant professor at New York University. Sam is appointed in the Department of Linguistics and the Center for Data Science and is a co-director of the Machine Learning for Language group and the CILVR applied machine learning lab. He completed a PhD in Linguistics in 2016 at Stanford University as a member of the Stanford Natural Language Processing Group, and during that time was a frequent research intern at Google.

Sam's research focuses on building artificial neural network models for natural language processing problems that involve sentence understanding. Sam's 2016 work on deep generative models for text was covered by Quartz under the baffling headline "See the creepy, romantic poetry that came out of a Google AI system."

This talk is organized by Naomi Feldman.