Defeasible Inference in Natural Language
Virtual - https://umd.zoom.us/j/97647463145?pwd=UlQrQ0ttRHIyd3RCN0Vta01SdkJKdz09
Wednesday, October 21, 2020, 11:00 am-12:00 pm

Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in classical AI and philosophy, defeasible inference has not been extensively studied in the context of contemporary data-driven research on natural language inference and common-sense reasoning. In this talk, I will present a collection of new datasets and tasks aimed at defeasible reasoning in a natural language setting.


Rachel Rudinger is a new Assistant Professor in the Department of Computer Science at the University of Maryland, and a member of UMIACS and the Computational Linguistics and Information Processing (CLIP) Lab. She received her Ph.D. in Computer Science from the Johns Hopkins Center for Language and Speech Processing in 2019 and was a Postdoctoral Young Investigator at the Allen Institute for AI in Seattle in 2019-2020. Her research focuses on problems in natural language inference, common-sense reasoning, and bias and fairness in NLP.

This talk is organized by Wei Ai.