Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in classical AI and philosophy, defeasible inference has not been extensively studied in the context of contemporary data-driven research on natural language inference and common-sense reasoning. In this talk, I will present a collection of new datasets and tasks aimed at defeasible reasoning in a natural language setting.
Rachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland, and a member of UMIACS and the Computational Linguistics and Information Processing Lab. She completed her Ph.D. in Computer Science at the Johns Hopkins Center for Language and Speech Processing in 2019 and was a Postdoctoral Young Investigator at the Allen Institute for AI in Seattle from 2019 to 2020. Her research focuses on problems in natural language inference, common-sense reasoning, and bias and fairness in NLP.