Over the past two years at Microsoft I spent a significant amount of time hanging out and working with social scientists, legal scholars, economists, humanists, and computer scientists in the Fairness, Accountability, Transparency and Ethics (FATE) group. In this talk, I'll share some small pieces of what I learned on this expedition, focusing mostly on questions around risks of harm in NLP systems, the needs of engineers building machine learning systems, and work bridging some of the gap between how humans construct explanations and how computers do.
Hal Daumé III holds a professorship in Computer Science and Language Science at the University of Maryland, and spends time as a principal researcher in the machine learning group and fairness group at Microsoft Research in New York City. He and his wonderful advisees study questions related to how to get machines to become more adept at human language, by developing models and algorithms that allow them to learn from data. The two major questions that drive their research these days are: (1) how can we get computers to learn language through natural interaction with people/users? and (2) how can we do this in a way that promotes fairness, transparency, and explainability in the learned models?