Algorithmic Bias in Artificial Intelligence: The Seen and Unseen Factors Influencing Machine Perception of Images and Language
Wednesday, February 15, 2017, 11:00 am-12:00 pm
Abstract

Machine learning has recently surged in success, with similar algorithmic approaches effectively solving a variety of human-defined tasks. Tasks testing how well machines can perceive images and communicate about them have exposed strong effects of different types of bias, such as selection bias and dataset bias. In this talk, I will unpack some of these biases and how they affect machine perception today. I will introduce and detail the first computational model to leverage human Reporting Bias (what people mention) in order to learn ground-truth facts about the visual world.

Bio
 
I am a Senior Research Scientist in Google's Research & Machine Intelligence group, working on advancing artificial intelligence towards positive goals, as well as on ethics in AI and demographic diversity of researchers. My research is on vision-language and grounded language generation, focusing on how to help computers communicate based on what they can process. My work combines computer vision, natural language processing, social media, a variety of statistical methods, and insights from cognitive science.
 
Before Google, I was a founding member of Microsoft Research's "Cognition" group, focused on advancing vision-language artificial intelligence. Before MSR, I was a postdoctoral researcher at The Johns Hopkins University Center of Excellence, where I mainly focused on semantic role labeling and sentiment analysis using graphical models, working under Benjamin Van Durme. 
 
Before that, I was a postgraduate (PhD) student in the natural language generation (NLG) group at the University of Aberdeen, where I focused on how to naturally refer to visible, everyday objects. I primarily worked with Kees van Deemter and Ehud Reiter. 
 
I spent a good chunk of 2008 getting a Master's in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia. Overlapping with that time (2005-2012), I worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon. My title changed over time (research assistant/associate/visiting scholar), but throughout, I worked under Brian Roark on technology that leverages syntactic and phonetic characteristics to aid people with neurological disorders.
 
I continue to balance my time between language generation, applications for clinical domains, and core AI research.
 
This talk is organized by Naomi Feldman