Social, Cultural, and Political biases and Blind Spots in AI Systems
Wednesday, April 6, 2022, 11:00 am-12:00 pm

In the first part, I will discuss how societal and cultural biases are reflected in popular entertainment content created by a few and consumed by many. Specifically, I will describe key findings from our study of popular Bollywood and Hollywood movies spanning the last seven decades.

In the second part, I will describe a new methodology that offers a fresh perspective on interpreting and understanding political and ideological biases through machine translation. Focusing on a year that saw a raging pandemic, sustained worldwide protests demanding racial justice, an election of global consequence, and a far-from-peaceful transfer of power, I will show how our methods can shed light on the deepening political divide in the US.

In the final part, I will show why AI systems deployed to serve millions need rigorous checks and balances. I will introduce inappropriate content hallucination -- a novel paradigm where the inappropriate content is not present in the source yet is hallucinated by state-of-the-art systems. I will show real-world examples highlighting the potential risks of over-reliance on AI systems.

Zoom: https://umd.zoom.us/j/98806584197?pwd=SXBWOHE1cU9adFFKUmN2UVlwUEJXdz09

(passcode if needed: clip)


Ashique KhudaBukhsh is an assistant professor at the Golisano College of Computing and Information Sciences, Rochester Institute of Technology (RIT). His current research lies at the intersection of NLP and AI for Social Impact, applied to: (i) globally important events arising in linguistically diverse regions, which require methods that tackle the practical challenges of multilingual, noisy, social media texts; and (ii) polarization in the context of the current US political crisis. In addition to his research being accepted at top artificial intelligence conferences and journals, his work has received widespread international media attention, including multiple stories in the BBC, Wired, Salon, The Independent, VentureBeat, and Digital Trends.

Prior to joining RIT, Ashique was a Project Scientist at the Language Technologies Institute, Carnegie Mellon University (CMU), mentored by Prof. Tom Mitchell. Before that, he was a postdoc mentored by Prof. Jaime Carbonell at CMU. His PhD thesis (Computer Science Department, CMU, also advised by Prof. Jaime Carbonell) focused on distributed active learning.

This talk is organized by Wei Ai.