PhD Proposal: How Technology Impacts and Compares to Humans in Socially Consequential Arenas
Samuel Dooley
Remote
Monday, January 10, 2022, 12:00-2:00 pm
Abstract
One of the main promises of technology development is for it to be adopted by people, organizations, societies, and governments and incorporated into their lives, workflows, or processes. Often this is socially beneficial: the technology automates mundane tasks, frees up time for more important work, or otherwise improves the lives of those who use it. However, these benefits do not apply in every scenario and may not affect everyone in a system the same way. Sometimes a technology both produces benefits and inflicts harms, and those harms may come at a higher cost to some people than to others, raising the question: how are benefits and harms weighed when deciding if and how a socially consequential technology gets developed? The most natural way to answer this question, and in fact how people first approach it, is to compare the new technology to what came before. As such, in this work I make comparative analyses between humans and machines in three scenarios and seek to understand how sentiment about a technology, the performance of that technology, and the impacts of that technology combine to influence how one answers my main research question.

In this work, I look at three such scenarios: (1) decision support tools, (2) facial analysis technology, and (3) Covid-19 technology. In the first, I explore settings where human evaluators are tasked with finding the best individuals in a population (of people or things) and can draw on a variety of data sources to help them. One example is mental health screening, where a clinician with a variety of information sources (in-person sessions, audio recordings, social media posts) wants to find the most at-risk individuals in a population. For this problem, I develop novel algorithms and evaluate their efficacy and the improvements they offer over human evaluators alone.

In the second setting, I compare how humans and machines are vulnerable to errors in facial analysis technology, examining errors in facial verification, identification, and detection. For facial verification and identification, I compare the biases exhibited by humans to those of machines and conclude that similar biases exist in both. For facial detection, I examine the robustness of commercial systems to synthetic perturbations that simulate natural noise corruptions, finding biases along age, gender, skin type, and lighting conditions.

Finally, with Covid-19, I use data from field studies and survey collections to show that people’s perceptions of privacy and security have both influenced and been influenced by the Covid-19 pandemic. Together, my findings from these three scenarios contribute to our understanding of the expansiveness of, and the limits to, technological interventions.

Examining Committee:
Chair:
Department Representative:
Members:
Dr. John Dickerson
Dr. Philip Resnik
Dr. Tom Goldstein
Dr. Elissa Redmiles
Bio

Samuel Dooley is a fourth-year PhD student at the University of Maryland, motivated by work that improves, studies, or changes how technology is used in applications with social impact. Together with his advisor, John P. Dickerson, he researches existing technology and develops novel machine learning techniques that help people better use assistive technology and minimize technological harm.

This talk is organized by Tom Hurst