Privacy for the People: Designing systems that shift power over personal data to end-users
Zoom: https://umd.zoom.us/j/98095131895?pwd=bFRySUJZSytQcjFVVis0dFpuWU1TZz09
Tuesday, April 12, 2022, 11:00 am-12:00 pm
Abstract

Many people are frustrated and concerned with how corporations, intelligence agencies, and other adversarial actors harvest their personal data. Yet most feel powerless to protect themselves or effect change. How might we design computing systems that empower people with greater agency over their personal data? In this talk, I will describe my research exploring three interdisciplinary, human-centered approaches to this question.

First, my lab is working with materials scientists and electrical engineers to develop smart physical barriers that provide physical privacy guarantees against intrusive ambient sensors in IoT and edge computing devices. I will describe our design and evaluation of a Smart Webcam Cover (UbiComp’22) that uses physical-world signals to automatically obfuscate laptop webcams when users do not intend them to be in use.

Second, we are working with legal scholars to design systems that facilitate grassroots privacy collective action against intrusive institutional personal data practices. I will describe results from a deployment of our design probe, Unified Voice (CHI’22), which employs crowd programming patterns to shepherd collectives in generating and converging on concrete demands for redress in the wake of institutional privacy violations.

Third, we are working with human-centered AI and design researchers to develop Privacy through Design: a design methodology to help practitioners model the trade-offs between utility and intrusiveness in competing consumer AI design concepts. I will describe our work assessing the utility versus intrusiveness of Dynamic Audience Selection (CSCW’21) on Facebook: a design concept that lets users articulate in natural language who should and should not be able to see their posts.

My work presages a future in which well-meaning technologists and end-users can exert leverage over the institutions and adversarial actors that seek to harvest their personal data.

Bio


Dr. Sauvik Das is an Assistant Professor in the School of Interactive Computing at Georgia Tech, where he directs the Security, Privacy, Usability and Design (SPUD) Lab. He employs an interdisciplinary, human-centered approach to building systems that encourage end-users to adhere to expert-recommended security and privacy behaviors and that help combat institutional privacy violations. His work has been featured widely in the popular press, including The Atlantic, The Financial Times, and Slate. He has published 33 peer-reviewed papers at premier venues across human-computer interaction, social computing, and security/privacy, and is PI on five awards from NSF and industry totaling over $2.5 million in funding. He has received best paper and honorable mention awards at ACM CHI (3x), ACM CSCW, ACM UbiComp, and USENIX SOUPS; in addition, he received an honorable mention for NSA’s Best Scientific Cybersecurity Paper award in 2014. He is an NDSEG fellow, Qualcomm Innovation fellow, Stu Card fellow, and NSF EAPSI fellow, and a recent NSF CAREER award recipient.

This talk is organized by Dana Purcell.