Learning Decision Making Systems under (Adversarial) Distribution Shifts
Furong Huang
IRB 0318
Friday, October 22, 2021, 11:00 am-12:00 pm
Abstract

Also on Zoom: https://umd.zoom.us/j/96718034173?pwd=clNJRks5SzNUcGVxYmxkcVJGNDB4dz09

Reinforcement learning is an effective way to model interactive, real-time decision-making processes, and it is grounded in applications such as robotics, personalized healthcare, autonomous driving, and market-making systems. In reinforcement learning, the agent has to explore an unknown environment in order to make informed decisions. However, the lengthy exploration required during learning may expose agents to danger, since they often take suboptimal or even random actions. In addition, the agent could fail catastrophically under perturbations or even adversarial attacks.

In this talk, I will outline methods for learning to learn, learning to adapt, and learning to generalize in RL. I will also present a robust learning paradigm for RL decision-making systems under adversarial perturbations.

Bio

Furong Huang is an Assistant Professor in the Department of Computer Science at UMD. She works on statistical machine learning, security in machine learning, reinforcement learning, deep learning theory, and federated learning. Dr. Huang is a recipient of the NSF CRII Award, the MLconf Industry Impact Research Award, the Adobe Faculty Research Award, and two JP Morgan Faculty Research Awards.

This talk is organized by Richa Mathur.