PhD Defense: AI on a budget: Sequential Decision Making Problems with Resource Constraints
Karthik Sankararaman
Friday, July 19, 2019, 1:00-3:00 pm
Abstract

One of the goals of Artificial Intelligence (AI) is to enable multiple agents to interact, coordinate, and compete with each other to realize various goals. Typically, this is achieved via a system that acts as a mediator, controlling the agents' behavior via incentives. Such systems are ubiquitous and include online systems for shopping (e.g., Amazon), ride-sharing (e.g., Uber, Lyft), and Internet labor markets (e.g., Mechanical Turk). The main algorithmic challenge in such systems is to ensure that they can operate under a variety of informational constraints, such as uncertainty in the input, committing to actions based on partial information, or robustness to noisy input. The mathematical frameworks used to study such systems are broadly called sequential decision-making problems: the algorithm does not receive the entire input at once; instead, it obtains parts of the input by interacting with the environment through what are called "actions".
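To illustrate this interaction model, here is a minimal sketch of a generic sequential decision-making loop (not taken from the thesis); the Environment class, its Bernoulli feedback, and the policy interface are hypothetical placeholders.

    import random

    # A hypothetical environment used only for illustration: feedback is revealed
    # one action at a time, so the algorithm never sees the whole input up front.
    class Environment:
        def __init__(self, num_actions):
            self.means = [random.random() for _ in range(num_actions)]

        def step(self, action):
            # Noisy, partial feedback for the chosen action only.
            return 1.0 if random.random() < self.means[action] else 0.0

    def interact(policy, num_actions=5, horizon=1000):
        env = Environment(num_actions)
        history, total = [], 0.0
        for t in range(horizon):
            action = policy(history)      # decide using only past observations
            reward = env.step(action)     # observe feedback for this action alone
            history.append((action, reward))
            total += reward
        return total

For example, interact(lambda history: random.randrange(5)) runs the loop with a trivial uniformly random policy.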

In this thesis, we answer the following question: under what informational constraints can we design efficient algorithms for sequential decision-making problems?

The first part of the thesis deals with the Online Matching problem. Here, the algorithm faces two prominent constraints: uncertainty in the input and a combinatorial restriction on the set of available actions. We design several new algorithms for many variants of this problem and provide provable guarantees. We also demonstrate their efficacy on the ride-sharing application using a real-world dataset. In the second part of the thesis, we consider the multi-armed bandit problem with additional informational constraints. In this setting, the algorithm does not receive the entire input and needs to make decisions based on partial observations. Additionally, the set of possible actions is controlled by global resource constraints that bind across time. We design new algorithms for multiple variants of this problem that are worst-case optimal, and we give a general framework that reduces these problems to the classic multi-armed bandit problem without constraints. We complement some of the results with preliminary numerical experiments.
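To make the resource-constrained bandit setting concrete, the following is a minimal sketch assuming a single global budget and a simple epsilon-greedy rule; it is an illustrative baseline, not one of the thesis's worst-case-optimal algorithms, and the arm means, costs, and budget are hypothetical.

    import random

    def budgeted_epsilon_greedy(means, costs, budget, horizon, eps=0.1):
        """Illustrative epsilon-greedy bandit with one global resource constraint.

        `means` and `costs` are unknown to a real algorithm; they are used here
        only to simulate stochastic feedback. The algorithm observes only the
        reward and cost of the arm it pulls, and must stop once the budget is spent.
        """
        k = len(means)
        pulls = [0] * k
        reward_sum = [0.0] * k
        spent, total_reward = 0.0, 0.0

        for t in range(horizon):
            if spent >= budget:   # the resource constraint binds across all rounds
                break
            if t < k:
                arm = t                                   # pull each arm once first
            elif random.random() < eps:
                arm = random.randrange(k)                 # explore
            else:
                arm = max(range(k), key=lambda i: reward_sum[i] / pulls[i])  # exploit

            # Partial feedback: only the chosen arm's stochastic reward is revealed.
            reward = 1.0 if random.random() < means[arm] else 0.0
            pulls[arm] += 1
            reward_sum[arm] += reward
            total_reward += reward
            spent += costs[arm]

        return total_reward, spent

For instance, budgeted_epsilon_greedy([0.3, 0.5, 0.7], [1.0, 1.0, 2.0], budget=200, horizon=1000) stops as soon as the cumulative cost reaches the budget, even if the time horizon has not been exhausted.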

 

Examining Committee:

  Chair:          Dr. Aravind Srinivasan
  Dean's Rep.:    Dr. Prakash Narayan
  Members:        Dr. Aleksandrs Slivkins
                  Dr. Furong Huang
                  Dr. John Dickerson
                  Dr. Tom Goldstein
This talk is organized by Tom Hurst