Beyond Minimax: Learning Faster If You Can
Nishant Mehta - Centrum Wiskunde & Informatica
Tuesday, February 28, 2017, 11:00 am-12:00 pm
Abstract

Most machine learning theory focuses on studying the performance of systems under worst-case or even adversarial circumstances. However, the world does not always confront a learning agent with worst-case problems. The problem instances that arise in real-world tasks are often easier than the worst case, requiring fewer computational resources, less data, and lower data procurement costs. In this talk, I will discuss the notion of easiness in two contexts: statistical learning with fixed features, and problems where we learn the features (known as learning a representation). After giving an overview of the statistical learning setting and the idea of easiness in learning, I'll focus on my recent work on learning at faster rates under a wide-reaching condition known as stochastic mixability. I'll then shift to my work on refining measures of how much data is needed to learn, which provides theoretical justification for learning with less data. This part of the talk will focus primarily on my work on learning a dictionary for sparse coding in a supervised way. Lastly, I'll close with future directions.

Bio

Nishant Mehta is a researcher at Centrum Wiskunde & Informatica, working with Peter Grünwald. He was previously a postdoctoral research fellow at the Australian National University and NICTA, working with Bob Williamson. He completed his Ph.D. in Computer Science at Georgia Tech, advised by Alexander Gray, and obtained a B.S. in Computer Science from Georgia Tech. Nishant's research is in statistical learning theory and, more broadly, machine learning. A unifying theme of his work is identifying how to build learning systems that perform well with much less data than is currently used.

This talk is organized by Adelaide Findlay.