Insights on learning representations with dictionary learning and autoencoders
Wednesday, November 30, 2016, 11:00 am-12:00 pm
Abstract

The success of prediction algorithms relies heavily on the data representation. Representation learning reduces the need for feature engineering, with notable successes in applications using neural networks and dictionary learning. In this talk, I will discuss new insights into effectively learning representations, particularly through supervised dictionary learning and supervised autoencoders. I will present new results on obtaining globally optimal solutions, and provide simple algorithms that are amenable to incremental estimation. Further, I will highlight how techniques from dictionary learning can inform choices in supervised autoencoders, leading to a more effective supervised representation learning architecture.

Bio

Martha White is an assistant professor of Computer Science at Indiana University Bloomington. She received her PhD in Computer Science from the University of Alberta. Her primary research goal is to develop learning algorithms for autonomous agents that learn on streams of data. To achieve this goal, her research focuses on developing practical algorithms for reinforcement learning and representation learning, including parameter-free, sample-efficient methods for policy evaluation and control in reinforcement learning, and principled optimization approaches for sparse coding, kernel representations, and neural networks.

This talk is organized by Naomi Feldman.