Unsupervised Learning Revisited
Tuesday, February 20, 2018, 11:00 am-12:00 pm
Abstract
Modern datasets are massive, complex, and often unlabeled. These attributes make unsupervised learning important in several data-driven application domains. Although this ever-growing area demonstrates excellent empirical performance, it suffers from a major drawback: unlike classical learning methods, many modern approaches lack a fundamental theoretical understanding, which hinders the development of principled methods for improvement.
 
To resolve this issue, one approach is to draw appropriate connections between modern and classical learning methods. By leveraging the vast body of classical results, one can then either develop new learning algorithms or improve existing ones in a principled way. In this talk, I illustrate the success of this approach in two unsupervised learning problems: (1) learning a nonlinear dimensionality reduction of the data, and (2) learning probabilistic models from the data. In the first problem, by drawing connections with Maximal Correlation and PCA, our approach produces a new method called Maximally Correlated PCA, a nonlinear generalization of PCA with a data-dependent nonlinearity. In the second problem, by drawing connections to optimal transport, supervised learning, and rate-distortion theory, our approach leads to a principled design of Generative Adversarial Networks (GANs) in a baseline scenario.
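For context, the classical baseline that Maximally Correlated PCA generalizes is ordinary (linear) PCA. The sketch below shows standard PCA via the SVD; it is not the talk's method, only the linear starting point that the talk's data-dependent nonlinearity extends.

```python
import numpy as np

def pca(X, k):
    """Project an n x d data matrix X onto its top-k principal components.

    This is classical linear PCA, included only as the baseline that
    Maximally Correlated PCA generalizes; the talk's method replaces the
    identity feature map with a learned, data-dependent nonlinearity.
    """
    Xc = X - X.mean(axis=0)                           # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False) # rows of Vt = principal directions
    return Xc @ Vt[:k].T                              # n x k low-dimensional embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # toy data: 100 samples, 5 features
Z = pca(X, 2)
print(Z.shape)                  # (100, 2)
```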
 
Bio

Soheil Feizi is a post-doctoral research scholar at Stanford University. He received his Ph.D. in Electrical Engineering and Computer Science (EECS) with a minor degree in Mathematics from the Massachusetts Institute of Technology (MIT). His research interests are in the area of machine learning and statistical inference. He completed an M.Sc. in EECS at MIT, where he received the Ernst Guillemin award for his thesis, as well as the Jacobs Presidential Fellowship and the EECS Great Educators Fellowship. He also received the best student award at Sharif University of Technology, where he obtained his B.Sc.

This talk is organized by Brandi Adams.