Deep probabilistic generative models are flexible models of complex, high-dimensional data. They have found numerous applications, e.g., computer vision, natural language processing, chemistry, biology, and physics. They are also important components of model-based reinforcement learning algorithms. This widespread use of deep generative models underscores the need to understand how they work and where they fall short. In this talk I will discuss the two main learning frameworks for deep generative models, the variational autoencoder (VAE) and the generative adversarial network (GAN). I will highlight their shortcomings and introduce two of my works that address those shortcomings. More specifically, I will discuss my work on reweighted expectation maximization and on entropy-regularized adversarial learning as alternatives to the VAE and GAN approaches, respectively.
Adji is a Ph.D. student in the Department of Statistics at Columbia University, where she is jointly advised by David Blei and John Paisley. Her doctoral work is on deep generative models. More specifically, she designs algorithms for fitting deep generative models and combines probabilistic modeling and deep learning to embed structure into them. Prior to joining Columbia, she worked as a Junior Professional Associate at the World Bank. She did her undergraduate training in France, where she attended Lycée Henri IV and Télécom ParisTech, part of France's Grandes Écoles system.