Learning about Light without Labeled Data
Anand Bhattad
IRB 4105, Zoom Link: https://umd.zoom.us/j/7316339020
Monday, February 27, 2023, 12:00-1:00 pm
Abstract

In this talk, I will show how to improve StyleGAN's image generation capabilities by incorporating simple illumination properties into the model. Our method, StyLitGAN, generates images with realistic lighting effects such as shadows and reflections without any labeled, paired, or CGI data. I will also demonstrate a near-perfect GAN inversion technique, Make It So, which outperforms previous state-of-the-art GAN inversion methods by large margins and can invert and relight real scenes, including never-before-seen out-of-domain images. Lastly, I will show how multiple scene properties can be predicted directly from a pretrained StyleGAN without updating or learning any new weights. I will conclude by discussing the exciting implications of these methods for generative AI.

Bio

Anand Bhattad is a Ph.D. student working with David Forsyth at UIUC. His research interests lie at the intersection of computer vision, computational photography, computer graphics, and machine learning. His current research focuses on neural rendering, image-based lighting, and generative AI. His recent work received a best paper nomination at CVPR 2022 (DIVeR). He was recognized as an outstanding emergency reviewer (CVPR 2021) and was rated an excellent teaching assistant (2016). More information is available on his webpage: https://anandbhattad.github.io/

This talk is organized by Richa Mathur