In this talk, I will show how to improve StyleGAN's image generation capabilities by incorporating simple illumination properties into the model. Our method, StyLitGAN, generates images with realistic lighting effects such as shadows and reflections, without any labeled, paired, or CGI data. I will also demonstrate Make It So, a near-perfect GAN inversion technique that outperforms previous state-of-the-art methods by large margins and can invert and relight real scenes, including never-before-seen out-of-domain images. Next, I will show how multiple scene properties can be predicted directly from a pretrained StyleGAN without updating or learning any new weight parameters. I will conclude by discussing the exciting implications of these results for generative AI.
Anand Bhattad is a Ph.D. student working with David Forsyth at UIUC. His research lies at the intersection of computer vision, computational photography, computer graphics, and machine learning, with a current focus on neural rendering, image-based lighting, and generative AI. His recent work (DIVeR) received a best paper nomination at CVPR 2022. He was named an outstanding emergency reviewer (CVPR 2021) and was rated an excellent teaching assistant (2016). More information is available on his webpage: https://anandbhattad.