PhD Proposal: Towards Inverse Rendering with Global Illumination
Saeed Hadadan
Wednesday, April 20, 2022, 12:30-2:30 pm
Neural representations have become increasingly popular in the graphics and vision communities. The representational power of neural networks in high-dimensional spaces has led to their use in representing geometry, radiance, reflectance, and visibility fields, to name but a few. That said, only a few works in photo-realistic rendering leverage neural networks in a way that accounts for full global illumination effects. In other words, inverse rendering that handles full global illumination both effectively and efficiently is still a dream. Toward making neural inverse rendering with global illumination possible, we introduce two pieces of the puzzle: Neural Radiosity and Differentiable Neural Radiosity, two methods in which neural networks learn the radiance function and the differential radiance function, respectively, while accounting for global illumination.

We introduce Neural Radiosity, an algorithm that solves the rendering equation by minimizing the norm of its residual, as in classical radiosity techniques. Traditional radiosity basis functions, such as piecewise polynomials or meshless basis functions, are typically limited to representing isotropic scattering from diffuse surfaces. Instead, we propose to leverage neural networks to represent the full four-dimensional radiance distribution, directly optimizing the network parameters to minimize the norm of the residual. Our approach decouples solving the rendering equation from rendering (perspective) images, as in traditional radiosity techniques, and allows us to efficiently synthesize arbitrary views of a scene.
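The residual-minimization idea can be illustrated on a toy problem. In the sketch below (not the paper's method; the 3-patch scene, transport matrix, and step size are illustrative assumptions, and the radiance is a plain vector rather than a neural network), the rendering equation is discretized into the linear system L = E + T L, and the solution is found by gradient descent on the squared residual norm instead of by inverting (I - T):

```python
import numpy as np

# Toy residual-norm minimization in the spirit of the approach above,
# on a hypothetical 3-patch scene. The rendering equation becomes the
# linear system L = E + T L; we minimize ||L - E - T L||^2 by gradient
# descent on L rather than solving the system directly.

E = np.array([1.0, 0.0, 0.0])           # emission per patch
T = np.array([[0.0, 0.3, 0.1],          # transport (reflectance x form factors)
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])

A = np.eye(3) - T
L = np.zeros(3)                         # initial guess for the radiance solution
for _ in range(2000):
    r = A @ L - E                       # residual of the rendering equation
    L -= 0.1 * (A.T @ r)                # gradient step on 0.5 * ||r||^2

L_exact = np.linalg.solve(A, E)         # ground truth for this toy system
print(np.allclose(L, L_exact, atol=1e-6))
```

In the actual method the vector L would be replaced by a network evaluated at surface points and directions, and the residual norm would be estimated by Monte Carlo sampling, but the optimization target is the same.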

We introduce Differentiable Neural Radiosity, a novel method for representing the solution of the differential rendering equation using a neural network. Inspired by Neural Radiosity, we minimize the norm of the residual of the differential rendering equation to directly optimize our network. The network outputs continuous, view-independent gradients of the radiance field with respect to scene parameters, taking differential global illumination effects into account while keeping memory and time complexity constant in path length. To solve inverse rendering problems, we use a pre-trained instance of our network that represents the differential radiance field with respect to a limited number of scene parameters.
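The differential residual can be sketched on the same toy discretization (again an illustrative assumption, not the paper's setup). Differentiating L = E(θ) + T L with respect to a scene parameter θ gives the linear system (I - T) dL = dE + dT·L; below, θ scales one patch's emission (so dT = 0), and dL is found by minimizing the residual norm of this differential equation, then checked against finite differences:

```python
import numpy as np

# Toy differential rendering equation on a hypothetical 3-patch scene.
# Differentiating L = E(theta) + T L gives (I - T) dL = dE + dT L; as
# above, we recover dL by gradient descent on the residual norm.

T = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
A = np.eye(3) - T

def emission(theta):                    # scene parameter scales patch 0's emission
    return np.array([theta, 0.0, 0.0])

theta = 1.0
L = np.linalg.solve(A, emission(theta)) # primal radiance solution
dE = np.array([1.0, 0.0, 0.0])          # dE/dtheta; dT/dtheta = 0 in this toy

dL = np.zeros(3)                        # differential radiance, found by descent
for _ in range(2000):
    r = A @ dL - dE                     # residual of the differential equation
    dL -= 0.1 * (A.T @ r)

# Sanity check against finite differences of the primal solution
h = 1e-6
fd = (np.linalg.solve(A, emission(theta + h)) - L) / h
print(np.allclose(dL, fd, atol=1e-4))
```

Note that the descent never differentiates through a path-tracing loop, which is the discrete analogue of the constant-in-path-length memory and time complexity mentioned above.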

Examining Committee:
Department Representative:
Dr. Matthias Zwicker        
Dr. Soheil Feizi
Dr. Ramani Duraiswami

Saeed Hadadan is a PhD student in the Department of Computer Science at UMD, advised by Professor Matthias Zwicker. He received his BSc degree in Computer Engineering from Sharif University of Technology in 2019 and joined UMD the same year. His research focuses on the intersection of artificial intelligence and computer graphics, with the goal of enabling next-generation AR/VR and computer graphics applications. In particular, his recent research aims to make inverse rendering that accounts for global illumination possible. As an active member of the UMD community, he serves as a Graduate Student Government representative for the CS department, in addition to being a board member of ISSS-ISAB.

This talk is organized by Tom Hurst