PhD Defense: Everything efficient all at once - Compressing data and deep networks
Sharath Girish
Thursday, October 10, 2024, 11:00 am-1:00 pm
Abstract
Over the past decade, there has been a surge in the use of bulky deep networks that demand significant memory and computation, limiting their deployment on edge devices with storage and power constraints. These networks also rely on ever-growing amounts of data, which is being created and transmitted at an exponential rate. My talk will introduce a unified framework for reducing memory and computation costs, and explore its application to data compression via efficient representations.


In the first part, I will discuss the framework for compressing convolutional neural networks (CNNs). We use quantized latent representations to improve storage efficiency on disk, while simultaneously inducing sparsity in the network to improve computational efficiency.
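To make the two axes concrete, here is a minimal NumPy sketch of uniform weight quantization combined with magnitude-based sparsification. The bit width, keep ratio, and function names are illustrative assumptions, not the talk's actual method, which learns quantized latents and sparsity jointly during training.

```python
import numpy as np

def quantize(w, n_bits=8):
    """Uniformly quantize weights to 2**n_bits levels (smaller on disk)."""
    levels = 2 ** n_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels
    q = np.round((w - w_min) / scale)      # integer codes in [0, levels]
    return q * scale + w_min               # dequantized values

def sparsify(w, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights (fewer multiply-adds)."""
    k = int(w.size * keep_ratio)
    thresh = np.partition(np.abs(w).ravel(), -k)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_eff = sparsify(quantize(w), keep_ratio=0.5)
print(np.mean(w_eff == 0))  # roughly half the entries are zero
```

In practice the quantization is applied to learned latent representations of the weights rather than the raw weights, and both operations are differentiable approximations during training.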

The second part of my talk will focus on applying the framework to data compression via implicit neural representations (INRs). We develop a method to compress multi-scale hash grid INRs for various forms of data, such as images, videos, and even 3D scene representations. I will also discuss our work on video-specific compression that exploits the inherent spatio-temporal redundancies in video.
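For readers unfamiliar with hash grid INRs, the sketch below illustrates the multi-scale lookup these representations are built on: a coordinate is hashed into a small feature table at each resolution level, and the per-level features are concatenated before being decoded by a small MLP. The table sizes, hash primes, and level choices here are assumptions for exposition (in the Instant-NGP style), not the compression method itself.

```python
import numpy as np

# Per-dimension primes for spatial hashing (2D case shown).
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_features(coords, table, resolution):
    """Look up per-level features for 2D coords in [0, 1)."""
    cells = np.floor(coords * resolution).astype(np.uint64)   # grid cell ids
    idx = (cells * PRIMES).sum(axis=-1) % np.uint64(len(table))
    return table[idx]                                         # (N, F) features

rng = np.random.default_rng(0)
levels = [16, 64, 256]                 # coarse-to-fine grid resolutions
tables = [rng.normal(size=(2**12, 2)).astype(np.float32) for _ in levels]

coords = rng.random((5, 2)).astype(np.float32)
feats = np.concatenate(
    [hash_features(coords, t, r) for t, r in zip(tables, levels)], axis=-1
)
print(feats.shape)  # (5, 6): 3 levels x 2 features each
```

Because most of an INR's parameters live in these tables, compressing the tables (e.g. via the quantization-sparsity framework above) compresses the represented data itself.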

Next, I will cover methods to improve the efficiency of 3D Gaussian Splatting (3D-GS) as an explicit 3D representation. I will begin by introducing a training framework for static scene 3D-GS, which enhances training and rendering speeds while reducing storage and runtime memory requirements. I will then extend this approach to dynamic scenes in a streamable setting using efficient per-frame deformable 3D-GS. Our joint quantization-sparsity framework, combined with an adaptive masking technique, significantly reduces training time and memory usage while maintaining real-time rendering speeds and high reconstruction quality.
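As a rough picture of the adaptive masking idea, the sketch below gives each Gaussian a learnable mask logit and prunes those whose hard mask falls to zero, shrinking both memory and per-frame rendering work. The names, distributions, and threshold are hypothetical; the actual method learns these masks jointly with the quantization-sparsity objective.

```python
import numpy as np

def hard_mask(logits, threshold=0.5):
    """Binarize soft masks. During training, a straight-through
    estimator would let gradients flow through the soft sigmoid."""
    soft = 1.0 / (1.0 + np.exp(-logits))   # sigmoid in (0, 1)
    return (soft > threshold).astype(np.float32)

rng = np.random.default_rng(0)
n = 10_000
positions = rng.normal(size=(n, 3)).astype(np.float32)       # Gaussian means
mask_logits = rng.normal(loc=-1.0, size=n).astype(np.float32)

keep = hard_mask(mask_logits) > 0
pruned = positions[keep]
print(f"kept {pruned.shape[0]} of {n} Gaussians")
```

For dynamic scenes in the streamable setting, a mask like this can be applied per frame on top of the deformation, so that only Gaussians that actually change need to be stored and transmitted.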
 
Bio

Sharath Girish is a PhD student at the University of Maryland, College Park, advised by Prof. Abhinav Shrivastava. His research mainly focuses on accelerating and compressing deep networks. He is also interested in learning efficient and compact neural representations for data such as images, videos, and 3D scenes.

This talk is organized by Migo Gui