PhD Dissertation Defense: Efficient Models and Learning Strategies for Resource-Constrained Systems
Tahseen Rabbani
Wednesday, March 13, 2024, 3:30-5:00 pm
Abstract
Over the last decade, two trends have become readily apparent in machine learning (ML): models (1) continue to improve in task performance and (2) continue to grow larger. In parallel, research and development on the Internet of Things (IoT) has seen similarly dramatic leaps in quality and popularity. Yet, due to computational and memory constraints, deep learning on clients such as mobile devices is limited to comparatively tiny model sizes. Encouragingly, conventional techniques such as knowledge distillation, pruning, and the lottery ticket hypothesis suggest that massive architectures contain smaller, highly performant models. However, these existing approaches require large base/teacher models, expensive neural architecture searches, or pre-training.

In this talk, we introduce multiple novel strategies for (1) reducing the scale of deep neural networks and (2) faster learning. For the size problem (1), we leverage tools such as tensorization, randomized projections, and locality-sensitive hashing to train on reduced representations of large models without sacrificing performance; a minimal illustrative sketch of this idea follows below. For learning efficiency (2), we develop algorithms for cheaper forward passes, accelerated PCA, and asynchronous gradient descent. Several of these methods are tailored for federated learning (FL), a private, distributed learning paradigm in which data remains decentralized among resource-constrained edge clients. We are exclusively concerned with improving efficiency during training, as opposed to concurrently training a teacher model or pruning/reducing a large pre-trained base model.
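To make the "reduced representation" idea concrete, here is a minimal sketch of a generic random-projection parameterization of a single linear layer. It is an assumption-laden illustration (the layer sizes, projection, and variable names are hypothetical), not the specific methods presented in this defense: the full d_out x d_in weight matrix is factored through a fixed random projection so that only a much smaller core matrix is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes (illustrative only); k << d_in is the reduced dimension.
d_in, d_out, k = 1024, 1024, 64

# Fixed random projection (never trained); only the small core W_small is learned.
P = rng.normal(size=(k, d_in)) / np.sqrt(k)    # k x d_in
W_small = rng.normal(size=(d_out, k)) * 0.01   # d_out x k, the trainable parameters

def forward(x):
    """Linear layer whose implicit d_out x d_in weight matrix W = W_small @ P
    is never materialized; we only ever touch the reduced representation."""
    return (x @ P.T) @ W_small.T               # (batch, d_in) -> (batch, d_out)

x = rng.normal(size=(8, d_in))
y = forward(x)

full_params = d_in * d_out
reduced_params = d_out * k
print(y.shape, f"trainable params: {reduced_params} vs {full_params} "
      f"({reduced_params / full_params:.1%} of full)")
```

In such a parameterization, gradients flow only through the small core, so the trainable parameter count (and any optimizer state) shrinks by roughly a factor of d_in / k; the strategies discussed in the talk pursue the same goal with tensorization and hashing-based tools.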

Bio
Tahseen Rabbani is a Ph.D. candidate in the Department of Computer Science at the University of Maryland, College Park, advised by Dr. Furong Huang. His research broadly encompasses distributed learning, privacy, compression, and training efficiency. He is an RSAC Security Scholar (2024), an NSF COMBINE Fellow, and an Apple Scholars in AI/ML nominee (2022).

This talk is organized by Migo Gui