Ultra-scale Machine Learning Algorithms with Theoretical Foundations
Heng Huang
IRB-4105, Virtual- https://umd.zoom.us/j/98095131895?pwd=bFRySUJZSytQcjFVVis0dFpuWU1TZz09
Tuesday, May 3, 2022, 11:00 am-12:00 pm

Machine learning and artificial intelligence are gaining fresh momentum and have enhanced not only many industrial and professional processes but also our everyday lives. The recent success of machine learning relies heavily on the surge of big data, big models, and big computing; however, inefficient algorithms often restrict the application of machine learning to very large-scale tasks. In terms of big data, serious concerns such as communication overhead and convergence speed must be rigorously addressed when we train learning models on large amounts of data located at multiple computers or devices. In terms of big models, training a model that is too big for a single computer or device remains an underexplored research area. To address these challenging problems, we focus on designing new ultra-scale machine learning algorithms, efficiently optimizing and training models for big-data problems, and pursuing new discoveries in both theory and applications.


For the challenges raised by big data, we proposed multiple new asynchronous distributed stochastic gradient descent, coordinate descent, and zeroth-order methods with variance-reduction acceleration for efficiently solving convex and non-convex problems with faster convergence rates. We also designed a new momentum-fusion-based algorithm, with theoretical analysis, for communication-efficient federated learning. For the challenges raised by big models, we scaled up deep learning models by parallelizing the layer-wise computations with a theoretical guarantee; this is the first algorithm, with a proven convergence guarantee, to break the lock of the backpropagation mechanism so that large-scale deep learning models can be dramatically accelerated.
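To give a flavor of the zeroth-order methods mentioned above, the sketch below shows the basic building block they share: estimating a gradient from function evaluations alone by probing random directions. This is a minimal, illustrative example, not the speaker's actual algorithm; the function names, sample counts, and step sizes are assumptions chosen for clarity, and the real methods add asynchrony and variance reduction on top of this primitive.

```python
import random

def zo_gradient(f, x, mu=1e-4, samples=20, rng=None):
    """Estimate the gradient of f at x using only function values.

    For a random Gaussian direction u, (f(x + mu*u) - f(x)) / mu
    approximates the directional derivative of f along u; multiplying
    by u and averaging over many directions recovers (in expectation)
    the full gradient, without ever computing derivatives.
    """
    rng = rng or random.Random(0)
    n = len(x)
    grad = [0.0] * n
    fx = f(x)
    for _ in range(samples):
        u = [rng.gauss(0.0, 1.0) for _ in range(n)]
        delta = (f([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
        for i in range(n):
            grad[i] += delta * u[i] / samples
    return grad

# Minimize a simple quadratic f(x) = sum(x_i^2) by plain descent
# on the zeroth-order estimate (true minimum is at the origin).
f = lambda x: sum(xi * xi for xi in x)
x = [1.0, -2.0]
for _ in range(200):
    g = zo_gradient(f, x)
    x = [xi - 0.05 * gi for xi, gi in zip(x, g)]
```

After the loop, `x` sits close to the origin even though no derivative of `f` was ever evaluated, which is why such estimators are useful when gradients are unavailable or expensive; variance-reduction techniques of the kind discussed in the talk are aimed precisely at taming the noise of this estimate.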


Dr. Heng Huang is the John A. Jurenko Endowed Professor in Electrical and Computer Engineering at the University of Pittsburgh and a Professor in Biomedical Informatics at the University of Pittsburgh Medical Center. Dr. Huang received his PhD in Computer Science from Dartmouth College. His research areas include machine learning, artificial intelligence, and biomedical data science. Dr. Huang has published more than 250 papers in top-tier conferences and many papers in premium journals, such as ICML, NeurIPS, KDD, IJCAI, AAAI, RECOMB, ISMB, ICCV, CVPR, Nature Machine Intelligence, Nucleic Acids Research, Bioinformatics, Medical Image Analysis, Journal of Machine Learning Research, IEEE TPAMI, TMI, TIP, TKDE, and TNNLS. Based on csrankings.org, over the last ten years Dr. Huang has ranked among the top three researchers publishing the most papers at top computer science and engineering conferences. As PI, Dr. Huang currently leads NIH R01s, a U01, and multiple NSF-funded projects on machine learning, AI, imaging-omics, precision medicine, electronic medical record data analysis and privacy preservation, smart healthcare, and cyber-physical systems. Over the past 15 years, Dr. Huang has received more than $35,000,000 in research funding. He is a Fellow of AIMBE and served as the Program Chair of the ACM SIGKDD Conference 2020.


This talk is organized by Richa Mathur