Towards Provably Efficient Quantum Algorithms for Nonlinear Dynamics and Large-scale Machine Learning Models
Jin-Peng Liu - UC Berkeley and MIT
Wednesday, March 29, 2023, 11:00 am-12:00 pm

Large machine learning models are revolutionary technologies of artificial intelligence, but their bottlenecks include enormous computational cost, power consumption, and time in both pre-training and fine-tuning. Based on quantum Carleman linearization and shadow tomography (QRAM is not necessary), we design the first quantum algorithm for training classical sparse neural networks in an end-to-end setting. Our quantum algorithm provides a provably efficient solution for generic (stochastic) gradient descent in time T^2 polylog(n), where n is the size of the model and T is the number of iterations in the training, as long as the model is both sufficiently dissipative and sparse. We benchmark instances of training ResNet models, from 7 to 103 million parameters, with sparse pruning applied to the CIFAR-100 dataset, and we find that a quantum enhancement is possible at the early stage of learning after model pruning, motivating a sparse parameter download and re-upload scheme. Our work shows that fault-tolerant quantum computing can contribute to the scalability and sustainability of most state-of-the-art, large-scale machine learning models.
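The sparsity requirement above is obtained in the benchmarks by pruning the model's weights. A minimal sketch of one standard approach, magnitude-based pruning (used here purely for illustration; the authors' exact pruning scheme may differ), zeroes out a fixed fraction of the smallest-magnitude weights:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    `sparsity` is the fraction of entries set to zero
    (e.g. sparsity=0.9 keeps only the largest 10% of weights).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to prune
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: prune 75% of a random 4x4 weight matrix
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_sparse = magnitude_prune(W, 0.75)
print(np.count_nonzero(W_sparse))  # 4 of 16 entries survive
```

A sparser model means fewer nonzero parameters to read out of the quantum state, which is what makes a parameter download-and-upload scheme like the one described in the abstract plausible at the early stage of training.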


[1] Towards provably efficient quantum algorithms for large-scale machine learning models, arXiv:2303.03428.

This talk is organized by Andrea F. Svejda