Computational Algorithms and Theories for Learning with Big Data
Tuesday, March 13, 2018, 11:00 am-12:00 pm
 The scale and dimensionality of data associated with machine learning applications have grown at an unprecedented rate over the last decade. Although this growth has brought great opportunities in various domains, many challenges remain in solving big data learning problems. In this presentation, I will focus on the computational perspective of learning with big data, and will discuss challenges and opportunities in solving optimization problems arising in machine learning. In particular, I will present a new stochastic optimization algorithm for solving large-scale problems, and randomized machine learning algorithms for tackling high-dimensional data. I will also discuss opportunities for learning with big data through the lens of computational learning theory.
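As background for the optimization theme of the talk: stochastic optimization methods cut per-iteration cost by computing the gradient on a single randomly sampled example rather than the full dataset. The sketch below is plain stochastic gradient descent on a toy least-squares problem, not the speaker's new algorithm; the function name and problem setup are illustrative choices, not from the talk.

```python
import random

def sgd_least_squares(data, lr=0.01, epochs=200, seed=0):
    """Minimize the average of (w*x - y)^2 over a scalar w via SGD.

    Each step samples one (x, y) pair and follows the gradient of
    that single example's loss -- the per-step cost is independent
    of the dataset size, which is the appeal for big data.
    """
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        x, y = data[rng.randrange(len(data))]
        grad = 2.0 * (w * x - y) * x  # gradient of the one-sample loss
        w -= lr * grad
    return w

# Data generated from y = 3x; SGD should recover w close to 3.
data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = sgd_least_squares(data)
```

On this noiseless toy problem each update contracts the error `w - 3` by a factor `1 - 2*lr*x**2`, so the iterate converges to the true slope; with noisy data one would typically decay the learning rate instead of keeping it fixed.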


Dr. Tianbao Yang is currently an assistant professor at the University of Iowa. He received his Ph.D. degree in Computer Science from Michigan State University in 2012. Before joining UIowa, he was a researcher at NEC Laboratories America in Cupertino (2013-2014) and a Machine Learning Researcher at GE Global Research (2012-2013), mainly focusing on developing deep learning algorithms and distributed optimization systems for machine learning applications. Dr. Yang has broad interests in machine learning and has focused on several research topics, including online learning, distributed optimization, stochastic optimization, deep learning, and learning theory. His recent research interests revolve around accelerating convex optimization algorithms using error bound conditions and designing first-order algorithms for escaping saddle points in non-convex optimization. He won the Best Student Paper Award at the 25th Conference on Learning Theory (COLT) in 2012. He is an associate editor of the Neurocomputing journal.
This talk is organized by Brandi Adams.