Curriculum Learning: Scores, Plans, Dynamics, and NLP
Wednesday, October 12, 2022, 11:00 am-12:00 pm

Curriculum learning is an effective and natural strategy in human learning. It plays an important role in challenging tasks such as language learning. However, current machine learning (ML) paradigms are mostly built upon repeatedly practicing the same training data/tasks in a random order, which is non-adaptive to the learning process. Moreover, they do not plan multiple learning stages in advance, as humans do.

In the first part of this talk, I will introduce several novel formulations of curriculum learning, which lead to theoretically motivated and practically effective algorithms for a broad class of ML problems, e.g., supervised learning, semi-supervised learning, noisy-label learning, and diverse ensemble learning. We observe significant advantages of curriculum learning on weakly-labeled data. The second part of this talk will present a line of our work that develops scores from analyzing the training dynamics. Compared to the commonly used instantaneous feedback (e.g., the loss per step), these scores can save substantial computation and capture richer information about the loss landscape's sharpness and the training inconsistency, hence improving efficiency and test-set performance in experiments.
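To make the idea of dynamics-based scores concrete, here is a minimal sketch (not the speaker's actual method) of one plausible instantiation: instead of ranking examples by a single step's loss, each example is scored from its whole loss trajectory over a preliminary training run, combining average difficulty with training inconsistency, and the curriculum then presents examples easy-to-hard. The function names and the specific score (mean loss plus loss variability) are illustrative assumptions.

```python
import numpy as np

def dynamics_scores(loss_history):
    """Score each example from its per-epoch loss trajectory.

    loss_history: array of shape (n_epochs, n_examples) holding each
    example's loss recorded once per epoch during a preliminary run.
    The score combines average difficulty (mean loss over epochs) with
    training inconsistency (loss variability across epochs) -- an
    illustrative choice, richer than a single step's loss.
    """
    mean_loss = loss_history.mean(axis=0)   # average difficulty
    variability = loss_history.std(axis=0)  # training inconsistency
    return mean_loss + variability

def curriculum_order(loss_history):
    """Return example indices sorted easy-to-hard by the score."""
    return np.argsort(dynamics_scores(loss_history))

# Toy illustration: 3 epochs, 4 examples.
history = np.array([
    [0.9, 0.2, 1.5, 0.4],
    [0.7, 0.1, 1.6, 0.3],
    [0.6, 0.1, 1.2, 0.2],
])
order = curriculum_order(history)  # easiest (consistently low-loss) first
```

Because the trajectory is collected as a by-product of ordinary training, such scores avoid extra per-step evaluation passes, which is one way the computational savings mentioned above could arise.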

In the third part, I will discuss several potential forms of curriculum learning for pre-training language models or for adapting models to downstream NLP tasks. Based on the structure of NLP data, we can develop curricula at different levels. We will also discuss the design of scores and learning schedules for NLP models/tasks.


Tianyi Zhou (https://tianyizhou.github.io) is a tenure-track assistant professor of computer science at the University of Maryland, College Park. He received his Ph.D. from the School of Computer Science & Engineering at the University of Washington, Seattle. His research interests are in machine learning, optimization, and natural language processing (NLP). His recent work studies curriculum learning that combines high-level human learning strategies with model training dynamics to create a hybrid intelligence. The applications include semi/self-supervised learning, robust learning, reinforcement learning, meta-learning, ensemble learning, etc. He has published more than 70 papers and is a recipient of the Best Student Paper Award at ICDM 2013 and the 2020 IEEE Computer Society TCSC Most Influential Paper Award.

This talk is organized by Rachel Rudinger