In recent years, machine learning (ML) has achieved remarkable success by training large-scale models on vast datasets. However, building these models involves multiple interdependent tasks, such as data selection, hyperparameter tuning, and model architecture search. Optimizing these tasks jointly often leads to challenging nested objectives, where each task both influences and depends on the others. In this talk, I will start by formalizing nested ML problems as bilevel optimization tasks and presenting efficient algorithms with theoretical guarantees for solving them. I will then extend these ideas to the federated learning setting, examining how algorithmic designs must be adapted to meet the challenges of that environment.
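For concreteness, a standard bilevel formulation of such nested problems (the notation here is illustrative, not necessarily the speaker's) is

\[ \min_{x} \; f\big(x, y^{*}(x)\big) \quad \text{subject to} \quad y^{*}(x) \in \arg\min_{y} \; g(x, y), \]

where the outer objective f (e.g., validation loss over hyperparameters x) depends on a solution y*(x) of the inner problem g (e.g., training loss over model weights y).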
Junyi Li is currently a PhD candidate in the Department of Computer Science at the University of Maryland, College Park, advised by Prof. Heng Huang. His research focuses on developing machine learning models and algorithms with theoretical foundations, encompassing areas such as federated learning, foundation models, artificial general intelligence (AGI), large-scale distributed optimization, trustworthy AI, and efficient machine learning. Junyi's research has resulted in many papers in top-tier machine learning and AI venues, including NeurIPS, ICML, ICLR, KDD, AAAI, CVPR, and NAACL.