One of the major mysteries in science is the towering success of machine learning. In this talk, I will present my work on advancing our theoretical understanding of learning and intelligence from a computational perspective. First, I will discuss the fundamental role of memory in learning, highlighting its importance in continual learning as well as in decision making and optimization. Second, I will present an exponential improvement in swap-regret minimization algorithms, which achieves near-optimal computation/communication/
Binghui Peng is a fifth-year Ph.D. student at Columbia University, advised by Christos Papadimitriou and Xi Chen. Previously, he studied Computer Science in the Yao Class at Tsinghua University. He studies the theory of computation and develops algorithms and complexity theory for machine learning, artificial intelligence, and game theory. His work has addressed long-standing questions in learning theory and game theory, and his papers have been published in theory conferences (STOC/FOCS/SODA, including a best student paper award at SODA) and ML conferences (NeurIPS/ICLR/ACL).