Toward a Theoretical Understanding of Self-Supervised Learning in the Foundation Model Era

Posted by: Liang Huili | Date: 2026-04-19

Speaker

Yisen Wang

Peking University

Time

Monday, April 20, 2026

10:00–11:00 a.m.

Venue

Conference Room 104, School Building


Abstract


Despite the remarkable empirical success of Self-Supervised Learning (SSL), its theoretical foundations remain relatively underexplored. This gap raises fundamental questions about when and why SSL works, and what governs its generalization and robustness. In this talk, I will introduce representative SSL methodologies widely used in foundation models, and then present a series of our recent works on the theoretical understanding of SSL, with a particular focus on contrastive learning, masked modeling, and autoregressive learning.


Biography



Yisen Wang is an Assistant Professor at Peking University (PKU). His research focuses on the theoretical foundations of representation learning and safety. He has published around 50 papers in JMLR, TPAMI, ICML, NeurIPS, and ICLR, and has earned 14k+ Google Scholar citations. He has also received five Best Paper or Runner-up honors. Additionally, he serves as a Senior Area Chair for NeurIPS and an Associate Editor for TPAMI.


