Statistical Analysis on Transfer Learning

Posted by: 梁慧丽 | Date: 2025-05-08

Speaker

Caixing Wang

The Chinese University of Hong Kong

Time

Tuesday, May 13, 2025

14:00–15:00

Venue

Lecture Hall 102


Abstract


Transfer learning is a powerful machine learning technique that leverages knowledge from source tasks to improve the learning efficiency of a target task. It has achieved remarkable success in fields such as computer vision, autonomous driving, and large language models. However, the theoretical foundations of transfer learning remain underdeveloped. In this talk, I will introduce statistical frameworks for transfer learning, addressing key questions: how to measure the similarity between source and target domains, how this similarity affects the efficiency of transfer learning, and how much benefit transfer learning can bring to the target task. Specifically, I will focus on two common scenarios in transfer learning: covariate shift and posterior drift. For each, I will present practical algorithms and theoretical results that help us understand when and how transfer learning works. Simulated examples and real-data analysis further demonstrate the effectiveness of the proposed methods and theoretical results.
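To give a flavor of the covariate-shift scenario mentioned above, here is a minimal toy sketch (an illustration only, not the speaker's method): source and target domains share the same outcome function, but their input distributions differ, so a naive source-domain average is biased for the target; reweighting source samples by the density ratio corrects this. All distributions and functions here are made-up assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin  # shared outcome function across domains (toy assumption)

# Source inputs ~ N(0, 1); target inputs ~ N(2, 0.5): same f, shifted covariates.
x_src = rng.normal(0.0, 1.0, 5000)
y_src = f(x_src) + rng.normal(0.0, 0.1, 5000)

def gauss_pdf(x, mu, sd):
    """Normal density, used to form the known density ratio."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Importance weights p_target(x) / p_source(x); known by construction here,
# but estimated from data in practice.
w = gauss_pdf(x_src, 2.0, 0.5) / gauss_pdf(x_src, 0.0, 1.0)

naive = y_src.mean()                       # biased estimate of the target mean
weighted = np.sum(w * y_src) / np.sum(w)   # self-normalized weighted estimate

# True target mean E[sin(X)] for X ~ N(2, 0.5) equals sin(2) * exp(-0.5**2 / 2).
truth = np.sin(2.0) * np.exp(-0.125)
```

The weighted estimate lands much closer to the true target-domain mean than the naive one, illustrating how similarity between domains (here, overlap of the input distributions) governs how much the source data can help.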


Biography


Caixing Wang is currently a postdoctoral researcher in the Department of Statistics at The Chinese University of Hong Kong. He received his Ph.D. degree from the School of Statistics and Data Science, Shanghai University of Finance and Economics. His research interests include statistical machine learning and large-scale data analysis. He has published papers in leading journals and conferences in machine learning and statistics, including JMLR, JCGS, NeurIPS, and ICML.

