Speaker
Shaokui Wei
The Chinese University of Hong Kong, Shenzhen (CUHKSZ)
Time
Tuesday, June 10, 2025
14:00–15:00
Venue
Conference Room 602
Abstract
Machine learning models are increasingly integrated into critical systems across industries, from healthcare and finance to autonomous vehicles. However, as these models are more widely adopted, they also become targets for backdoor attacks, in which an attacker subtly alters a model's behavior by injecting trigger-bearing samples into its training data. Such attacks are especially dangerous because they can remain undetected until exploited, undermining the trust and reliability of AI systems.
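The trigger-injection attack described above can be sketched in a few lines. This is a minimal, illustrative example of a BadNets-style data-poisoning backdoor; the patch trigger, poison rate, and target label are assumptions for illustration, not details from the talk.

```python
import numpy as np

def add_trigger(image: np.ndarray, patch_value: float = 1.0, size: int = 3) -> np.ndarray:
    """Stamp a small square patch in the bottom-right corner as the trigger."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int = 0, poison_rate: float = 0.1,
                   seed: int = 0):
    """Inject the trigger into a fraction of samples and relabel them.

    A model trained on the poisoned set behaves normally on clean inputs
    but predicts `target_label` whenever the trigger patch is present.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label  # attacker's chosen class
    return images, labels, idx
```

Because only a small fraction of samples carry the trigger, clean-data accuracy is largely preserved, which is what makes such backdoors hard to detect.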
Biography
Shaokui Wei is a final-year Ph.D. candidate at the School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHKSZ), supervised by Professor Hongyuan Zha. From September 2021 to May 2024, he also collaborated with Professor Baoyuan Wu. He earned his Bachelor's degree in Electronic Information Engineering (ranked No. 1, first-class honors) from CUHKSZ's School of Science and Engineering (SSE). His research centers on Trustworthy Machine Learning, with a focus on security and fairness. Shaokui has published 10 papers in top-tier journals and conferences, including IJCV, ICCV, ICLR, and NeurIPS, and is the (co-)first author of four of them, including papers at AISTATS'23, NeurIPS'23, and NeurIPS'24. He is a key contributor to BackdoorBench, a leading benchmark in backdoor learning, and holds three patents related to backdoor defense and adversarial mitigation. He has served as a guest speaker at an ICCV tutorial, a Top Reviewer for ICML 2025, and a program committee member for leading conferences such as NeurIPS, ICML, CVPR, AAAI, and ICCV.
