Improving Reasoning Consistency and Robustness in Language Models

Published by Liang Huili on 2025-05-08

Speaker

Siyuan Wang

University of Southern California (USC)

Time

Wednesday, March 12, 2025

10:00–11:00 AM

Venue

Room 308


Abstract


Recent years have witnessed remarkable advancements in language model reasoning capabilities, with frontier models such as OpenAI's o1 and DeepSeek's R1 demonstrating exceptional performance across diverse tasks. However, maintaining consistent and robust reasoning on complex problems remains a longstanding challenge and an active area of research. In this talk, I will present our research efforts to enhance LMs' reasoning consistency and robustness from three aspects: (1) neuro-symbolic integration for explicitly modeling logical structures and inference; (2) general rule generation and application to ensure more consistent reasoning; and (3) multi-agent cooperative training and dynamic tree-search inference to mitigate reasoning errors. Building on these advancements, I will also introduce our work on evaluating and applying LM reasoning in diverse scenarios, including dynamic benchmarks that test adaptability and performance in real-world settings.


Biography


Siyuan Wang is a postdoctoral research associate at the University of Southern California (USC), where her research focuses on LLMs and NLP, particularly LLM reasoning, safety, and alignment. She obtained her bachelor's degree and Ph.D. from Fudan University. As a first author or co-first author, she has published 12 papers in CCF A/B conferences and journals.

