AI-Powered Aging-Friendly Voice Interaction Research: Challenges, Practices, and Futures

Published by: 梁慧丽 · Date: 2025-05-08

Speaker

Zhigu Qian

Fudan University

Time

Tuesday, April 15, 2025

14:00-15:00

Venue

Conference Room 102


Abstract


Amid the dual challenges of global population aging and the rapid advancement of AI technologies, designing intelligent systems that can understand, adapt to, and effectively support older adults has become a pressing issue in Human-Computer Interaction. This research presents a systematic investigation into AI-enabled aging-friendly voice interaction systems for mobile applications, targeting the specific barriers older users face when using voice assistants.

The study identifies three core challenges: (1) insufficient understanding of natural spoken commands due to redundancy, ambiguity, and non-standard syntax; (2) missing task parameters in utterances, with limited capabilities for multi-turn clarification in existing systems; and (3) lack of intuitive feedback during automated operations, which leads to user anxiety and diminished trust.

To address these issues, the research proposes an integrated aging-friendly interaction framework covering three core components—intent understanding, task information coordination, and feedback mechanisms. Specifically, the study introduces a prompt-engineering method optimized for large language models to better interpret older adults’ spoken intents; a multi-turn dialogue mechanism designed for high-frequency elderly scenarios to iteratively complete task parameters; and a multimodal feedback strategy that integrates visual cues, interface highlights, and textual enhancements to improve user confidence and control.

A prototype system built upon these components was evaluated through real-world user studies, demonstrating superior performance over mainstream commercial voice assistants in terms of task success rate, information completeness, and user satisfaction. This work not only validates the practical value of AI in overcoming digital accessibility barriers for older adults, but also contributes theoretical insights and methodological foundations for advancing inclusive, human-centered AI design in aging societies.


Biography


Zhigu Qian is a Ph.D. candidate in Human-Computer Interaction at the School of Computer Science, Fudan University, advised by Prof. Yangfan Zhou. With a background in Sociology (B.A., East China Normal University), she brings an interdisciplinary approach that bridges technical innovation with social insights. Her research focuses on AI-powered assistive technologies for older adults, including multimodal understanding and context-aware interaction using large language models, as well as emotion-aware conversational systems that support older users’ social and emotional needs. As the first author, Zhigu has published four papers in top-tier HCI venues (CCF A), including CHI, CSCW, and IMWUT/Ubicomp. One of her CSCW papers received the ACM CSCW 2024 DEI Recognition Award (Top 4.03%). She also led the development of an aging-friendly voice interaction prototype system, which won the Silver Award at the 2024 Shanghai International Innovation and Entrepreneurship Competition. Her research agenda centers on two key pillars: (1) conducting systematic user studies to understand digital participation barriers faced by information-disadvantaged populations, and (2) transforming cutting-edge AI technologies into inclusive interaction design solutions that promote equitable computing and social justice. Her work contributes both to the theoretical foundations of human-centered AI and to practical innovations that address real-world accessibility challenges, with the broader goal of enabling more inclusive and sustainable digital futures.

