
Speaker: Dr. Shuhao Fu (付书豪)
Host: Prof. Yan Wang (王妍)
Time: December 23, 2025, 10:00-11:00
Venue: Conference Room 133, Information Building
Speaker bio: Shuhao is interested in understanding how AI can achieve human-like relational reasoning and compositional understanding. His research draws on cognitive science and machine learning, with a focus on bridging the gap between human and machine reasoning. During his Ph.D., he explored analogical reasoning in deep learning and structural models, demonstrating that domain-general structural models align more closely with human cognition. More recently, he has been investigating explicit relational representations in vision and multimodal models, examining how AI can move beyond object-centric representations toward more structured scene understanding. He hopes to explore broader questions about how relational knowledge is represented and learned in both humans and AI, with potential applications in compositional generalization and multimodal reasoning. Shuhao completed his Ph.D. in Psychology at the University of California, Los Angeles, where he was advised by Professors Hongjing Lu and Ying Nian Wu.
Abstract:
In recent years, AI models have achieved performance approaching, and at times exceeding, human levels on a variety of reasoning tasks, sparking wide debate over whether machines possess human-like reasoning abilities. However, task accuracy alone rarely reveals the cognitive mechanisms underlying reasoning. This talk takes a cognitive-science perspective to systematically compare the reasoning abilities of humans and AI.
First, I will present a series of behavioral and computational experiments examining how current mainstream AI models perform on compositional scene understanding and abstract reasoning tasks. The results show that although these models achieve high accuracy on common or in-distribution problems, their behavior deviates systematically from humans on low-frequency compositions, counterintuitive relations, and highly abstract reasoning tasks. These deviations suggest that current models often lack explicit, composable object-relation representations, relying instead on statistical correlations and task-specific shortcut strategies.
Building on this, the talk will further examine the role of structured representations in AI reasoning. By introducing structured representations built from objects, relations, and their bindings, combined with domain-general analogical mapping and comparison mechanisms, the resulting models exhibit stronger generalization under zero-shot and few-shot conditions, and their reasoning behavior more closely matches human performance. These results indicate that reasoning ability does not depend solely on larger datasets or greater model capacity, but arises in large part from how a system represents and manipulates structured information.
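As a purely illustrative sketch (not the speaker's actual model), the core idea of explicit object-relation representations and analogical mapping can be caricatured as follows: a scene is encoded as a set of relation triples, and an analogy is the object correspondence that preserves the most relational structure. All names and the brute-force search here are hypothetical simplifications.

```python
# Illustrative sketch only: explicit object-relation representations
# plus analogical mapping via maximal relational overlap.
from itertools import permutations

# A "scene" is a set of (relation, subject, object) triples.
source = {("above", "A", "B"), ("larger", "A", "B")}
target = {("above", "X", "Y"), ("larger", "X", "Y")}

def objects(scene):
    """Collect the distinct objects appearing in a scene."""
    return sorted({o for (_, s, t) in scene for o in (s, t)})

def map_score(scene_s, scene_t, mapping):
    """Count source relations preserved in the target under the mapping."""
    translated = {(r, mapping[s], mapping[t]) for (r, s, t) in scene_s}
    return len(translated & scene_t)

def best_analogy(scene_s, scene_t):
    """Brute-force the object correspondence with maximal relational overlap."""
    src, tgt = objects(scene_s), objects(scene_t)
    return max(
        (dict(zip(src, perm)) for perm in permutations(tgt, len(src))),
        key=lambda m: map_score(scene_s, scene_t, m),
    )

print(best_analogy(source, target))  # {'A': 'X', 'B': 'Y'}
```

The point of the toy example is that the mapping is driven by shared relational structure (who is above whom, who is larger), not by any surface features of the objects themselves.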
Overall, the key to understanding and reconstructing human-like reasoning lies in the design and learning of representational structure. This perspective not only enables more accurate evaluation of the reasoning abilities of current AI systems, but also offers important guidance for building more general and reliable intelligent models.