利用大语言模型支持不同的学习任务——《信息资源建设》课的实证研究 (Supporting different learning tasks with large language models – a field experiment in the course "Information Resource Development")
Authors
Liang, Xingkun
Issue Date
2025
Educational Level
ISCED Level 6 Bachelor’s or equivalent
Curriculum Area
Geographical Setting
China
Abstract
Context: The undergraduate course "Information Resource Development" at Peking University addresses the development and management of various types of information resources. It faces two persistent teaching challenges: the abstract nature of foundational concepts and the difficulty of simulating practical tasks within limited class time. With the rise of large language models (LLMs) such as ChatGPT, there is growing interest in exploring their role in improving learning engagement and personalisation.
Aims: This study explored how LLMs could enhance learning in the course, focusing on two core issues: making abstract content more engaging and supporting students in developing practical skills. A further aim was to examine how the integration of AI tools might contribute to students' AI literacy within the context of professional education.
Methods: A randomised controlled experiment was conducted with 37 students across four tasks reflecting different learning goals: factual knowledge, theoretical understanding, causal reasoning, and critical thinking. Students were divided into an AI-supported group using the LLM ‘Wenxin Yiyan (Ernie Bot)’ and a control group using traditional resources. Learning outcomes were assessed using t-tests and regression analysis. Post-experiment interviews explored students’ strategies and experiences.
Findings: Students using the LLM performed significantly worse on tasks requiring factual accuracy and critical thinking. AI often generated inaccurate data and produced repetitive viewpoints. However, no significant differences were observed in tasks involving theory or causal reasoning, where LLMs offered quick overviews and illustrative examples. Interview data reflected overall cautious attitudes toward AI, with students noting both potential and limitations.
Implications: The findings suggest that current LLMs may be helpful for introductory exploration of theoretical content but less effective for tasks requiring precision or original thought. Teachers might learn that thoughtful integration of AI tools depends on task type and critical guidance.
Description
Keywords (free text)
information resource development, large language models, library and information science