Brian Hill
Brian Hill: HEC Paris
Abstract: There is increasing speculation about the future role of ChatGPT and other artificial intelligence (AI) chatbots in aiding humans with a variety of tasks. But do people perform better when aided by these tools than when they complete tasks on their own? Can they properly evaluate and, where necessary, correct the responses provided by ChatGPT so as to enhance their performance? To investigate these questions, this study gives university-level students class assignments that involve both answering questions and correcting answers provided by ChatGPT. It finds a significant reduction in student performance when correcting a provided response compared to when producing an answer from scratch. One possible explanation for this discrepancy is confirmation bias. Beyond emphasising the need for continued research into human interaction with AI chatbots, this study exemplifies one potential way of bringing them into the classroom: to raise awareness of the pitfalls of their improper use.
Keywords: ChatGPT; Human-AI chatbot interaction; Confirmation bias; Class assignments; AI in education; Future of work
9 pages, June 1, 2023
Full text files
papers.cfm?abstract_id=4465833 (HTML file, full text)
Questions (including download problems) about the papers in this series should be directed to Antoine Haldemann.
Report other problems with accessing this service to Sune Karlsson.
RePEc:ebg:heccah:1473