The Impact of Prompt Engineering on the Accuracy and Usefulness of AI Chatbot Responses for Undergraduate Students
Presenter: Elisa Yedid Granados
Faculty Sponsor: Muzzo Uysal
School: UMass Amherst
Research Area: Artificial Intelligence
Session: Poster Session 1, 10:30 AM - 11:15 AM, Auditorium, A84
ABSTRACT
As AI chatbots become part of the academic environment, undergraduate students increasingly rely on them for research support, concept comprehension, and problem-solving. However, their popularity does not ensure the accuracy or usefulness of AI responses, particularly when prompts are vague and fail to guide the Large Language Model (LLM) toward correct outcomes. Although several articles argue for the importance of prompt engineering in guiding LLMs, few offer empirical evidence on AI use among undergraduate students. This study's objective is to determine whether structured, engineered prompts lead to better academic outcomes in terms of accuracy and usefulness than basic, unstructured prompts. A secondary goal is to understand how prompt engineering affects undergraduate students' cognitive load and perceived clarity. The research will use an experimental design supported by survey instruments, in which participants evaluate and compare two types of prompts: unstructured prompts and engineered prompts that include clear instructions and specific keywords. Both prompt types will be used to complete the same academic tasks, and the resulting chatbot responses will be assessed for accuracy, clarity, and usefulness. The study expects to collect approximately 70 responses from undergraduate participants. The findings may help universities develop more effective AI literacy training, encourage responsible chatbot use among students, and guide organizations developing AI systems in improving prompt design.