Toward LLM-Supported Automated Assessment of Critical Thinking Subskills

Presenter: Kushaan Naskar

Group Members: Payu Wittawatolarn, Brayden Liu

Faculty Sponsor: Andrew Lan

School: UMass Amherst

Research Area: Computer Science

Session: Poster Session 3, 1:15 PM - 2:00 PM, 163, C28

ABSTRACT

Critical thinking is a core competency in today’s education landscape, yet instructors still lack scalable ways to assess it and give students timely feedback. Our ongoing project explores whether we can automatically measure specific “subskills” that make up critical thinking in authentic student work. We focus on multiple data sources, including student-written argumentative essays and debates, in which students synthesize sources, evaluate evidence, and engage with counterarguments.

We plan to develop a detailed coding rubric based on an established skills progression and use it to annotate student work across multiple critical thinking subskills. Building on this annotated dataset, we will investigate several approaches to automated scoring with large language models, including zero-shot prompting, few-shot prompting, and supervised fine-tuning, using both proprietary and open-source models.
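As a minimal sketch of what zero-shot rubric scoring might look like in practice, the example below assembles a scoring prompt from a rubric and an essay, then parses a subskill level out of a model's reply. The rubric text, subskill name, level descriptions, and the "Level: <n>" reply format are all illustrative assumptions, not the project's actual rubric or design.

```python
import re

# Hypothetical three-level rubric for one subskill (illustrative only).
RUBRIC = """Subskill: Evaluating evidence
Level 1: Restates sources without evaluating them.
Level 2: Notes source quality but does not weigh competing evidence.
Level 3: Weighs competing evidence to support a claim."""

def build_zero_shot_prompt(rubric: str, essay: str) -> str:
    """Assemble a zero-shot scoring prompt for an LLM grader."""
    return (
        "You are scoring a student essay on one critical thinking subskill.\n"
        f"Rubric:\n{rubric}\n\n"
        f"Essay:\n{essay}\n\n"
        "Reply with 'Level: <n>' followed by a one-sentence justification."
    )

def parse_score(reply: str):
    """Extract the integer level from a model reply, or None if absent."""
    match = re.search(r"Level:\s*(\d+)", reply)
    return int(match.group(1)) if match else None

prompt = build_zero_shot_prompt(RUBRIC, "Sample essay text ...")
print(parse_score("Level: 2. The essay notes source quality only."))  # → 2
```

A few-shot variant would prepend annotated example essays with their gold levels to the same prompt; the parsing step stays identical, which makes human-model score comparisons straightforward to automate.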

In this poster, we will present our study design, rubric, and early qualitative insights from the annotation process, such as common challenges in distinguishing subskill levels and handling borderline cases. We will also outline our planned evaluation framework for comparing human and model-based scores and for examining where models succeed or struggle at the subskill level. Our long-term goal is to lay the groundwork for scalable, fine-grained assessment of critical thinking in student writing that can eventually support more targeted, pedagogically meaningful feedback in real classroom settings.
