Artificial Intelligence
The Future of Consciousness and AI
Presenter: Eli Avery Tripp
Faculty Sponsor: Jean Kennedy
School: Quinsigamond Community College
Research Area: Artificial Intelligence
Location: Poster Session 1, 10:30 AM - 11:15 AM: Campus Center Auditorium [A14]

Understanding consciousness has consistently proven an uphill battle for academics across numerous fields, from neuroscience to philosophy. Researchers have argued that popular theories of consciousness are merely descriptive of the conditions under which consciousness can be present; approaches such as AST and QTC, by contrast, target the “why” behind its presence, though on two entirely different levels that this research will evaluate. Within the rapid advancement of technology, such as advanced medicine, artificial intelligence, and quantum computing, lies subtle foreshadowing of technology's possible adoption of consciousness. The purpose of this research is to survey both current and foreseeable advancements across disciplines that take psychological, philosophical, and neuroscientific approaches to consciousness, and to consider how emerging technologies and computational methods will affect them in the future. The characteristics of AI hold the potential to simulate consciousness through machine learning, the capacity to run complex simulations, and advanced prediction capabilities. Weak AI, AGI, and strong AI are the established conceptual models for contemporary and speculative variations of artificial intelligence. After first considering the qualities of the commonly understood weak AI, this research proceeds to examine the possibility of consciousness, as we understand it in biological organisms, as both a potential objective and byproduct of AGI and strong AI development. By applying theoretical approaches to consciousness to machine thinking, we can look toward what the future of AI may hold for human interaction, as well as for our own understanding of consciousness as a structure beyond the brain.

Artificial Intelligence in Anesthesia: A Comprehensive Analysis
Presenter: Sanea Naressa Eugene
Faculty Sponsor: Jean Kennedy
School: Quinsigamond Community College
Research Area: Artificial Intelligence
Location: Poster Session 1, 10:30 AM - 11:15 AM: Campus Center Auditorium [A18]

The advancement of technology in medicine has been an almost constant headline throughout the twenty-first century. The rise of Artificial Intelligence (AI) in recent years has created opportunities for its use in medical spaces, while other technologies such as Augmented Reality (AR) are used to simulate real-life patient-provider scenarios. The implementation of these technologies has raised a myriad of questions, often about their implications. This research employed secondary data analysis to determine the possible outcomes of applying AI technologies specifically in the field of anesthesia. The common opinion among medical professionals seems to be that AI will maximize efficient patient care, create more opportunities for personalized care, and possibly increase patient safety throughout the perioperative process. However, this research seeks to address the issues of designer bias, user bias, and patient privacy under HIPAA laws when AI technology is implemented. It also seeks to address how the roles of nurses and doctors in anesthesia change with the increased use of these technologies. This research aimed to find the intersection of these ideas and to understand the possible repercussions, as well as the possible benefits, not just to one sub-specialty in anesthesia but to the entire field.


The Experience Gap: AI Adoption in the United States and Germany
Presenter: Anoushka Kunder
Faculty Sponsor: Brenda K. Bushouse
School: UMass Amherst
Research Area: Artificial Intelligence
Location: Poster Session 1, 10:30 AM - 11:15 AM: Campus Center Auditorium [A61]

Artificial Intelligence is rapidly transforming the technology sector, generating widespread concern over a widening inequality gap for junior-level developers, who reap little to no measurable productivity gains from AI and are most vulnerable to job displacement. This thesis examines how the United States can promote a policy framework to support workers displaced by AI. The study employs a mixed-method research design. The quantitative analysis aims to establish the scope of AI-driven inequality between junior-level and senior-level developers by drawing on longitudinal software developer productivity and AI adoption data (Daniotti et al., 2026) as well as data on government investment in work training programs (OECD). It measures (1) AI adoption rates among software developers, (2) AI productivity benefits for junior- versus senior-level workers, and (3) government spending responses for worker training after AI implementation in the workforce. The qualitative portion of the study is a comparative policy analysis of the United States and Germany, aiming to assess differences in the quality of support institutional systems provide for displaced workers. The qualitative literature review examines the Workforce Innovation and Opportunity Act, the Registered Apprenticeship Program, union coverage, works councils, and dual vocational education and training systems in both countries, grounded in Varieties of Capitalism theory (Hall and Soskice, 2001). Overall, this study aims to assess whether AI-driven inequality for junior-level tech workers in the United States can be mitigated through institutional design.

Balancing Efficiency and Empathy: Artificial Intelligence Adoption in Human-Centered Human Resources
Presenter: Ainsley Louise Cicone
Faculty Sponsor: Muzzo Uysal
School: UMass Amherst
Research Area: Artificial Intelligence
Location: Poster Session 1, 10:30 AM - 11:15 AM: Campus Center Auditorium [A76]

This study explores how real HR managers use, or intentionally avoid using, AI in their day-to-day work. Through open, candid conversations with mid- to senior-level HR professionals, the research examines both the benefits and the challenges of bringing AI into a field that is fundamentally human-centered. While AI can streamline tasks such as recruitment, performance management, and decision-making, it also raises concerns related to trust, ethics, emotional impact, and whether organizations are truly ready for this technology. In addition to interviews, the study incorporates email-based communication to gather supplemental insights and clarify participants’ experiences, creating a mixed-method approach that captures perspectives across different communication styles and comfort levels. Using semi-structured interviews and thematic analysis guided by a prompt-driven AI adoption model, the study investigates how factors such as task load, AI literacy, and organizational culture shape adoption decisions. It also considers moderating influences, including task complexity and HR expertise, that affect how AI is perceived and used. Ultimately, the goal is to understand how hybrid AI-human systems can balance efficiency with empathy, preserve fairness and meaningful work, and influence broader outcomes such as retention and organizational culture. This qualitative case study design, supported by purposive sampling across diverse organizational settings, allows for in-depth, nuanced insights into how and why HR managers adopt or avoid generative AI in sensitive interpersonal tasks, while addressing limitations in generalizability.

RELATED ABSTRACTS


The Impact of Prompt Engineering on the Accuracy and Usefulness of AI Chatbot Responses for Undergraduate Students
Presenter: Elisa Yedid Granados
Faculty Sponsor: Muzzo Uysal
School: UMass Amherst
Research Area: Artificial Intelligence
Location: Poster Session 1, 10:30 AM - 11:15 AM: Campus Center Auditorium [A84]

As AI chatbots become part of the academic environment, undergraduate students increasingly rely on them for research support, a better understanding of concepts, and even problem-solving. However, their popularity does not ensure the accuracy and usefulness of AI responses, especially when prompts are vague and fail to guide the Large Language Model toward correct outcomes. While several articles support the importance of prompt engineering for guiding LLMs, these claims have not been backed by empirical evidence on AI use among undergraduate students. This study’s objective is to determine whether structured, engineered prompts lead to better academic outcomes, in terms of accuracy and usefulness, than basic, unstructured prompts. Additionally, the goal is to understand how prompt engineering affects undergraduate students’ cognitive load and perceived clarity. The research will use an experimental design supported by survey instruments in which participants evaluate and compare two types of prompts: unstructured prompts and engineered prompts that include clear instructions and specific keywords. Both prompt types will be used to complete the same academic tasks, and the resulting chatbot responses will be assessed for accuracy, clarity, and usefulness. The study expects to collect approximately 70 participant responses from undergraduate students. The findings may help universities develop more effective AI literacy training, encourage responsible use of chatbots among students, and guide organizations developing AI systems on improving prompt design.
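As an illustration of the two prompt conditions this design contrasts, a minimal sketch follows. The template wording, the example task, and the three-dimension rubric average are illustrative assumptions, not the study’s actual instruments.

```python
# Hypothetical examples of the two prompt conditions: a basic, unstructured
# prompt versus an engineered prompt with clear instructions and keywords.
unstructured = "Explain photosynthesis."

engineered = (
    "You are assisting an undergraduate biology student.\n"
    "Task: explain photosynthesis.\n"
    "Requirements: (1) define key terms, (2) give one concrete example, "
    "(3) keep the answer under 200 words.\n"
    "Keywords to cover: chlorophyll, light reactions, Calvin cycle."
)

def rubric_mean(accuracy, clarity, usefulness):
    """Average the three rated dimensions (each scored 1-5) for one response."""
    return (accuracy + clarity + usefulness) / 3

# A response rated accuracy=4, clarity=5, usefulness=3 averages to 4.0.
print(rubric_mean(4, 5, 3))
```

Both prompts would be submitted against the same academic task, with each chatbot response scored on the three rubric dimensions before comparison across conditions.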



Assessment Redesign and Ethical Governance for Generative Artificial Intelligence in Higher Education
Presenter: Vlad Renkas
Faculty Sponsor: Reena Randhir
School: Springfield Technical Community College
Research Area: Artificial Intelligence
Location: Poster Session 2, 11:30 AM - 12:15 PM: Campus Center Auditorium [A8]

The educational potential of ChatGPT and other generative artificial intelligence tools is significant. However, these systems can encourage students to produce assignments that appear authentic without requiring genuine understanding or learning. As a result, concerns have emerged regarding academic authorship, institutional trust, and the ethical balance between learning support and monitoring in the generative AI era. This creates a need for educational institutions to develop new approaches to maintain academic integrity and manage artificial intelligence use.

This study reviews existing research on AI-based academic misconduct to evaluate how institutions are addressing these challenges through assessment design, AI detection systems, organizational policies, and staff development programs. The findings indicate that redesigning assessment systems is more effective than relying on automated detection tools. The effectiveness of AI-generated text decreases when assessments require critical thinking, contextual application, and original problem-solving. Requiring students to demonstrate their work across multiple stages, such as drafts, reflections, and evidence of the learning process, further limits misuse.

AI detection tools are limited by reliability issues and false positives, which can undermine trust and raise concerns about fairness and due process. These systems require human oversight and should be implemented cautiously. The study identifies three key elements of effective AI governance: clear guidance on acceptable AI use and disclosure requirements, training programs for students and educators, and improved assessment practices that prioritize the evaluation of authentic student learning.

RELATED ABSTRACTS


Measuring the AI-Fraud "Arms Race": Evidence from U.S. Fraud Reporting and Regulator Actions, 2019-2024
Presenter: Kyle Joseph Grosso
Faculty Sponsor: Zaur Rzakhanov
School: UMass Boston
Research Area: Artificial Intelligence
Location: Poster Session 2, 11:30 AM - 12:15 PM: Campus Center Auditorium [A47]

The rapid development of generative artificial intelligence (AI) in finance and accounting has changed how financial fraud is committed and detected. This descriptive, non-causal study examines annual data from the Federal Trade Commission’s Consumer Sentinel Network Data Books on imposter scam reports and reported losses, and the FBI’s Internet Crime Complaint Center (IC3) Internet Crime Reports on business email compromise (BEC) complaints and adjusted losses, over the period 2019-2024. Each series is summarized with trend figures and year-over-year growth calculations. Reported imposter-scam losses rise from $0.67 billion in 2019 to $2.95 billion in 2024, while imposter-scam reports increase from 647,472 to 845,806, with some fluctuation. BEC adjusted losses increase from $1.78 billion in 2019 to a peak of $2.95 billion in 2023, then decline to $2.77 billion in 2024, while BEC complaints average near 20,000 each year. To compare fraud trends alongside regulatory responses, a dated regulatory timeline of notable announcements, alerts, and enforcement actions regarding AI-enhanced financial crime during this period is compiled, and these events are placed alongside each series. This comparison establishes whether changes in financial fraud align with changes in regulatory activity. As an extension, a monthly series will be created to examine trends in Consumer Financial Protection Bureau complaint narratives that include AI-related terms during this period, with time trend regression models using ChatGPT’s first release (November 2022) as an intervention point. This thesis is designed to help define an evolving “arms race” between regulators and fraudsters.
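The growth summaries and the planned intervention regression can be sketched as follows. The compound-growth endpoints come from the loss figures quoted in this abstract; the monthly complaint counts in the regression are hypothetical placeholders, not CFPB data.

```python
import numpy as np

# Compound annual growth rate over reported endpoints (FTC imposter-scam
# losses, USD billions, as quoted in the abstract).
def cagr(start, end, years):
    """Compound annual growth rate between two reported values."""
    return (end / start) ** (1 / years) - 1

growth = cagr(0.67, 2.95, 5)  # 2019 -> 2024
print(f"Imposter-scam losses grew roughly {growth:.1%} per year")

# Interrupted time-series sketch for the planned CFPB extension: a monthly
# count regressed on a linear time trend plus a post-ChatGPT (Nov 2022)
# level-shift dummy. The counts are hypothetical, generated for illustration.
months = np.arange(24, dtype=float)          # Jan 2022 .. Dec 2023
post = (months >= 10).astype(float)          # Nov 2022 onward
counts = 50 + 2 * months + 30 * post         # placeholder complaint series

X = np.column_stack([np.ones_like(months), months, post])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
print("baseline, monthly trend, level shift at intervention:", beta)
```

With real CFPB narrative counts substituted for the placeholder series, the fitted level-shift coefficient would estimate the change in AI-related complaints after the intervention point.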


Adversarial and Compositional Benchmarking of Visual and Linguistic Grounding in Vision-Language-Action Models
Presenter: Jeffrey Jiawen Deng
Faculty Sponsor: Abhidip Bhattacharyya
School: UMass Amherst
Research Area: Artificial Intelligence
Location: Poster Session 4, 2:15 PM - 3:00 PM: Room 163 [C30]

Vision-Language-Action (VLA) models integrate natural language understanding, visual perception, and robotic control to solve complex, multi-modal, embodied Artificial Intelligence tasks, and they have achieved remarkable progress thanks to the availability of large-scale data, advancements in transformer-based multi-modal representation learning, and imitation-learning policy training pipelines. Recent work, however, indicates that these models can be brittle, relying on superficial pixel correlations rather than robust semantic grounding. We investigate the Compositional Generalization Gap in VLA models by systematically testing their visual and linguistic understanding within a robotic simulation environment. The methodology uses the LIBERO simulation suite to evaluate open-source models such as OpenVLA and SmolVLA, quantifying visual brittleness through high-throughput parallelized rendering of visual perturbations (e.g., lighting intensity, camera viewpoint shifts, and texture randomization) and assessing language neglect through adversarial linguistic instructions (e.g., semantic rephrasing). We apply an optimization algorithm to automatically determine the worst-case adversarial scenarios, in which visual and linguistic noise are combined, in order to define a detailed taxonomy of failure modes and a quantitative measurement of performance degradation under compositional noise. We highlight critical safety gaps in current embodied AI architectures, moving the field toward more robust, general-purpose robotic agents.
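A toy sketch of the worst-case search over combined perturbations follows. The perturbation lists and the success-rate table are hypothetical stand-ins for the simulator rollouts, and exhaustive search stands in for the abstract’s optimization algorithm.

```python
import itertools

# Hypothetical per-condition success rates; in the real pipeline each score
# would come from rolling out the policy in LIBERO under that perturbation.
scores = {
    ("bright", "original"): 0.92,
    ("bright", "rephrased"): 0.71,
    ("dim", "original"): 0.64,
    ("dim", "rephrased"): 0.33,
}

visual = ["bright", "dim"]            # visual perturbation settings
linguistic = ["original", "rephrased"]  # linguistic instruction variants

# Search the cross product of visual and linguistic noise for the
# combination that most degrades task success.
worst = min(itertools.product(visual, linguistic), key=scores.get)
print("worst-case combination:", worst, "success rate:", scores[worst])
```

In practice the search space is too large to enumerate, which is why a guided optimization over perturbation parameters replaces this brute-force loop.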


Learning Intuitive Physics from Video to Improve Reinforcement Learning
Presenter: Shhreya Anand
Faculty Sponsor: Bruno C. da Silva
School: UMass Amherst
Research Area: Artificial Intelligence
Location: Poster Session 4, 2:15 PM - 3:00 PM: Room 163 [C31]

Reinforcement learning (RL) agents trained directly from pixels remain sample-inefficient and brittle, particularly in environments governed by physical dynamics. Because perception and control are learned simultaneously from reward signals, agents must rediscover basic physical regularities, such as object permanence and motion continuity, through costly trial and error. In contrast, humans acquire intuitive physics through observation before engaging in goal-directed behavior.

This project investigates whether self-supervised predictive world models can provide RL agents with such priors. Specifically, I use Video Joint Embedding Predictive Architecture (V-JEPA) models, pretrained on large-scale video to learn latent physical dynamics, as representation learners for control. By learning physical dynamics from video through self-supervised prediction, we aim to separate perception from control and provide RL agents with intuitive physics priors before reward-based training.

I systematically compare three conditions using PPO-based agents: (1) a baseline CNN trained end-to-end from scratch, (2) a frozen pretrained V-JEPA encoder with a learned policy head, and (3) a fine-tuned V-JEPA encoder jointly optimized with RL. Experiments progress from CartPole to partially observable variants with occlusion, and finally to a robotic cube-pushing task involving real physical interactions.

By testing whether video-derived physics knowledge transfers to embodied control, this work provides a foundational evaluation of predictive world modeling as a scalable precursor to reinforcement learning.


Smart Travel App
Presenter: Trevor Kkaaya
Faculty Sponsor: Bo Jin Hatfield
School: Salem State University
Research Area: Artificial Intelligence
Location: Poster Session 4, 2:15 PM - 3:00 PM: Room 163 [C32]

Modern travel planning requires users to manually search across multiple websites, mapping tools, and review platforms, resulting in incomplete workflows and inefficient itinerary design. The purpose of this project is to develop and evaluate a unified, AI-powered platform that streamlines travel discovery, itinerary generation, and trip documentation within a single interface. By combining these processes, the system aims to reduce planning frustration while maintaining personalization and user control.

The platform integrates a large language model API (Groq) to generate structured, day-by-day itineraries based on user-defined inputs such as destination, duration, pace, budget, and personal interests. Natural language place search is supported through external geospatial APIs, enabling users to query locations conversationally (e.g., “local coffee shops in Paris”). Additional features include drag-and-drop itinerary editing, collaborative trip sharing, favoriting and pinning of locations, and a digital travel diary for storing notes and photos.

The system was implemented using a Next.js frontend, a Fastify backend API, Google OAuth authentication, and Supabase for secure data persistence. By combining AI-assisted content generation with structured user interaction design, the project demonstrates how large language models can be integrated into scalable web architectures to support intelligent, user-centered planning systems. This work offers a reusable model for AI-augmented workflow design in consumer applications.
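A minimal sketch of how the user-defined inputs named above might be assembled into a structured itinerary prompt before being sent to the LLM API. The function name, field names, and template wording are assumptions for illustration, not the project’s actual code.

```python
# Hypothetical prompt builder for the day-by-day itinerary generator.
def build_itinerary_prompt(destination, days, pace, budget, interests):
    """Assemble the structured user inputs into a single LLM prompt string."""
    return (
        f"Create a {days}-day itinerary for {destination}.\n"
        f"Pace: {pace}. Budget: {budget}.\n"
        f"Interests: {', '.join(interests)}.\n"
        "Return one section per day with morning, afternoon, "
        "and evening suggestions."
    )

prompt = build_itinerary_prompt(
    "Paris", 3, "relaxed", "moderate", ["local coffee shops", "museums"]
)
print(prompt)
```

Centralizing the template this way keeps the LLM request structured and repeatable regardless of which inputs the user supplies through the interface.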

RELATED ABSTRACTS