The advancement of technology in medicine has been a near-constant headline throughout the twenty-first century. The rise of Artificial Intelligence (AI) in recent years has created opportunities for its use in medical settings, while other technologies, such as Augmented Reality (AR), are used to simulate real-life patient-provider scenarios. The implementation of these technologies has raised a myriad of questions, often about their implications. This research employed secondary data analysis to determine the possible outcomes of applying AI technologies specifically in the field of anesthesia. The prevailing opinion among medical professionals seems to be that AI will maximize efficient patient care, create more opportunities for personalized care, and possibly increase patient safety throughout the perioperative process. However, this research seeks to address the issues of designer bias and user bias, as well as concerns surrounding patient privacy and HIPAA compliance, that arise when AI technology is implemented. It also seeks to address how the roles of nurses and doctors in anesthesia change with the increased use of these technologies. This research aims to find the intersection between these ideas and to understand the possible repercussions, as well as the possible benefits, not just for one sub-specialty in anesthesia but for the entire field.
This study explores how real HR managers use, or intentionally avoid using, AI in their day-to-day work. Through open, candid conversations with mid- to senior-level HR professionals, the research examines both the benefits and the challenges of bringing AI into a field that is fundamentally human-centered. While AI can streamline tasks such as recruitment, performance management, and decision-making, it also raises concerns about trust, ethics, emotional impact, and whether organizations are truly ready for the technology. In addition to interviews, the study incorporates email-based communication to gather supplemental insights and clarify participants' experiences, creating a mixed-method approach that captures perspectives across different communication styles and comfort levels. Using semi-structured interviews and thematic analysis guided by a prompt-driven AI adoption model, the study investigates how factors such as task load, AI literacy, and organizational culture shape adoption decisions. It also considers moderating influences, including task complexity and HR expertise, that affect how AI is perceived and used. Ultimately, the goal is to understand how hybrid AI-human systems can balance efficiency with empathy, preserve fairness and meaningful work, and influence broader outcomes such as retention and organizational culture. The qualitative case study design, supported by purposive sampling across diverse organizational settings, allows for in-depth, nuanced insight into how and why HR managers adopt or avoid generative AI in sensitive interpersonal tasks, while acknowledging limits to generalizability.
RELATED ABSTRACTS
As AI chatbots become part of the academic environment, undergraduate students increasingly rely on them for research support, concept comprehension, and problem-solving. However, their popularity does not ensure the accuracy or usefulness of AI responses, particularly when prompts are vague and do not guide the Large Language Model (LLM) toward correct outcomes. While several articles argue for the importance of prompt engineering in guiding LLMs, those claims have not been supported by empirical evidence on AI use among undergraduate students. This study's objective is to determine whether structured, engineered prompts lead to better academic outcomes, in terms of accuracy and usefulness, than basic, unstructured prompts. A further goal is to understand how prompt engineering affects undergraduate students' cognitive load and perceived clarity. The research will use an experimental design supported by survey instruments in which participants evaluate and compare two types of prompts: unstructured prompts and engineered prompts that include clear instructions and specific keywords. Both prompt types will be used to complete the same academic tasks, and the resulting chatbot responses will be assessed for accuracy, clarity, and usefulness. The study expects to collect approximately 70 participant responses from undergraduate students. The findings may help universities develop more effective AI literacy training, encourage responsible chatbot use among students, and guide organizations developing AI systems on improving prompt design.
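To make the two experimental conditions concrete, a minimal sketch of the contrast between an unstructured and an engineered prompt is shown below. The task text, template wording, and the `ask_chatbot` stub are hypothetical illustrations, not the study's actual materials.

```python
# Illustrative contrast between the two prompt conditions in the study design.

TASK = "Explain the difference between mitosis and meiosis."

# Condition A: basic, unstructured prompt (the task statement alone).
unstructured_prompt = TASK

# Condition B: engineered prompt with clear instructions and specific keywords.
engineered_prompt = (
    "You are a biology tutor for undergraduates.\n"
    f"Task: {TASK}\n"
    "Instructions: answer in three short paragraphs, define key terms, "
    "and end with a one-sentence summary.\n"
    "Keywords to address: cell division, genetic variation, gametes."
)

def ask_chatbot(prompt: str) -> str:
    """Placeholder for whatever chatbot interface the experiment uses."""
    raise NotImplementedError

# Each participant task yields one response per condition; responses are
# then rated for accuracy, clarity, and usefulness.
```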
The educational potential of ChatGPT and other generative artificial intelligence tools is significant. However, these systems can encourage students to produce assignments that appear authentic without requiring genuine understanding or learning. As a result, concerns have emerged regarding academic authorship, institutional trust, and the ethical balance between learning support and monitoring in the generative AI era. This creates a need for educational institutions to develop new approaches to maintaining academic integrity and managing artificial intelligence use.
This study reviews existing research on AI-based academic misconduct to evaluate how institutions are addressing these challenges through assessment design, AI detection systems, organizational policies, and staff development programs. The findings indicate that redesigning assessment systems is more effective than relying on automated detection tools. The effectiveness of AI-generated text decreases when assessments require critical thinking, contextual application, and original problem-solving. Requiring students to demonstrate their work across multiple stages, such as drafts, reflections, and evidence of the learning process, further limits misuse.
AI detection tools are limited by reliability issues and false positives, which can undermine trust and raise concerns about fairness and due process. These systems require human oversight and should be implemented cautiously. The study identifies three key elements of effective AI governance: clear guidance on acceptable AI use and disclosure requirements, training programs for students and educators, and improved assessment practices that prioritize the evaluation of authentic student learning.
RELATED ABSTRACTS
The rapid development of generative artificial intelligence (AI) in finance and accounting has changed how financial fraud is committed and detected. This descriptive, non-causal study examines annual data from the Federal Trade Commission's Consumer Sentinel Network Data Books on imposter-scam reports and reported losses, and from the FBI's Internet Crime Complaint Center (IC3) Internet Crime Reports on business email compromise (BEC) complaints and adjusted losses, for the period 2019–2024. Each series is summarized with trend figures and year-over-year growth calculations. Reported imposter-scam losses rise from $0.67 billion in 2019 to $2.95 billion in 2024, while imposter-scam reports increase from 647,472 to 845,806, with some fluctuation. BEC adjusted losses increase from $1.78 billion in 2019 to a peak of $2.95 billion in 2023, then decline to $2.77 billion in 2024, while BEC complaints average roughly 20,000 per year. To compare fraud trends with regulatory responses, a dated timeline of notable regulatory announcements, alerts, and enforcement actions concerning AI-enhanced financial crime over this period is compiled, and these events are placed alongside each series. This comparison establishes whether changes in financial fraud align with changes in regulatory activity. As an extension, a monthly series of Consumer Financial Protection Bureau complaint narratives that include AI-related terms will be created, with time-trend regression models using ChatGPT's public release (November 2022) as an intervention point. This thesis is designed to help define an evolving "arms race" between regulators and fraudsters.
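One standard way to operationalize the planned intervention analysis is an interrupted time-series regression with a level shift and a slope change at the release date. The sketch below assumes a monthly count series; the placeholder data and variable names are illustrative, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly series of CFPB complaint narratives mentioning
# AI-related terms; the placeholder counts below are illustrative only.
months = pd.date_range("2019-01", "2024-12", freq="MS")
rng = np.random.default_rng(0)
y = rng.poisson(30, len(months))            # stand-in complaint counts

t = np.arange(len(months))                  # linear time trend
post = (months >= "2022-11-01").astype(int) # 1 after ChatGPT's release
t_post = post * (t - int(post.argmax()))    # post-release slope change

X = sm.add_constant(np.column_stack([t, post, t_post]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # level shift (post) and slope change (t_post) estimates
```

The coefficient on `post` captures an immediate jump in complaint volume at the intervention point, while `t_post` captures any change in trend afterward.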
Vision-Language-Action (VLA) models integrate natural language understanding, visual perception, and robotic control to solve complex, multi-modal, embodied Artificial Intelligence tasks. They have achieved remarkable progress due to the availability of large-scale data, advances in transformer-based multi-modal representation learning, and imitation-learning policy training pipelines. Recent work, however, indicates that these models can be brittle and rely on superficial pixel correlations rather than robust semantic grounding. We investigate the Compositional Generalization Gap in VLA models by systematically testing their visual and linguistic understanding within a robotic simulation environment. The methodology uses the LIBERO simulation suite to evaluate open-source models such as OpenVLA and SmolVLA, quantifying visual brittleness through high-throughput parallelized rendering of visual perturbations (e.g., lighting intensity, camera viewpoint shifts, and texture randomization) and assessing language neglect through adversarial linguistic instructions (e.g., semantic rephrasing). We apply an optimization algorithm to automatically determine the worst-case adversarial scenarios in which visual and linguistic noise are combined, in order to define a detailed taxonomy of failure modes and a quantitative measure of performance degradation under compositional noise. We highlight critical safety gaps in current embodied AI architectures, moving the field toward more robust, general-purpose robotic agents.
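As a concrete illustration of this evaluation protocol, a grid sweep over combined visual and linguistic perturbations might look like the sketch below. The `make_libero_env` and `load_policy` helpers are hypothetical stand-ins, not the actual LIBERO or OpenVLA interfaces, and the perturbation values are arbitrary examples.

```python
# Illustrative perturbation sweep over combined visual and linguistic noise.
import itertools

LIGHTING = [0.5, 1.0, 1.5]      # relative lighting intensity
CAMERA_YAW_DEG = [-15, 0, 15]   # camera viewpoint shift
REPHRASE = [False, True]        # adversarial instruction rephrasing

def load_policy(name: str):
    raise NotImplementedError("placeholder for loading OpenVLA/SmolVLA")

def make_libero_env(**perturbations):
    raise NotImplementedError("placeholder for a perturbed LIBERO task")

def success_rate(policy, env, episodes: int = 20) -> float:
    wins = 0
    for _ in range(episodes):
        obs, instruction = env.reset()
        done, success = False, False
        while not done:
            obs, done, success = env.step(policy.act(obs, instruction))
        wins += int(success)
    return wins / episodes

results = {}
policy = load_policy("openvla")
for light, yaw, rephrase in itertools.product(LIGHTING, CAMERA_YAW_DEG, REPHRASE):
    env = make_libero_env(lighting=light, camera_yaw=yaw, rephrase=rephrase)
    results[(light, yaw, rephrase)] = success_rate(policy, env)

# The lowest-scoring cells of `results` approximate the worst-case
# compositional perturbations that the optimization step searches for
# automatically rather than by exhaustive enumeration.
```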
Reinforcement learning (RL) agents trained directly from pixels remain sample-inefficient and brittle, particularly in environments governed by physical dynamics. Because perception and control are learned simultaneously from reward signals, agents must rediscover basic physical regularities, such as object permanence and motion continuity, through costly trial and error. In contrast, humans acquire intuitive physics through observation before engaging in goal-directed behavior.
This project investigates whether self-supervised predictive world models can provide RL agents with such priors. Specifically, I evaluate Video Joint Embedding Predictive Architecture (V-JEPA) models, pretrained on large-scale video to learn latent physical dynamics, as representation learners for reinforcement learning. By learning physical dynamics from video through self-supervised prediction, this approach separates perception from control and provides RL agents with intuitive physics priors before reward-based training.
I systematically compare three conditions using PPO-based agents: (1) a baseline CNN trained end-to-end from scratch, (2) a frozen pretrained V-JEPA encoder with a learned policy head, and (3) a fine-tuned V-JEPA encoder jointly optimized with RL. Experiments progress from CartPole to partially observable variants with occlusion, and finally to a robotic cube-pushing task involving real physical interactions.
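A minimal sketch of condition (2) using stable-baselines3 is shown below. The dummy encoder stands in for the pretrained V-JEPA weights, which would encode pixel observations in the actual experiments; everything besides the library's own API is illustrative.

```python
import numpy as np
import torch
import torch.nn as nn
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

def load_vjepa_encoder(in_dim: int, out_dim: int) -> nn.Module:
    # Dummy stand-in: the real study would load pretrained V-JEPA weights
    # and encode image frames rather than a flat state vector.
    return nn.Sequential(nn.Flatten(), nn.Linear(in_dim, out_dim))

class FrozenJEPAExtractor(BaseFeaturesExtractor):
    """Condition (2): frozen pretrained encoder, learned policy head."""

    def __init__(self, observation_space: gym.Space, features_dim: int = 384):
        super().__init__(observation_space, features_dim)
        in_dim = int(np.prod(observation_space.shape))
        self.encoder = load_vjepa_encoder(in_dim, features_dim)
        for p in self.encoder.parameters():
            p.requires_grad = False  # unfreeze here for condition (3)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            return self.encoder(obs)

env = gym.make("CartPole-v1")  # pixel-observation wrapping omitted for brevity
model = PPO("MlpPolicy", env,
            policy_kwargs={"features_extractor_class": FrozenJEPAExtractor})
model.learn(total_timesteps=1_000)
```

Condition (1) would drop the custom extractor entirely, and condition (3) would leave the encoder parameters trainable so PPO's gradients fine-tune them jointly with the policy head.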
By testing whether video-derived physics knowledge transfers to embodied control, this work provides a foundational evaluation of predictive world modeling as a scalable precursor to reinforcement learning.
RELATED ABSTRACTS