Cybersecurity
Wi‑Fi MythBusters: Campus Edition
Presenter: Chibuike Francis Onyeocha
Faculty Sponsor: Clarissa Codrington
School: Massachusetts Bay Community College
Research Area: Cybersecurity
Location: Poster Session 2, 11:30 AM - 12:15 PM: Campus Center Auditorium [A85]

College students face a growing problem: their heavy reliance on campus networks, learning management systems, personal devices, and mobile apps exposes them to frequent cyber threats, including phishing targeting student email and course portals, malware delivered through shared lab computers, attacks that exploit outdated software on laptops and phones, and data interception on unsecured dorm or cafe Wi‑Fi, in part because many students lack consistent, campus‑specific security habits. This poster argues that the solution lies in simple, everyday cybersecurity practices tailored to student life: using strong, unique passwords and multi‑factor authentication for school accounts, keeping operating systems and campus‑issued software up to date, disabling auto‑connect and using VPNs on public or dorm networks, and applying phishing‑recognition skills to messages that impersonate professors, financial aid offices, or campus services. Drawing on scholarly research and campus best practices, the study shows that these low-effort behaviors directly block the most common attack vectors students encounter, and that institutional support, such as orientation training, enforced patch management, secure campus Wi‑Fi, and accessible IT help, amplifies their effectiveness. When students consistently apply these basic, context-aware practices and institutions provide supportive systems, student vulnerability to common cyberattacks drops substantially; moreover, campuses benefit through reduced service disruptions, lower remediation costs, preserved academic integrity, and strengthened institutional reputation, supporting the thesis that cybersecurity on campus is both an individual responsibility and a shared institutional obligation.


Project GAIA: An AI-Supported Governance Architecture for National Cybersecurity in Iraq
Presenter: Nile Al-Hamadany
Faculty Sponsor: Manish Wadhwa
School: Salem State University
Research Area: Cybersecurity
Location: Poster Session 2, 11:30 AM - 12:15 PM: Room 165 [D10]

As digital infrastructure becomes central to managing national systems and services, resilient cybersecurity has emerged as a foundational requirement for modern states. Iraq, a developing country, is undergoing a digital expansion to modernize its technological capabilities in line with international standards, yet it faces significant cybersecurity challenges resulting from decades of political instability and underfunded infrastructure. These problems are compounded by fragmented governance, limited coordination between sectors, and outdated legal frameworks that are insufficient for tackling modern cybercrime. With national cybersecurity capacity that is comparatively limited relative to regional peers, Iraq serves as a critical case study in how governance deficiencies can create exposed cyber environments. This paper analyzes these structural challenges and proposes Project GAIA (Governance Architecture for Information Assurance), a conceptual AI-supported governance framework designed to strengthen national coordination and oversight across critical sectors. The proposed framework integrates established AI Trust, Risk, and Security Management (TRiSM) principles and NIST standards to support structured and accountable decision-making under human supervision.



Evaluating Automated IAM Policy Generation for Serverless Cloud Applications
Presenter: Amanda Sherman
Faculty Sponsor: Pubali Datta
School: UMass Amherst
Research Area: Cybersecurity
Location: Poster Session 3, 1:15 PM - 2:00 PM: Campus Center Auditorium [A7]

Identity and Access Management (IAM) policies govern which actions serverless applications can perform on protected cloud resources and services. Misconfigurations in these policies frequently result in either overprivileged or underprivileged access rights. Overprivileged policies expand the attack surface and increase the risk of unauthorized access and security breaches, while underprivileged policies restrict essential application functionality. Achieving an optimal policy that balances security with functionality remains a persistent challenge for developers, who must navigate dense and complex documentation, work under tight development deadlines, and operate with varying levels of security expertise.

In this study, we examine how well Large Language Models (LLMs) can automatically create least-privilege IAM policies for serverless applications. We built a framework to test the accuracy and security of LLM-generated policies across different serverless workloads. By comparing these policies to those written by developers, we measured how much privilege escalation and over-permissioning occurred in both LLM-generated and human-written policies. Our results highlight important trade-offs between security and functionality in AI-assisted policy generation, point out common vulnerabilities in automated policy creation, and offer practical advice for using LLMs to strengthen IAM security in serverless environments. 
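The policy comparison described above can be illustrated with a minimal sketch. This is a hypothetical example, not the study's framework: the function names and sample policies are invented for illustration, policies follow the standard AWS IAM JSON format, and wildcard expansion of actions and resources is ignored for brevity.

```python
# Sketch: scoring a candidate IAM policy against a least-privilege
# ground truth by comparing the sets of allowed (action, resource) pairs.

def allowed_pairs(policy):
    """Flatten a policy's Allow statements into (action, resource) pairs."""
    pairs = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        for action in actions:
            for resource in resources:
                pairs.add((action, resource))
    return pairs

def privilege_gap(candidate, ground_truth):
    """Return (overprivileged, underprivileged) sets of permission pairs."""
    cand = allowed_pairs(candidate)
    truth = allowed_pairs(ground_truth)
    return cand - truth, truth - cand

# Example: a generated policy granting one permission too many.
generated = {"Statement": [{"Effect": "Allow",
                            "Action": ["s3:GetObject", "s3:DeleteObject"],
                            "Resource": "arn:aws:s3:::photos/*"}]}
required = {"Statement": [{"Effect": "Allow",
                           "Action": "s3:GetObject",
                           "Resource": "arn:aws:s3:::photos/*"}]}

over, under = privilege_gap(generated, required)
# over holds the extra s3:DeleteObject grant; under is empty, so the
# policy is overprivileged but not functionality-breaking.
```

A nonempty "over" set corresponds to the expanded attack surface discussed above, while a nonempty "under" set corresponds to broken application functionality.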


Cyber Passport
Presenter: Dylen Matovu
Faculty Sponsor: Devan Walton
School: Northern Essex Community College
Research Area: Cybersecurity
Location: Poster Session 3, 1:15 PM - 2:00 PM: Campus Center Auditorium [A48]

Facial recognition technology is now used routinely in criminal investigations, yet the algorithms driving these searches remain scientifically biased and legally unregulated. This project investigates the admissibility and constitutional implications of biometric digital identity evidence derived from such technologies, asking whether current evidentiary rules adequately protect defendants when the “witness” identifying them is a biased algorithm. Police frequently use social media photos as probe images, and NIST studies confirm that error rates for minority populations can be up to one hundred times higher than for white individuals. Compounding this scientific unreliability, law enforcement often conceals the use of facial recognition altogether, presenting algorithmic matches as definitive identifications rather than low-confidence leads, a practice illustrated in cases like State v. Tolbert. Using doctrinal analysis of Brady, Daubert, and Fourth Amendment warrant requirements, alongside empirical review of NIST data on false positive rates and training set bias, this research argues that facial recognition technology fails to meet the reliability standards required for forensic evidence. Because these systems operate as proprietary black boxes, defendants cannot meaningfully cross-examine the process by which they were identified. The project concludes that without mandated disclosure of the five stages of a facial recognition search, the use of digital identity evidence violates due process. These findings carry broader significance as legislatures in New York and at the federal level consider restrictions on warrantless biometric surveillance; absent strict evidentiary rules or legislative bans, the current ease of use for police creates an unacceptable risk of false imprisonment.



Collusion Detection in Multi-Agent Systems
Presenter: Fardeen Riaz Ahamed
Group Members: Leilani Karanja, Ryan Firdosh Kotwal, Tisya Singh
Faculty Sponsor: Eugene Bagdasarian
School: UMass Amherst
Research Area: Cybersecurity
Location: Poster Session 4, 2:15 PM - 3:00 PM: Concourse [B15]

Multi-agent systems (MAS) are increasingly deployed in safety and security-critical domains, including distributed cyber-defense, autonomous vehicle coordination, and large-scale decision-making systems. In such settings, the risk of agents creating a secret channel to collude greatly increases. Undetected collusion can lead to severe consequences, including data leakage, compromised coordination, and system-wide failures. Despite these risks, known collusion-detection mechanisms in multi-agent systems are extremely limited in their ability to detect collusion in secret channels.

This work proposes a novel, supervised, and domain-agnostic collusion-detection framework that leverages large language models deployed locally using vLLM. Agent interactions are processed by Qwen-based models, which analyze inter-agent communication patterns without access to internal agent states or predefined attack signatures. Agents compute behavioral heuristics, including response-time variance, communication frequency, contribution imbalance, and disagreement rates, which are provided as labeled inputs to the LLM for analysis. This enables real-time detection of anomalous or collusive behavior while remaining agnostic to specific collusion strategies.
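The behavioral heuristics named above could be computed from a shared message log along these lines. This is an illustrative sketch, not the authors' implementation: the message field names, the feature definitions, and the equal-split baseline for contribution imbalance are all assumptions made for the example.

```python
# Sketch: deriving per-agent behavioral features from a message log,
# suitable for serialization as labeled inputs to an LLM analyzer.
from collections import Counter
from statistics import pvariance

def behavioral_features(messages, agent):
    """Summarize one agent's behavior from a shared message log.

    Each message is a dict with 'sender', 'response_time' (seconds),
    'tokens' (contribution size), and 'agreed' (bool: whether the
    message agreed with the group's current consensus).
    """
    own = [m for m in messages if m["sender"] == agent]
    if not own:
        return None
    counts = Counter(m["sender"] for m in messages)
    total_tokens = sum(m["tokens"] for m in messages)
    own_tokens = sum(m["tokens"] for m in own)
    return {
        # Response-time variance: bursty timing may signal a side channel.
        "response_time_variance": pvariance([m["response_time"] for m in own]),
        # Communication frequency: this agent's share of all messages.
        "message_share": counts[agent] / len(messages),
        # Contribution imbalance: token share minus an equal split.
        "contribution_imbalance": own_tokens / total_tokens - 1 / len(counts),
        # Disagreement rate: how often the agent broke with consensus.
        "disagreement_rate": sum(not m["agreed"] for m in own) / len(own),
    }

log = [
    {"sender": "a", "response_time": 1.0, "tokens": 10, "agreed": True},
    {"sender": "a", "response_time": 3.0, "tokens": 10, "agreed": False},
    {"sender": "b", "response_time": 2.0, "tokens": 20, "agreed": True},
    {"sender": "b", "response_time": 2.0, "tokens": 20, "agreed": True},
]
features = behavioral_features(log, "a")
```

In this design the detector sees only communication metadata, consistent with the framework's constraint of operating without access to internal agent states or predefined attack signatures.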

The proposed approach is being evaluated using Colosseum, a modular multi-agent simulation environment for safety and security research, across various instances of collusion. Planned evaluation metrics include detection accuracy, false-positive rates, and detection latency. This work aims to demonstrate that lightweight, supervised anomaly detection can provide a scalable and generalizable defense against collusion in multi-agent systems.


What Your Phone Knows About You: A Digital Forensics Perspective on Mobile App Data Collection
Presenter: Oleksandr Lysyi
Faculty Sponsor: Enping Li
School: Bridgewater State University
Research Area: Cybersecurity
Location: Poster Session 4, 2:15 PM - 3:00 PM: Concourse [B16]

People increasingly rely on mobile apps for many aspects of their daily lives. Most users take it for granted that privacy policies are designed to protect their personal data. However, how these policies actually protect users and to what extent their data is collected and used often remains unclear. The purpose of this study is to conduct a forensic analysis of popular social media applications on both Android and iOS platforms. The goal is to gain a systematic understanding of the types of user data that are collected, the locations where such data are stored, and how the data are protected or secured. The methodology includes conducting a survey of digital device forensic studies, related research findings, and technical analyses focusing on social media applications such as Instagram, TikTok, Snapchat, and WhatsApp. The technical forensic analysis will focus on comparisons between Android and iOS platforms, the types of data collected, such as messages, geolocation, contact information, and search queries, as well as data retention after deletion and potential discrepancies between privacy policies and documented practices. Based on our research and forensic analysis, we aim to gain a better understanding of whether applications collect more personal data than disclosed in their privacy policies, the comparative levels of data protection on Android and iOS, and the recoverability and retention duration of deleted data across different applications and platforms.
