Presenter: Amanda Sherman
Faculty Sponsor: Pubali Datta
School: UMass Amherst
Research Area: Cybersecurity
ABSTRACT
Identity and Access Management (IAM) policies govern which actions serverless applications can perform on protected cloud resources and services. Misconfigurations in these policies frequently result in either overprivileged or underprivileged access rights. Overprivileged policies expand the attack surface and increase the risk of unauthorized access and security breaches, while underprivileged policies restrict essential application functionality. Achieving an optimal policy that balances security with functionality remains a persistent challenge for developers, who must navigate dense and complex documentation, work under tight development deadlines, and operate with varying levels of security expertise.
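To make the overprivileged/least-privilege distinction concrete, the sketch below contrasts two AWS-style IAM policies for a serverless function that only reads objects from one bucket. The policies are written as Python dict literals mirroring the AWS JSON policy format; the bucket name and action set are hypothetical, not taken from the study.

```python
# Hedged illustration; the bucket name and actions are hypothetical examples.

# An overprivileged policy grants every S3 action on every resource,
# expanding the attack surface far beyond what the function needs:
overprivileged_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
    ],
}

# A least-privilege policy scopes the grant to the single action and
# single resource the function actually uses:
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}
```

Crafting the second form by hand requires knowing exactly which actions and resources a workload touches, which is precisely where dense documentation and tight deadlines push developers toward wildcard grants like the first form.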
In this study, we examine how effectively Large Language Models (LLMs) can automatically generate least-privilege IAM policies for serverless applications. We built a framework to evaluate the accuracy and security of LLM-generated policies across a range of serverless workloads. By comparing LLM-generated policies against developer-written ones, we quantified the privilege escalation and over-permissioning present in each. Our results highlight important trade-offs between security and functionality in AI-assisted policy generation, identify common vulnerabilities in automated policy creation, and offer practical guidance for using LLMs to strengthen IAM security in serverless environments.
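The abstract does not spell out the framework's metrics; one minimal sketch, assuming each policy is reduced to the set of IAM actions it grants and that a ground-truth set of required actions is known per workload, is a simple set comparison (all names here are illustrative, and real comparisons must additionally expand wildcards such as "s3:*"):

```python
# Minimal sketch of an over-/under-permissioning comparison.
# Assumes policies are pre-flattened into sets of action strings;
# wildcard expansion (e.g. "s3:*") is deliberately out of scope here.

def permission_gap(granted: set, required: set) -> dict:
    """Compare a policy's granted IAM actions against the actions needed."""
    return {
        "overprivileged": granted - required,   # granted but never needed
        "underprivileged": required - granted,  # needed but missing (breaks functionality)
    }

# Hypothetical example: an LLM-generated policy vs. a workload's true needs.
llm_granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject"}
required = {"s3:GetObject", "s3:PutObject"}

gap = permission_gap(llm_granted, required)
print(sorted(gap["overprivileged"]))   # actions a least-privilege policy would revoke
print(sorted(gap["underprivileged"]))  # actions that must be added for functionality
```

Running the same comparison on developer-written policies gives a like-for-like basis for the trade-off analysis described above.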