Fair or Fail? Formal Methods for Evaluating ML Fairness

Presenter: Shriya Gautam

Faculty Sponsor: Heather Conboy

School: UMass Amherst

Research Area: Computer Science

Session: Poster Session 3, 1:15 PM - 2:00 PM, 163, C23

ABSTRACT

As machine learning and software have become increasingly integrated into the lives of the average person, ensuring that this kind of algorithmic decision-making is unbiased and fair to all groups is no longer just a technical preference but an ethical necessity. The potential ramifications of deploying unfair or biased algorithms in real-world settings can be disastrous, which is why a formal system for verification and evaluation is more important than ever. This study explores the landscape of machine learning fairness frameworks and constraints as they apply to a variety of software and machine learning examples, with a focus on ensuring equitable outcomes across diverse demographic groups. Through the planned activities, the study will produce a comprehensive overview of the methods by which fairness can be measured and guaranteed.

To accomplish this, the study will review several existing fairness evaluation frameworks, such as the Seldonian framework, and survey other measures currently used to track fairness and bias across demographic groups. The study will then culminate in the replication of key algorithmic experiments and the application of these fairness-aware methods to new software use cases. The resulting overview will provide a roadmap for measuring, auditing, and enforcing fairness in modern software systems.
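For illustration, one of the simplest group-fairness measures of the kind surveyed here is demographic parity difference: the gap in positive-prediction rates between two demographic groups. The sketch below is a minimal, self-contained example of that metric; the function names and the toy loan-approval data are illustrative assumptions, not taken from any particular framework named above.

```python
def positive_rate(predictions, groups, group):
    """Fraction of members of `group` who received a positive (1) prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups.

    A value near 0 means the classifier assigns positive outcomes to both
    groups at similar rates on this data; larger values signal potential
    disparate impact under this metric.
    """
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Toy example (hypothetical data): 1 = loan approved, 0 = denied
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups, "A", "B")
# Group A is approved at rate 0.75, group B at 0.25, so gap = 0.5
```

A fairness-aware framework such as the Seldonian approach goes further than simply reporting such a gap: it treats a bound on the metric as a constraint that the learned model must satisfy with high probability.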

