Efficient Convex Optimization in Fair Machine Learning

Presenter
Shreyas Waghe
Campus
UMass Amherst
Sponsor
Yair Zick, Department of Computer Science, UMass Amherst
Schedule
Session 2, 11:30 AM - 12:15 PM
Location
Poster Board A48, Campus Center Auditorium, Row 3 (A41-A60)
Abstract

In classical machine learning, supervised learning tasks are naturally posed as minimizing the disagreement between a model's predictions and ground-truth observations. This measure of mismatch, termed risk, is central to the parameter estimation (training) of machine learning models.
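
As a concrete illustration of the risk notion described above, the following minimal sketch computes empirical risk as the mean per-example loss; the squared loss is an arbitrary choice made here purely for the example.

```python
import numpy as np

def empirical_risk(predictions, labels, loss=lambda p, y: (p - y) ** 2):
    """Empirical risk: mean loss between predictions and ground truth.
    Squared loss is assumed here only for illustration."""
    return float(np.mean([loss(p, y) for p, y in zip(predictions, labels)]))
```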

In group-fair machine learning, datasets naturally contain protected population groups, such as those defined by race or gender, whose interests may conflict. In the interest of fairness, we require that a single model achieve equitable performance across all constituent populations. Through fairness-motivated composite aggregators called 'Malfare operators', this multi-objective per-group risk minimization task is converted into a single-objective aggregate risk minimization problem.
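
The abstract does not fix a specific aggregator, but one standard family of Malfare operators in the fair-learning literature is the weighted power mean of per-group risks; the sketch below assumes that form, with p = 1 recovering the utilitarian average and p → ∞ approaching the worst-group (egalitarian) risk.

```python
import numpy as np

def power_mean_malfare(group_risks, weights, p):
    """Weighted power-mean Malfare of per-group risks (assumed form).
    weights should be nonnegative and sum to 1. p = 1 gives the
    utilitarian average; p -> infinity approaches the worst-group
    (egalitarian) risk."""
    r = np.asarray(group_risks, dtype=float)
    w = np.asarray(weights, dtype=float)
    if np.isinf(p):
        return float(r.max())
    return float((w @ r ** p) ** (1.0 / p))
```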

Popular convex optimization algorithms in machine learning, e.g., gradient descent, are designed to exploit functional properties of their minimization objectives, such as strong convexity, smoothness, and continuity. Their efficiency, convergence guarantees, and design depend on a priori knowledge of these properties. However, in the fair machine learning setting, Malfare preserves neither the strong convexity nor the smoothness of its atomic risk functions, yielding poor convergence rates.
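
For intuition on why losing smoothness hurts, the generic fallback for convex objectives that are neither smooth nor strongly convex is the subgradient method with decaying step sizes, whose O(1/sqrt(T)) rate is markedly slower than gradient descent's rates under smoothness or strong convexity. A minimal sketch:

```python
import numpy as np

def subgradient_method(subgrad, w0, steps=1000, c=1.0):
    """Generic subgradient method with 1/sqrt(t) step sizes: the usual
    fallback when the objective (e.g., a Malfare of per-group risks) is
    convex but nonsmooth. Converges at only O(1/sqrt(T))."""
    w = np.asarray(w0, dtype=float).copy()
    for t in range(1, steps + 1):
        w -= (c / np.sqrt(t)) * subgrad(w)
    return w
```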

In this project, we consider an alternative approach in which iterative local approximation of Malfare objectives takes center stage in solving an otherwise ill-behaved optimization task. Constructing and minimizing these approximations step by step yields progress on the true Malfare objective. In turn, convergence guarantees depend on the properties of the atomic per-group risk functions. We analytically prove properties of Malfare operators, devise novel optimization methods, and experimentally study the accelerated convergence of our algorithms.
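
The abstract does not spell out the construction, but the scheme it describes resembles surrogate (majorization-minimization-style) methods. The loop below is a hypothetical sketch of that pattern only; build_surrogate and minimize are assumed callbacks, not the presented algorithm.

```python
import numpy as np

def iterative_local_approximation(w0, build_surrogate, minimize, steps=50):
    """Sketch of the construct-and-minimize loop described above.
    build_surrogate(w) returns a tractable local model of the Malfare
    objective around w; minimize(model, w) returns its minimizer, which
    becomes the next iterate. Both callbacks are hypothetical stand-ins."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        model = build_surrogate(w)   # local approximation at the current iterate
        w = minimize(model, w)       # its minimizer advances the true objective
    return w
```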


Keywords
Convex Optimization, Fair Machine Learning, Algorithms, Machine Learning, Parameter Estimation
Research Area
Probability, Statistics, and Machine Learning

SIMILAR ABSTRACTS (BY KEYWORD)

Research Area                                   Presenter               Keywords (similarity)
Probability, Statistics, and Machine Learning   Mitagar, Anish          Convex Optimization (1.0), Fair Algorithms (0.857143)
Computer Science                                Sejdi, Anxhela          Machine Learning
Computer Science                                Jung, Hayun             Machine Learning
Computer Science                                Baron, Cameron Shea     Machine Learning
Artificial Intelligence                         Landaverde, Yeilin M.   Machine Learning