Title: Fair and Diverse Data Representation in Machine Learning
Advisor: Dr. Mohit Singh, ISyE, Georgia Institute of Technology
Committee:
Dr. Rachel Cummings, ISyE, Georgia Institute of Technology
Dr. Aleksandar Nikolov, Computer Science, University of Toronto
Dr. Sebastian Pokutta, Institute of Mathematics, Technical University of Berlin
Dr. Santosh Vempala, College of Computing, Georgia Institute of Technology
Reader: Dr. Santosh Vempala, College of Computing, Georgia Institute of Technology
Summary: This work comprises two major lines of research: subset selection, and multi-criteria dimensionality reduction with an application to fairness. Subset selection applies to the classical problem of optimal design in statistics, as well as to many machine-learning settings where learning is subject to a labeling budget constraint. This thesis also extends Principal Component Analysis (PCA), arguably the most commonly used dimensionality reduction technique, to satisfy a fairness criterion of choice. We model the additional fairness constraint as multi-criteria dimensionality reduction, in which multiple objectives must be optimized simultaneously.
Our first contribution is novel polynomial-time sampling algorithms that approximate certain optimal design criteria, improving upon the best previously known approximation ratios in the literature. We also show that the A-optimal design problem is NP-hard to approximate within a fixed constant when k = d.
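To make the A-optimal criterion concrete, here is a minimal sketch (illustrative only, not an algorithm from the thesis; the variable names and the brute-force search are assumptions for the example): given candidate vectors v_1, ..., v_n in R^d, A-optimal design selects a size-k subset S minimizing the trace of the inverse information matrix.

```python
import numpy as np
from itertools import combinations

def a_objective(V, S):
    """A-optimality criterion: trace of the inverse of the information
    matrix sum_{i in S} v_i v_i^T (lower is better)."""
    M = V[list(S)].T @ V[list(S)]
    return np.trace(np.linalg.inv(M))

def brute_force_a_design(V, k):
    """Exhaustive search over all size-k subsets; only feasible for tiny
    instances, shown here purely to illustrate the objective."""
    n = V.shape[0]
    best = min(combinations(range(n), k), key=lambda S: a_objective(V, S))
    return best, a_objective(V, best)

rng = np.random.default_rng(0)
V = rng.standard_normal((8, 3))        # 8 candidate design points in R^3
S_best, val = brute_force_a_design(V, 4)
```

The exhaustive search is exponential in n; the point of the sampling algorithms above is precisely to avoid it while retaining a provable approximation ratio.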
One of the most common heuristics used in practice to solve A- and D-optimal design problems is local search, also known as Fedorov's exchange method, owing to its simplicity and strong empirical performance. Despite its wide usage, however, no theoretical bound had been proven for this algorithm. We bridge this gap and prove approximation guarantees for local search algorithms for both the A- and D-optimal design problems.
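A minimal sketch of the exchange idea for the D-optimal criterion, which maximizes det(sum_{i in S} v_i v_i^T); the initialization, tolerance, and first-improvement swap rule here are illustrative assumptions, not necessarily the variant analyzed in the thesis:

```python
import numpy as np

def d_value(V, S):
    """D-optimality criterion: determinant of the information matrix."""
    M = V[list(S)].T @ V[list(S)]
    return np.linalg.det(M)

def fedorov_exchange(V, k, seed=0):
    """Local search: start from a random size-k subset and repeatedly swap
    one chosen point for an unchosen one whenever the swap strictly
    increases the determinant; stop at a local optimum."""
    rng = np.random.default_rng(seed)
    n = V.shape[0]
    S = set(rng.choice(n, size=k, replace=False).tolist())
    improved = True
    while improved:
        improved = False
        for i in list(S):
            for j in set(range(n)) - S:
                T = (S - {i}) | {j}
                if d_value(V, T) > d_value(V, S) * (1 + 1e-10):
                    S, improved = T, True
                    break
            if improved:
                break
    return S

rng = np.random.default_rng(1)
V = rng.standard_normal((10, 3))       # 10 candidate points in R^3
S = fedorov_exchange(V, 4)
```

Each accepted swap strictly increases the determinant, so the loop terminates at a subset no single exchange can improve; the guarantees proven in the thesis bound how far such a local optimum can be from the global one.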
Our model of multi-criteria dimensionality reduction captures several fairness criteria for dimensionality reduction, such as the Fair-PCA problem introduced by Samadi et al. in 2018 and the Nash Social Welfare (NSW) problem. In Fair-PCA, the input data is divided into k groups, and the goal is to find a single d-dimensional representation for all groups that minimizes the maximum reconstruction error of any one group. In NSW, the goal is to maximize the product of the individual variances the groups achieve in the common low-dimensional space. We develop algorithms for multi-criteria dimensionality reduction, prove guarantees on their theoretical performance, and give fast implementations in practice.
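The two objectives can be stated concretely with a short sketch (the group matrices, and the choice of pooled-data PCA as the candidate subspace, are assumptions for the example, not the thesis's algorithm): a projection U with d orthonormal columns is scored either by the worst group reconstruction error (Fair-PCA) or by the product of the variances the groups retain (NSW).

```python
import numpy as np

def group_losses(groups, U):
    """Average squared reconstruction error of each group when its rows
    are projected onto the column space of U (d orthonormal columns)."""
    P = U @ U.T
    return [float(np.mean(np.sum((A - A @ P) ** 2, axis=1))) for A in groups]

def fair_pca_objective(groups, U):
    """Fair-PCA: the maximum (worst-case) group reconstruction error."""
    return max(group_losses(groups, U))

def nsw_objective(groups, U):
    """NSW: the product of the variances each group retains under U."""
    P = U @ U.T
    return float(np.prod([np.sum((A @ P) ** 2) for A in groups]))

# Candidate subspace: top-2 principal components of the pooled data,
# i.e. what ordinary (fairness-oblivious) PCA would return.
rng = np.random.default_rng(2)
A1 = rng.standard_normal((30, 5))
A2 = rng.standard_normal((20, 5)) @ np.diag([3.0, 1, 1, 1, 1])
pooled = np.vstack([A1, A2])
_, _, Vt = np.linalg.svd(pooled - pooled.mean(axis=0), full_matrices=False)
U = Vt[:2].T                            # 5 x 2, orthonormal columns
worst_loss = fair_pca_objective([A1, A2], U)
```

Ordinary PCA minimizes the total loss over the pooled data and can therefore favor the larger or higher-variance group; the multi-criteria formulation optimizes over U against all group objectives at once.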