Title: Individual fairness through robustness
Authors: Mikhail Yurochkin - University of Michigan and IBM Research (United States) [presenting]
Abstract: We consider an approach to training machine learning systems that are fair in the sense that their performance is invariant under certain perturbations of the features. For example, the output of a resume screening system should not change when the applicant's name is changed or gender pronouns are switched. We connect this intuitive notion of algorithmic fairness to individual fairness and study how to certify ML algorithms as individually fair. We demonstrate the applicability of our framework to supervised learning with neural networks, gradient boosted decision trees, and learning-to-rank problems. We also discuss extensions to auditing ML systems for individual fairness violations, and we evaluate the effectiveness of our approaches on three machine learning tasks that are susceptible to gender and racial biases.
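The invariance idea in the abstract can be illustrated with a minimal sketch: apply a "sensitive" perturbation to an input (here, swapping gender pronouns in a resume) and flag inputs whose model score changes. The pronoun map, toy scoring model, and `audit_invariance` helper below are illustrative assumptions, not the authors' actual method or code.

```python
# Hypothetical sketch: audit a toy resume-screening model for invariance
# under a sensitive perturbation (pronoun swapping). Illustrative only.

PRONOUN_SWAP = {"he": "she", "she": "he", "his": "her",
                "her": "his", "him": "her"}

def perturb(text: str) -> str:
    """Apply the sensitive perturbation: swap gender pronouns."""
    return " ".join(PRONOUN_SWAP.get(w, w) for w in text.lower().split())

def score(text: str) -> int:
    """Toy screening score: count skill keywords, ignoring pronouns."""
    skills = {"python", "statistics", "leadership"}
    return sum(w in skills for w in text.lower().split())

def audit_invariance(texts, tol=0):
    """Return the inputs whose score changes under the perturbation."""
    return [t for t in texts if abs(score(t) - score(perturb(t))) > tol]

resumes = ["She has strong Python and statistics skills",
           "He demonstrated leadership on his team"]
print(audit_invariance(resumes))  # [] -- this toy model is pronoun-invariant
```

A model that (even indirectly) keys on the swapped tokens would show up in the audit's output; certifying fairness, as in the abstract, amounts to bounding such score changes over a whole class of sensitive perturbations rather than checking a finite sample.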