16654@AAAI

Total: 1

#1 Protecting the Protected Group: Circumventing Harmful Fairness

Authors: Omer Ben-Porat; Fedor Sandomirskiy; Moshe Tennenholtz

The recent literature on fair machine learning shows that the choice of fairness constraints must be driven by the utilities of the population. However, virtually all previous work makes the unrealistic assumption that the exact underlying utilities of the population (representing the private tastes of individuals) are known to the regulator that imposes the fairness constraint. In this paper we initiate the study of the mismatch: the unavoidable difference between the underlying utilities of the population and the utilities assumed by the regulator. We demonstrate that this mismatch can make the disadvantaged protected group worse off after the fairness constraint is imposed, and we provide tools for designing fairness constraints that help the disadvantaged group despite the mismatch.
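
As a rough, hypothetical illustration of how such a mismatch can backfire (a toy sketch, not the paper's model or results): suppose a firm accepts applicants to maximize its own profit, and a regulator imposes demographic parity while assuming every accepted applicant gains utility +1. If some protected-group applicants are actually harmed by acceptance (say, a loan they cannot repay), forcing equal acceptance rates can lower the group's true utility even though it looks beneficial under the regulator's assumed utilities. All numbers below are made up for illustration.

```python
# Hypothetical toy example: a profit-maximizing firm, with and without a
# demographic-parity constraint, evaluated against the applicants' *true*
# utilities rather than the utilities the regulator assumes.
from itertools import combinations

# Each applicant is (firm profit if accepted, true utility if accepted).
group_a = [(0.9, 1.0), (0.8, 1.0), (0.7, 1.0), (0.6, 1.0)]        # advantaged group
group_b = [(0.5, 1.0), (-0.1, -1.0), (-0.2, -1.0), (-0.3, -1.0)]  # protected group

def firm_profit(accepted):
    return sum(p for p, _ in accepted)

def true_utility(accepted):
    return sum(u for _, u in accepted)

def best_policy(parity):
    """Brute-force the firm's profit-maximizing acceptance sets,
    optionally under equal acceptance rates (demographic parity)."""
    best = None
    for ka in range(len(group_a) + 1):
        for kb in range(len(group_b) + 1):
            # Parity: acceptance rate must match across groups.
            if parity and ka * len(group_b) != kb * len(group_a):
                continue
            for sa in combinations(group_a, ka):
                for sb in combinations(group_b, kb):
                    profit = firm_profit(sa) + firm_profit(sb)
                    if best is None or profit > best[0]:
                        best = (profit, sa, sb)
    return best

for parity in (False, True):
    profit, sa, sb = best_policy(parity)
    print(f"parity={parity}: firm profit={profit:.1f}, "
          f"protected group's true utility={true_utility(sb):+.1f}")

# Without parity only the protected-group applicant who benefits is accepted
# (true group utility +1.0); with parity the firm's cheapest way to comply is
# to accept everyone, and the group's true utility drops to -2.0, even though
# under the regulator's assumed +1-per-acceptance utilities the constraint
# appears to help the group.
```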