Proving unfairness of decision making systems without model access

Yehezkel S. Resheff*, Yair Horesh, Moni Shahar

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The problem of guaranteeing the fairness of automatic decision making systems has become a topic of considerable interest. Many competing definitions of fairness have been proposed, as well as methods aiming to achieve or approximate them while maintaining the ability to train useful models. The complementary question of testing the fairness of an existing predictor is important both to the creators of machine learning systems and to users. More specifically, it is important for users to be able to prove that an unfair system that affects them is indeed unfair, even when full and direct access to the system internals is denied. In this paper, we propose a framework that enables us to prove the unfairness of predictors that have known accuracy properties, without direct access to the model, the features it is based on, or even individual predictions. To do so, we analyze the fairness-accuracy trade-off under the definition of demographic parity. We develop an information-theoretic method that uses only an external dataset containing the protected attributes and the targets, and provides a bound on the accuracy of any fair model that predicts the same targets, regardless of the features it is based on. The result is an algorithm that enables proof of unfairness, with absolutely no cooperation from the system owners.
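To make the abstract's core idea concrete, the following is a minimal, illustrative sketch of how an accuracy bound under demographic parity can be computed from an external dataset containing only the protected attribute and the target. It uses a simple argument for a binary target and two groups; the paper's information-theoretic bound is more general, and the function name `fair_accuracy_bound` and the data here are hypothetical, not taken from the paper.

```python
import numpy as np

def fair_accuracy_bound(protected, target):
    """Illustrative upper bound on the accuracy of ANY binary predictor that
    satisfies demographic parity (equal acceptance rate across two groups).

    Sketch of the argument: within group a, a predictor with acceptance rate r
    has error at least |r - P(Y=1 | A=a)|. Under demographic parity r is the
    same in both groups, so the overall error is at least
    min(P(A=0), P(A=1)) * |P(Y=1|A=0) - P(Y=1|A=1)|,
    computed from the protected attribute and target alone -- no features,
    model internals, or individual predictions needed.
    """
    protected = np.asarray(protected)
    target = np.asarray(target)

    pi0 = np.mean(protected == 0)           # group proportions
    pi1 = 1.0 - pi0
    p0 = target[protected == 0].mean()      # base rate of Y=1 in group 0
    p1 = target[protected == 1].mean()      # base rate of Y=1 in group 1

    min_error = min(pi0, pi1) * abs(p0 - p1)
    return 1.0 - min_error                  # any DP-fair model's accuracy <= this


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=10_000)               # protected attribute
    y = rng.binomial(1, np.where(a == 0, 0.2, 0.6))   # group-dependent base rates
    print(f"Any demographic-parity-fair model has accuracy <= {fair_accuracy_bound(a, y):.3f}")
```

If a deployed system is known (for example, from published performance figures) to exceed such a bound on the same targets, it cannot satisfy demographic parity; this is the style of proof of unfairness without model access that the abstract describes.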

Original language: English
Article number: 118987
Journal: Expert Systems with Applications
Volume: 213
State: Published - 1 Mar 2023

Bibliographical note

Publisher Copyright:
© 2022 Elsevier Ltd

Keywords

  • Fairness
  • Information theory
  • Machine learning
