Abstract
Sufficiency and separation are two fundamental criteria in classification fairness. For binary classifiers, these concepts correspond to subgroup calibration and equalized odds, respectively, and are known to be incompatible except in trivial cases. In this work, we explore a relaxation of these criteria based on f-divergences between distributions – essentially the same relaxation studied in the literature on approximate multicalibration – analyze their relationships, and derive implications for fair representations and downstream uses (post-processing) of representations. We show that when a protected attribute is determinable from features present in the data, the (relaxed) criteria of sufficiency and separation exhibit a tradeoff, forming a convex Pareto frontier. Moreover, we prove that when a protected attribute is not fully encoded in the data, achieving full sufficiency may be impossible. This finding not only strengthens the case against “fairness through unawareness” but also highlights an important caveat for work on (multi-)calibration.
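For context, the sketch below states the standard exact forms of the two criteria and one natural f-divergence relaxation of them. The notation (Y for the label, A for the protected attribute, R for the classifier's score) and the precise form of the relaxed inequalities are illustrative assumptions, not necessarily the exact definitions used in the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\indep}{\perp\!\!\!\perp} % conditional-independence symbol
\begin{document}
% Notation (assumed for illustration): Y = label, A = protected attribute, R = classifier score.
Exact criteria:
\begin{align*}
  \text{Sufficiency (subgroup calibration):} &\quad Y \indep A \mid R
    & \Pr[Y{=}1 \mid R{=}r, A{=}a] &= \Pr[Y{=}1 \mid R{=}r] \\
  \text{Separation (equalized odds):} &\quad R \indep A \mid Y
    & \Pr[R{=}r \mid Y{=}y, A{=}a] &= \Pr[R{=}r \mid Y{=}y]
\end{align*}
One natural $f$-divergence relaxation (illustrative form only):
\begin{align*}
  \varepsilon\text{-sufficiency:} &\quad
    \mathbb{E}_{R,A}\, D_f\!\big(P(Y \mid R, A)\,\big\|\,P(Y \mid R)\big) \le \varepsilon, \\
  \delta\text{-separation:} &\quad
    \mathbb{E}_{Y,A}\, D_f\!\big(P(R \mid Y, A)\,\big\|\,P(R \mid Y)\big) \le \delta .
\end{align*}
\end{document}
```

With this reading, the exact criteria are recovered at ε = δ = 0, and the paper's tradeoff concerns how small both quantities can simultaneously be.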
Original language | English |
---|---|
Title of host publication | 6th Symposium on Foundations of Responsible Computing, FORC 2025 |
Editors | Mark Bun |
Publisher | Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
ISBN (Electronic) | 9783959773676 |
DOIs | |
State | Published - 3 Jun 2025 |
Event | 6th Symposium on Foundations of Responsible Computing, FORC 2025 - Stanford, United States. Duration: 4 Jun 2025 → 6 Jun 2025
Publication series
Name | Leibniz International Proceedings in Informatics, LIPIcs |
---|---|
Volume | 329 |
ISSN (Print) | 1868-8969 |
Conference
Conference | 6th Symposium on Foundations of Responsible Computing, FORC 2025 |
---|---|
Country/Territory | United States |
City | Stanford |
Period | 4/06/25 → 6/06/25 |
Bibliographical note
Publisher Copyright: © Etam Benger and Katrina Ligett.
Keywords
- Algorithmic fairness
- Information theory
- Sufficiency-separation tradeoff