Abstract
We present a framework for certifying the fairness degree of a model through an interactive and privacy-preserving test. The framework verifies any trained model, regardless of its training process and architecture, and thus allows us to evaluate any deep learning model empirically against multiple fairness definitions. We tackle two scenarios: one where the test data is available only to the tester, and one where it is publicly known in advance, even to the model creator. We investigate the soundness of the proposed approach using theoretical analysis and present statistical guarantees for the interactive test. Finally, we provide a cryptographic technique that automates fairness testing and certified inference with only black-box access to the model while hiding the participants' sensitive data.
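The abstract describes testing a model against fairness definitions with only black-box access. As a minimal illustration of one such empirical check, the sketch below estimates the demographic parity gap of an arbitrary predictor from its outputs alone. All names here (`demographic_parity_gap`, `toy_model`, the sample schema) are hypothetical illustrations, not the paper's actual protocol, which additionally involves interaction and cryptographic privacy protections.

```python
import random

def demographic_parity_gap(predict, samples, group_of):
    """Empirical demographic parity gap between two groups,
    using only black-box access to `predict` (x -> 0/1)."""
    rates = {}
    for g in (0, 1):
        xs = [x for x in samples if group_of(x) == g]
        rates[g] = sum(predict(x) for x in xs) / len(xs)
    return abs(rates[0] - rates[1])

# Toy black-box model: positive prediction iff score > 0.5.
def toy_model(x):
    return 1 if x["score"] > 0.5 else 0

random.seed(0)
samples = [{"group": random.randint(0, 1), "score": random.random()}
           for _ in range(1000)]
gap = demographic_parity_gap(toy_model, samples, lambda x: x["group"])
print(gap)  # small gap: the toy model ignores group membership
```

A real tester would also bound the sampling error of this estimate (the paper's statistical guarantees address exactly this) and would run the query protocol under encryption rather than in the clear.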
Original language | English |
---|---|
Title of host publication | AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society |
Publisher | Association for Computing Machinery, Inc |
Pages | 926-935 |
Number of pages | 10 |
ISBN (Electronic) | 9781450384735 |
DOIs | |
State | Published - 21 Jul 2021 |
Event | 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2021 - Virtual, Online, United States; Duration: 19 May 2021 → 21 May 2021 |
Publication series
Name | AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society |
---|---|
Conference
Conference | 4th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2021 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 19/05/21 → 21/05/21 |
Bibliographical note
Publisher Copyright: © 2021 ACM.
Keywords
- cryptography
- fairness
- machine-learning
- privacy