
Black-Box Access is Insufficient for Rigorous AI Audits

  • Stephen Casper
  • Carson Ezell
  • Charlotte Siegmann
  • Noam Kolt
  • Taylor Lynn Curtis
  • Benjamin Bucknall
  • Andreas Haupt
  • Kevin Wei
  • Jérémy Scheurer
  • Marius Hobbhahn
  • Lee Sharkey
  • Satyapriya Krishna
  • Marvin Von Hagen
  • Silas Alberti
  • Alan Chan
  • Qinyi Sun
  • Michael Gerovitch
  • David Bau
  • Max Tegmark
  • David Krueger
  • Dylan Hadfield-Menell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

47 Scopus citations

Abstract

External audits of AI systems are increasingly recognized as a key mechanism for AI governance. The effectiveness of an audit, however, depends on the degree of access granted to auditors. Recent audits of state-of-the-art AI systems have primarily relied on black-box access, in which auditors can only query the system and observe its outputs. However, white-box access to the system's inner workings (e.g., weights, activations, gradients) allows an auditor to perform stronger attacks, more thoroughly interpret models, and conduct fine-tuning. Meanwhile, outside-the-box access to training and deployment information (e.g., methodology, code, documentation, data, deployment details, findings from internal evaluations) allows auditors to scrutinize the development process and design more targeted evaluations. In this paper, we examine the limitations of black-box audits and the advantages of white- and outside-the-box audits. We also discuss technical, physical, and legal safeguards for performing these audits with minimal security risks. Given that different forms of access can lead to very different levels of evaluation, we conclude that (1) transparency regarding the access and methods used by auditors is necessary to properly interpret audit results, and (2) white- and outside-the-box access allow for substantially more scrutiny than black-box access alone.
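The three access levels the abstract distinguishes can be illustrated with a toy sketch (hypothetical, not from the paper — the `ToyModel` and `BlackBoxAPI` names and the single linear unit are illustrative assumptions):

```python
# Toy illustration of the access levels discussed in the abstract.
# The "model" is a single linear unit y = w*x + b.

class ToyModel:
    def __init__(self, w=2.0, b=1.0):
        self.w = w  # weight (visible only with white-box access)
        self.b = b  # bias   (visible only with white-box access)

    def predict(self, x):
        return self.w * x + self.b

    def grad_w(self, x):
        # Gradient of the output w.r.t. the weight: d(w*x + b)/dw = x.
        return x


class BlackBoxAPI:
    """A black-box auditor can only submit queries and observe outputs."""
    def __init__(self, model):
        self._model = model

    def query(self, x):
        return self._model.predict(x)


model = ToyModel()
api = BlackBoxAPI(model)

# Black-box audit: probe behaviour purely through input/output queries.
assert api.query(3.0) == 7.0

# White-box audit: inspect weights and gradients directly, enabling
# stronger attacks, interpretability analyses, and fine-tuning.
assert model.w == 2.0
assert model.grad_w(3.0) == 3.0

# Outside-the-box access would cover information *about* the system
# (training data, methodology, documentation), which neither interface
# above exposes.
```

The sketch only shows why the interfaces differ in what an auditor can observe; real audits operate on far larger models, but the asymmetry is the same.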

Original language: English
Title of host publication: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Publisher: Association for Computing Machinery, Inc
Pages: 2254-2272
Number of pages: 19
ISBN (Electronic): 9798400704505
State: Published - 3 Jun 2024
Externally published: Yes
Event: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 - Rio de Janeiro, Brazil
Duration: 3 Jun 2024 - 6 Jun 2024

Publication series

Name: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024

Conference

Conference: 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024
Country/Territory: Brazil
City: Rio de Janeiro
Period: 3/06/24 - 6/06/24

Bibliographical note

Publisher Copyright:
© 2024 Owner/Author.

Keywords

  • Adversarial Attacks
  • Auditing
  • Black-Box Access
  • Evaluation
  • Explainability
  • Fairness
  • Fine-Tuning
  • Governance
  • Interpretability
  • Policy
  • Regulation
  • Risk
  • White-Box Access
