A learning theory approach to noninteractive database privacy

Avrim Blum, Katrina Ligett, Aaron Roth

Research output: Contribution to journal › Article › peer-review



In this article, we demonstrate that, ignoring computational constraints, it is possible to release synthetic databases that are useful for accurately answering large classes of queries while preserving differential privacy. Specifically, we give a mechanism that privately releases synthetic data useful for answering a class of queries over a discrete domain, with error that grows as a function of the size of the smallest net approximately representing the answers to that class of queries. We show that this in particular implies a mechanism for counting queries whose error guarantees grow only with the VC-dimension of the class of queries, which itself grows at most logarithmically with the size of the query class. We also show that it is not possible to release even simple classes of queries (such as intervals and their generalizations) over continuous domains with worst-case utility guarantees while preserving differential privacy. In response to this, we consider a relaxation of the utility guarantee and give a privacy-preserving polynomial-time algorithm that, for any halfspace query, provides an answer that is accurate for some small perturbation of the query. This algorithm does not release synthetic data, but instead outputs another data structure capable of representing an answer for each query. We also give an efficient algorithm for releasing synthetic data for the class of interval queries and axis-aligned rectangles of constant dimension over discrete domains.
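The synthetic-data mechanism described in the abstract (often called the BLR mechanism) can be illustrated, ignoring computational constraints, as an instance of the exponential mechanism: score every small candidate synthetic database by its worst-case error over the counting queries, then sample one with probability exponentially favoring low error. The sketch below is a simplified illustration over a toy discrete domain, not the paper's exact construction; the function name `blr_release`, the brute-force enumeration of candidates, and the specific scoring scale are assumptions made for readability.

```python
import itertools
import math
import random

def blr_release(data, queries, domain, m, epsilon, rng=random.Random(0)):
    """Sketch of an exponential-mechanism release of a size-m synthetic
    database. Each query is a 0/1 predicate over the domain; a candidate's
    score is minus its worst error over all queries (higher is better)."""
    n = len(data)

    def avg(db, q):
        # Fraction of records in db satisfying predicate q (a counting query).
        return sum(q(x) for x in db) / len(db)

    def score(db):
        # Negative of the worst-case counting-query error vs. the true data.
        return -max(abs(avg(db, q) - avg(data, q)) for q in queries)

    # Brute force over all multisets of size m from the domain (exponential
    # in general -- this is exactly the "ignoring computational constraints"
    # caveat in the abstract).
    candidates = list(itertools.combinations_with_replacement(domain, m))

    # Exponential mechanism: Pr[db] proportional to exp(eps * n * score / 2).
    weights = [math.exp(epsilon * n * score(db) / 2) for db in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

# Toy usage: domain {0,1,2}, threshold counting queries "x >= t".
data = [0, 0, 1, 2, 2, 2]
domain = [0, 1, 2]
queries = [lambda x, t=t: x >= t for t in domain]
synthetic = blr_release(data, queries, domain, m=3, epsilon=1.0)
```

Because `m` can be taken on the order of the VC-dimension of the query class (up to logarithmic factors), the candidate space, and hence the error, scales with the complexity of the class rather than with its raw size, which is the qualitative point of the result.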

Original language: American English
Article number: 12
Journal: Journal of the ACM
Issue number: 2
State: Published - Apr 2013
Externally published: Yes


  • Learning theory
  • Noninteractive database privacy


