Privately Learning Thresholds: Closing the Exponential Gap

Haim Kaplan*, Katrina Ligett, Yishay Mansour, Moni Naor, Uri Stemmer

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

31 Scopus citations

Abstract

We study the sample complexity of learning threshold functions under the constraint of differential privacy. We assume that each labeled example in the training data is the information of one individual, and we would like to produce a generalizing hypothesis h while guaranteeing differential privacy for those individuals. Intuitively, this means that no single labeled example in the training data should have a significant effect on the choice of the hypothesis. This problem has received much attention recently; unlike the non-private case, where the sample complexity is independent of the domain size and depends only on the desired accuracy and confidence, for private learning the sample complexity must depend on the domain size X (even for approximate differential privacy). Alon et al. (STOC 2019) showed a lower bound of Ω(log* |X|) on the sample complexity, and Bun et al. (FOCS 2015) presented an approximate-private learner with sample complexity Õ(2^{log* |X|}). In this work we reduce this gap significantly, almost settling the sample complexity. We first present a new upper bound (algorithm) of Õ((log* |X|)²) on the sample complexity, and then present an improved version with sample complexity Õ((log* |X|)^{1.5}). Our algorithm is constructed for the related interior point problem, where the goal is to find a point between the largest and smallest input elements. It is based on selecting an input-dependent hash function and using it to embed the database into a domain whose size is reduced logarithmically; this results in a new database, an interior point of which can be used to generate an interior point of the original database in a differentially private manner.
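The interior point problem mentioned in the abstract also admits a much simpler, quantitatively weaker private solution on a finite ordered domain: the textbook exponential mechanism, whose sample complexity scales with log |X| rather than log* |X|. The sketch below shows that baseline, not the paper's hash-based construction; the function name and parameters are illustrative.

```python
import math
import random

def dp_interior_point(data, domain, epsilon, rng=None):
    """Epsilon-DP interior point via the exponential mechanism (baseline sketch).

    Score of a candidate x is min(#{d <= x}, #{d >= x}); it is positive
    exactly when x lies between min(data) and max(data), and changing a
    single element of data changes it by at most 1 (sensitivity 1), so
    sampling x with probability proportional to exp(epsilon * score / 2)
    satisfies epsilon-differential privacy.
    """
    rng = rng or random
    scores = [min(sum(1 for d in data if d <= x),
                  sum(1 for d in data if d >= x)) for x in domain]
    top = max(scores)
    # Subtract the top score before exponentiating for numerical stability.
    weights = [math.exp(epsilon * (s - top) / 2) for s in scores]
    r = rng.random() * sum(weights)
    acc = 0.0
    for x, w in zip(domain, weights):
        acc += w
        if acc >= r:
            return x
    return list(domain)[-1]
```

With enough input points the sampled x lies between the smallest and largest elements with high probability; the paper's contribution is reducing the number of points needed from roughly log |X| (as in this baseline) down to poly(log* |X|).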

Original language: American English
Pages (from-to): 2263-2285
Number of pages: 23
Journal: Proceedings of Machine Learning Research
Volume: 125
State: Published - 2020
Event: 33rd Conference on Learning Theory, COLT 2020 - Virtual, Online, Austria
Duration: 9 Jul 2020 - 12 Jul 2020

Bibliographical note

Funding Information:
∗ Tel Aviv University and Google Research. Partially supported by the Israel Science Foundation (grant 1595/19).
† School of Computer Science and Engineering, Hebrew University of Jerusalem, Jerusalem 91904, Israel. Supported by NSF grants CNS-1254169 and CNS-1518941, US-Israel Binational Science Foundation grant 2012348, Israel Science Foundation (ISF) grant #1044/16, the United States Air Force and DARPA under contract FA8750-16-C-0022, and the Federmann Cyber Security Center in conjunction with the Israel national cyber directorate. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.
‡ Tel Aviv University and Google Research. Supported in part by a grant from the Israel Science Foundation.
§ Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot 76100, Israel. Supported in part by a grant from the Israel Science Foundation (no. 950/16) and the US-Israel Binational Science Foundation grant 2012348. Incumbent of the Judith Kleeman Professorial Chair.
¶ Ben-Gurion University and Google Research. Partially supported by the Israel Science Foundation (grant 1871/19).

Publisher Copyright:
© 2020 H. Kaplan, K. Ligett, Y. Mansour, M. Naor & U. Stemmer.
