Truthful linear regression

Rachel Cummings, Stratis Ioannidis, Katrina Ligett

Research output: Contribution to journal › Conference article › peer-review

9 Scopus citations

Abstract

We consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy. Incentivizing most players to truthfully report their data to the analyst constrains our design to mechanisms that provide a privacy guarantee to the participants; we use differential privacy to model individuals' privacy losses. This immediately poses a problem, as differentially private computation of a linear model necessarily produces a biased estimate, and existing approaches to designing mechanisms that elicit data from privacy-sensitive individuals do not generalize well to biased estimators. We overcome this challenge through an appropriate design of the computation and payment scheme.
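The bias the abstract refers to can be seen in one standard (and simpler) approach to differentially private linear regression, in which Laplace noise is added to the sufficient statistics X^T X and X^T y before solving the normal equations: because the solve is nonlinear in the noised statistics, the resulting estimator is biased even though the noise has mean zero. The sketch below is a minimal illustration under simplified assumptions (data bounded in [-1, 1], a crude sensitivity and budget calibration chosen for readability); it is not the mechanism or payment scheme designed in the paper.

import numpy as np

# Illustrative sketch only (not the paper's mechanism): differentially private
# linear regression via noisy sufficient statistics. Laplace noise is added to
# X^T X and X^T y, then the noised normal equations are solved. The matrix
# inverse is nonlinear in the noised quantities, so the estimator is biased.

rng = np.random.default_rng(0)

def dp_linear_regression(X, y, epsilon, bound=1.0):
    """Return a (simplified) differentially private estimate of the coefficients.

    Assumes every feature and label lies in [-bound, bound]; epsilon is the
    privacy parameter. The sensitivity bound and budget split below are a
    crude, illustrative calibration, not a tight analysis.
    """
    d = X.shape[1]
    XtX = X.T @ X
    Xty = X.T @ y
    sens = 2 * bound ** 2                     # simplified per-entry sensitivity bound
    scale = sens * (d * d + d) / epsilon      # crude split of the budget over noised entries
    noise = rng.laplace(scale=scale, size=(d, d))
    noise_XtX = np.triu(noise) + np.triu(noise, 1).T   # keep the noised matrix symmetric
    noise_Xty = rng.laplace(scale=scale, size=d)
    # Small ridge term keeps the noised system well posed.
    return np.linalg.solve(XtX + noise_XtX + 1e-3 * np.eye(d), Xty + noise_Xty)

# Tiny demo: the private estimate deviates systematically from the OLS fit.
n, d = 2000, 3
X = rng.uniform(-1, 1, size=(n, d))
theta_true = np.array([0.5, -0.3, 0.2])
y = np.clip(X @ theta_true + 0.1 * rng.standard_normal(n), -1.0, 1.0)
print("OLS estimate:    ", np.linalg.solve(X.T @ X, X.T @ y))
print("Private estimate:", dp_linear_regression(X, y, epsilon=1.0))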

Original language: English
Pages (from-to): 448-483
Number of pages: 36
Journal: Proceedings of Machine Learning Research
Volume: 40
State: Published - 2015
Externally published: Yes
Event: 28th Conference on Learning Theory, COLT 2015 - Paris, France
Duration: 2 Jul 2015 - 6 Jul 2015

Bibliographical note

Publisher Copyright:
© 2015 R. Cummings, S. Ioannidis & K. Ligett.

Keywords

  • Data privacy
  • Differential privacy
  • Linear regression
  • Mechanism design
  • Privacy
