Abstract
We consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy. Incentivizing most players to report their data truthfully to the analyst constrains our design to mechanisms that provide a privacy guarantee to the participants; we use differential privacy to model individuals' privacy losses. This immediately poses a problem: differentially private computation of a linear model necessarily produces a biased estimate, and existing approaches to designing mechanisms that elicit data from privacy-sensitive individuals do not generalize well to biased estimators. We overcome this challenge through an appropriate design of the computation and payment scheme.
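As a concrete illustration of the bias the abstract refers to, here is a minimal sketch, not the mechanism designed in the paper: the function `private_slope`, the clipping bound `B`, and the sensitivity bound are all our own illustrative assumptions. It estimates a one-dimensional slope under ε-differential privacy via output perturbation; responses must be clipped to bound the estimator's sensitivity, and that clipping is what biases the private estimate.

```python
import numpy as np

def private_slope(x, y, epsilon, B=1.0, rng=None):
    """Epsilon-DP slope estimate for the model y ~ theta * x.

    Illustrative sketch only (not the paper's mechanism). The
    features x are treated as public; each private response y_i is
    clipped to [-B, B], so that the least-squares slope
        theta_hat = sum(x_i * y_i) / sum(x_i ** 2)
    changes by at most 2 * B * max|x_i| / sum(x_i ** 2) when any one
    y_i changes. Laplace noise calibrated to that sensitivity is
    then added. The clipping step is what biases the estimate
    whenever true responses fall outside [-B, B].
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    y_clipped = np.clip(np.asarray(y, dtype=float), -B, B)
    theta_hat = (x * y_clipped).sum() / (x ** 2).sum()
    # Global sensitivity of theta_hat with respect to any single
    # clipped response (features x are public).
    sensitivity = 2.0 * B * np.abs(x).max() / (x ** 2).sum()
    return theta_hat + rng.laplace(scale=sensitivity / epsilon)
```

Note that the Laplace noise itself is zero-mean: the bias comes from the clipping needed to bound sensitivity, which is what motivates the paper's joint design of the computation and the payment scheme.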
Original language | English
---|---
Pages (from-to) | 448-483
Number of pages | 36
Journal | Proceedings of Machine Learning Research
Volume | 40
State | Published - 2015
Externally published | Yes
Event | 28th Conference on Learning Theory (COLT 2015), Paris, France, 2 Jul 2015 → 6 Jul 2015
Bibliographical note
Publisher Copyright: © 2015 A. Agarwal & S. Agarwal.
Keywords
- Data privacy
- Differential privacy
- Linear regression
- Mechanism design
- Privacy