Abstract
We study an important crowdsourcing setting where agents evaluate one another and, based on these evaluations, a subset of agents is selected. This setting is ubiquitous when peer review is used for distributing awards in a team, allocating funding to scientists, and selecting publications for conferences. The fundamental challenge when applying crowdsourcing in these settings is that agents may misreport their reviews of others to increase their chances of being selected. We propose a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties. We then show, using a detailed experiment with parameter values derived from target real-world domains, that our mechanism performs better on average, and in the worst case, than other strategyproof mechanisms in the literature.
| Original language | English |
|---|---|
| Title of host publication | 30th AAAI Conference on Artificial Intelligence, AAAI 2016 |
| Publisher | AAAI Press |
| Pages | 390-396 |
| Number of pages | 7 |
| ISBN (Electronic) | 9781577357605 |
| State | Published - 2016 |
| Event | 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix, United States; duration: 12 Feb 2016 → 17 Feb 2016 |
Bibliographical note
Publisher Copyright: © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.