We study an important crowdsourcing setting in which agents evaluate one another and, based on these evaluations, a subset of agents is selected. This setting is ubiquitous whenever peer review is used to distribute awards within a team, allocate funding to scientists, or select publications for conferences. The fundamental challenge in applying crowdsourcing to these settings is that agents may misreport their reviews of others to increase their own chances of being selected. We propose a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties. We then show, using a detailed experiment with parameter values derived from target real-world domains, that our mechanism performs better both on average and in the worst case than other strategyproof mechanisms in the literature.
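The abstract names the Dollar Partition mechanism but does not describe its internals. As a rough illustration of the general partition-based impartial-selection idea, the sketch below randomly partitions agents into clusters, normalizes each reviewer's cross-cluster scores to sum to one (the "dollar" step), allocates the k selection slots across clusters in proportion to the external shares their members receive, and then ranks agents within each cluster by external score only. The cluster count, the proportional rounding, and all helper names are illustrative assumptions, not the paper's exact rules; an agent's own report never affects its own cluster's slot count or its own rank, which is the source of impartiality in this family of mechanisms.

```python
import random

def dollar_partition_sketch(scores, k, num_clusters=4, seed=0):
    """Illustrative sketch of partition-based impartial peer selection.

    scores[i][j] is agent i's evaluation of agent j. All structural
    details here (cluster count, floor-plus-remainder rounding) are
    assumptions for illustration, not the paper's exact Dollar
    Partition rules.
    """
    n = len(scores)
    agents = list(range(n))
    rng = random.Random(seed)
    rng.shuffle(agents)
    clusters = [agents[c::num_clusters] for c in range(num_clusters)]
    cluster_of = {a: c for c, cl in enumerate(clusters) for a in cl}

    # "Dollar" step: each reviewer's cross-cluster scores are normalized
    # to sum to 1, so every reviewer distributes the same total weight.
    share = [[0.0] * n for _ in range(n)]
    for i in range(n):
        outside = [j for j in range(n) if cluster_of[j] != cluster_of[i]]
        total = sum(scores[i][j] for j in outside)
        if total > 0:
            for j in outside:
                share[i][j] = scores[i][j] / total

    # Allocate the k slots across clusters in proportion to the total
    # share their members receive from outside reviewers.
    weight = [sum(share[i][j] for i in range(n) for j in clusters[c])
              for c in range(num_clusters)]
    total_w = sum(weight) or 1.0
    quota = [w / total_w * k for w in weight]
    slots = [int(q) for q in quota]  # naive floor; the paper rounds carefully
    # Hand leftover slots to the largest fractional remainders.
    for c in sorted(range(num_clusters),
                    key=lambda c: quota[c] - slots[c], reverse=True):
        if sum(slots) >= k:
            break
        slots[c] += 1

    # Within each cluster, pick its top agents by external normalized
    # score; agents never rank members of their own cluster.
    selected = []
    for c, cl in enumerate(clusters):
        ranked = sorted(cl, key=lambda j: sum(share[i][j] for i in range(n)),
                        reverse=True)
        selected.extend(ranked[:slots[c]])
    return sorted(selected)
```

Because an agent's report only shapes the scores and slot quotas of *other* clusters, no agent can improve its own standing by misreporting, which is the impartiality property the abstract refers to.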
Original language: American English
Title of host publication: 30th AAAI Conference on Artificial Intelligence, AAAI 2016
Number of pages: 7
State: Published - 2016
Conference: 30th AAAI Conference on Artificial Intelligence, AAAI 2016 - Phoenix, United States
Duration: 12 Feb 2016 → 17 Feb 2016
Bibliographical note (Funding Information):
Data61 (formerly known as NICTA) is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. This research has also been partly funded by Microsoft Research through its PhD Scholarship Program, and by Israel Science Foundation grant #1227/12. This work has also been partly supported by COST Action IC1205 on Computational Social Choice.
© 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.