TY - GEN
T1 - On the limits of dictatorial classification
AU - Meir, Reshef
AU - Procaccia, Ariel D.
AU - Rosenschein, Jeffrey S.
PY - 2010
Y1 - 2010
N2 - In the strategyproof classification setting, a set of labeled examples is partitioned among multiple agents. Given the reported labels, an optimal classification mechanism returns a classifier that minimizes the number of mislabeled examples. However, each agent is interested in the accuracy of the returned classifier on its own examples, and may misreport its labels in order to achieve a better classifier, thus contaminating the dataset. The goal is to design strategyproof mechanisms that correctly label as many examples as possible. Previous work has investigated the foregoing setting under limiting assumptions, or with respect to very restricted classes of classifiers. In this paper, we study the strategyproof classification setting with respect to prominent classes of classifiers - boolean conjunctions and linear separators - and without any assumptions on the input. On the negative side, we show that strategyproof mechanisms cannot achieve a constant approximation ratio, by showing that such mechanisms must be dictatorial on a subdomain, in the sense that the outcome is selected according to the preferences of a single agent. On the positive side, we present a randomized mechanism - Iterative Random Dictator - and demonstrate both that it is strategyproof and that its approximation ratio does not increase with the number of agents. Interestingly, the notion of dictatorship is prominently featured in all our results, helping to establish both upper and lower bounds.
KW - Classification
KW - Game theory
KW - Mechanism design
KW - Social choice
UR - http://www.scopus.com/inward/record.url?scp=82955182371&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:82955182371
SN - 9781617387715
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 609
EP - 616
BT - 9th International Joint Conference on Autonomous Agents and Multiagent Systems 2010, AAMAS 2010
PB - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
T2 - 9th International Joint Conference on Autonomous Agents and Multiagent Systems 2010, AAMAS 2010
Y2 - 10 May 2010
ER -