Strategyproof classification under constant hypotheses: A tale of two functions

Reshef Meir*, Ariel D. Procaccia, Jeffrey S. Rosenschein

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Scopus citations

Abstract

We consider the following setting: a decision maker must make a decision based on reported data points with binary labels. Subsets of data points are controlled by different selfish agents, who might misreport the labels in order to sway the decision in their favor. We design mechanisms (both deterministic and randomized) that reach an approximately optimal decision and are strategyproof, i.e., agents are best off when they tell the truth. We then recast our results in a classical machine learning classification framework, where the decision maker must make a decision (choose between the constant positive hypothesis and the constant negative hypothesis) based only on a sampled subset of the agents' points.
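As an illustrative sketch of the setting (not necessarily the paper's exact mechanism), one natural strategyproof rule lets each agent vote for the constant hypothesis that best fits its own reported points, then takes a weighted majority of those votes; the function names `agent_vote` and `strategyproof_classify` below are hypothetical.

```python
def agent_vote(labels):
    """An agent's vote: the constant hypothesis (+1 = all-positive,
    -1 = all-negative) that best fits the agent's own points."""
    pos = sum(1 for y in labels if y == 1)
    return 1 if pos >= len(labels) - pos else -1

def strategyproof_classify(agents):
    """Weighted majority over per-agent votes: an agent controlling k
    points contributes k votes for its preferred constant hypothesis.
    Since the outcome depends on each agent's report only through that
    agent's best-fit vote, misreporting labels cannot help an agent,
    so truth-telling is a dominant strategy."""
    score = sum(len(labels) * agent_vote(labels) for labels in agents)
    return 1 if score >= 0 else -1

# Three agents: one prefers +1 (3 points), one -1 (2 points), one +1 (1 point).
print(strategyproof_classify([[1, 1, -1], [-1, -1], [1]]))  # weighted score 3 - 2 + 1 = 2, so +1
```

Note that the decision can differ from the globally optimal label count, which is the source of the approximation ratio studied in the paper.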

Original language: American English
Title of host publication: AAAI-08/IAAI-08 Proceedings - 23rd AAAI Conference on Artificial Intelligence and the 20th Innovative Applications of Artificial Intelligence Conference
Pages: 126-131
Number of pages: 6
State: Published - 2008
Event: 23rd AAAI Conference on Artificial Intelligence and the 20th Innovative Applications of Artificial Intelligence Conference, AAAI-08/IAAI-08 - Chicago, IL, United States
Duration: 13 Jul 2008 – 17 Jul 2008

Publication series

Name: Proceedings of the National Conference on Artificial Intelligence
Volume: 1

Conference

Conference: 23rd AAAI Conference on Artificial Intelligence and the 20th Innovative Applications of Artificial Intelligence Conference, AAAI-08/IAAI-08
Country/Territory: United States
City: Chicago, IL
Period: 13/07/08 – 17/07/08
