In the strategyproof classification setting, a set of labeled examples is partitioned among multiple agents. Given the reported labels, an optimal classification mechanism returns a classifier that minimizes the number of mislabeled examples. However, each agent is interested in the accuracy of the returned classifier on its own examples, and may misreport its labels in order to obtain a more favorable classifier, thereby contaminating the dataset. The goal is to design strategyproof mechanisms that correctly label as many examples as possible. Previous work has investigated this setting under limiting assumptions, or with respect to very restricted classes of classifiers. In this paper, we study the strategyproof classification setting with respect to prominent classes of classifiers, namely boolean conjunctions and linear separators, and without any assumptions on the input. On the negative side, we show that strategyproof mechanisms cannot achieve a constant approximation ratio, by proving that such mechanisms must be dictatorial on a subdomain, in the sense that the outcome is selected according to the preferences of a single agent. On the positive side, we present a randomized mechanism, Iterative Random Dictator, and demonstrate both that it is strategyproof and that its approximation ratio does not increase with the number of agents. Interestingly, the notion of dictatorship is prominently featured in all our results, helping to establish both upper and lower bounds.
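The abstract does not spell out the mechanism's details, but the basic random-dictator idea it builds on can be sketched as follows: pick one agent uniformly at random and return the classifier that is optimal on that agent's own reported examples. Such a mechanism is strategyproof because no agent can gain by misreporting (the chosen dictator already receives its optimal classifier, and other agents' reports are ignored). The sketch below is a hedged illustration of this idea only, not the paper's Iterative Random Dictator; for concreteness it uses a hypothetical 1-D threshold classifier ("label 1 iff x >= threshold") in place of general linear separators, and the helper names (`errors`, `best_threshold`, `random_dictator`) are our own.

```python
import random

def errors(threshold, examples):
    # Count examples mislabeled by the rule "label 1 iff x >= threshold".
    # `examples` is a list of (x, label) pairs with label in {0, 1}.
    return sum((x >= threshold) != bool(label) for x, label in examples)

def best_threshold(examples):
    # Optimal 1-D separator for a single agent: every data point (plus
    # +infinity, i.e. "always label 0") is a candidate cut, so an
    # exhaustive scan finds a threshold minimizing this agent's errors.
    candidates = [x for x, _ in examples] + [float("inf")]
    return min(candidates, key=lambda t: errors(t, examples))

def random_dictator(agents, rng=random):
    # Pick one agent uniformly at random and return the classifier that
    # is optimal on that agent's own reported examples; all other
    # reports are ignored, which is what makes the rule strategyproof.
    dictator = rng.choice(agents)
    return best_threshold(dictator)
```

For example, an agent reporting `[(0, 0), (1, 1), (2, 1)]` is perfectly classified by the threshold 1, so with that single agent `random_dictator` labels every example correctly; the cost of ignoring the remaining agents is what drives the approximation ratio analyzed in the paper.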