Estimating the accuracies of multiple classifiers without labeled data

Ariel Jaffe, Boaz Nadler, Yuval Kluger

Research output: Contribution to journal › Conference article › peer-review

26 Scopus citations

Abstract

In various situations one is given only the predictions of multiple classifiers over a large unlabeled test set. This scenario raises the following questions: without any labeled data and without any a priori knowledge about the reliability of these different classifiers, is it possible to consistently and computationally efficiently estimate their accuracies? Furthermore, again in a completely unsupervised manner, can one construct a more accurate unsupervised ensemble classifier? In this paper, focusing on the binary case, we present simple, computationally efficient algorithms that address these questions. Under standard classifier independence assumptions, we prove that our methods are consistent and study their asymptotic error. Our approach is spectral, based on the fact that the off-diagonal entries of the classifiers' covariance matrix and 3-d tensor are rank-one. We illustrate the competitive performance of our algorithms via extensive experiments on both artificial and real datasets.
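For intuition, here is a minimal Python sketch (not the authors' reference implementation) of the rank-one idea described in the abstract: under the conditional-independence assumption, the off-diagonal entries of the classifiers' covariance matrix satisfy q_ij ≈ v_i v_j, where v_i grows with the balanced accuracy of classifier i. The function names and the diagonal-imputation iteration below are illustrative assumptions on my part; the paper additionally uses the 3-d tensor to resolve the remaining scale and sign ambiguity, which this sketch sidesteps by assuming most classifiers are better than random.

```python
import numpy as np

def fit_rank_one_offdiag(Q, n_iter=50):
    """Fit a rank-one model q_ij ~ v_i * v_j to the off-diagonal entries
    of the covariance matrix Q by repeatedly imputing the diagonal and
    taking the (scaled) leading eigenvector."""
    R = Q.copy()
    for _ in range(n_iter):
        w, V = np.linalg.eigh(R)
        v = V[:, -1] * np.sqrt(max(w[-1], 0.0))
        # keep the observed off-diagonal entries, replace only the diagonal
        np.fill_diagonal(R, v ** 2)
    return v

def unsupervised_ensemble(F):
    """F: (m, n) array of +/-1 predictions from m classifiers on n samples.
    Returns per-classifier scores (up to a common positive scale) and a
    weighted-majority ensemble prediction for each sample."""
    Q = np.cov(F)                      # m x m sample covariance of predictions
    v = fit_rank_one_offdiag(Q)
    # sign ambiguity: assume a majority of classifiers beat random guessing
    if np.sum(v > 0) < len(v) / 2:
        v = -v
    y_hat = np.sign(F.T @ v)           # weighted majority vote
    return v, y_hat
```

As a usage example, stacking the ±1 outputs of several classifiers into an (m, n) array F and calling unsupervised_ensemble(F) yields a ranking of the classifiers by the entries of v and an ensemble label per sample, all without any ground-truth labels.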

Original language: English
Pages (from-to): 407-415
Number of pages: 9
Journal: Journal of Machine Learning Research
Volume: 38
State: Published - 2015
Externally published: Yes
Event: 18th International Conference on Artificial Intelligence and Statistics, AISTATS 2015 - San Diego, United States
Duration: 9 May 2015 to 12 May 2015
Conference number: 18
https://proceedings.mlr.press/v38

Bibliographical note

Publisher Copyright:
Copyright 2015 by the authors.
