Unsupervised ensemble learning with dependent classifiers

Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger

Research output: Contribution to conference › Paper › peer-review


In unsupervised ensemble learning, one obtains predictions from multiple sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to assess them. The task is to combine these possibly conflicting predictions into an accurate meta-learner. Most works to date assumed perfect diversity between the different sources, a property known as conditional independence. In realistic scenarios, however, this assumption is often violated, and ensemble learners based on it can be severely sub-optimal. The key challenges we address in this paper are: (i) how to detect, in an unsupervised manner, strong violations of conditional independence; and (ii) how to construct a suitable meta-learner in their presence. To this end, we introduce a statistical model that allows for dependencies between classifiers. Based on this model, we develop novel unsupervised methods to detect strongly dependent classifiers, better estimate their accuracies, and construct an improved meta-learner. Using both artificial and real datasets, we demonstrate the importance of taking classifier dependencies into account and the competitive performance of our approach.
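To make the detection idea concrete, here is a minimal sketch (illustrative only, not the paper's algorithm; the function names, the alternating rank-one fit, and the residual threshold are all assumptions for illustration). For binary labels in {-1, +1}, conditional independence implies that the off-diagonal entries of the classifiers' covariance matrix are approximately of rank-one form v v^T, with v_i reflecting the i-th classifier's accuracy; large residuals from a rank-one fit then flag strongly dependent pairs.

```python
import numpy as np

# Illustrative sketch, not the paper's method: under conditional
# independence, the off-diagonal of the covariance matrix of {-1, +1}
# predictions is approximately rank one, so large residuals from a
# rank-one fit point to strongly dependent classifier pairs.

def rank_one_offdiag_fit(C, n_iter=100):
    """Fit v so that v v^T matches the off-diagonal entries of C
    in a least-squares sense (the diagonal is ignored)."""
    m = C.shape[0]
    v = np.sqrt(np.clip(np.diag(C), 0.0, None))  # crude initialization
    for _ in range(n_iter):
        for i in range(m):
            mask = np.arange(m) != i
            denom = v[mask] @ v[mask]
            if denom > 1e-12:
                v[i] = (C[i, mask] @ v[mask]) / denom
    return v

def flag_dependent_pairs(preds, threshold=0.1):
    """preds: (m, n) array of {-1, +1} predictions from m classifiers
    on n instances; returns suspect index pairs and the fitted v."""
    C = np.cov(preds)                   # m x m sample covariance
    v = rank_one_offdiag_fit(C)
    R = C - np.outer(v, v)              # residual from the rank-one model
    np.fill_diagonal(R, 0.0)
    pairs = np.argwhere(np.triu(np.abs(R) > threshold, k=1))
    return pairs, v

def weighted_vote(preds, v):
    """Naive meta-learner: sign of a v-weighted vote."""
    return np.sign(v @ preds)
```

On synthetic data, two near-duplicate classifiers leave a large residual entry, while the residuals of genuinely independent classifiers shrink with the number of instances; a dependence-aware meta-learner can then, for instance, keep a single representative from each strongly dependent group before voting. The paper's actual estimators and meta-learner are more refined than this sketch.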

Original language: American English
Number of pages: 10
State: Published - 2016
Externally published: Yes
Event: 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 - Cadiz, Spain
Duration: 9 May 2016 – 11 May 2016


Conference: 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016

Bibliographical note

Publisher Copyright:
© 2016 PMLR. All rights reserved.

