Query by committee

H. S. Seung*, M. Opper, H. Sompolinsky

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution - peer-review

1349 Scopus citations

Abstract

We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.
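
The sketch below is not from the paper; it is a minimal Python illustration of the committee idea on the high-low game, under the assumption that the teacher is a threshold w* on [0, 1], that an input is labeled "high" when it is at or above w*, and that the labeled examples confine the consistent thresholds to an interval. Two student thresholds are drawn from that interval, and the next query is placed where the two students disagree, i.e. between them.

```python
import random

# Hedged sketch of query by committee for the high-low game (assumed setup,
# not taken verbatim from the paper): the teacher is a threshold w* in [0, 1],
# an input x is labeled 1 ("high") if x >= w* and 0 ("low") otherwise, and the
# thresholds consistent with the labeled data form an interval [lo, hi].

def teacher_label(x, w_star):
    """Teacher's answer to a query x."""
    return 1 if x >= w_star else 0

def query_by_committee(w_star=0.7321, n_queries=30, seed=0):
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0                      # current interval of consistent thresholds
    for t in range(n_queries):
        # Committee of two students: thresholds sampled from the consistent interval.
        w1, w2 = sorted(rng.uniform(lo, hi) for _ in range(2))
        # The two students disagree exactly on inputs between their thresholds;
        # query there (principle of maximal disagreement).
        x = rng.uniform(w1, w2) if w2 > w1 else rng.uniform(lo, hi)
        y = teacher_label(x, w_star)
        # Shrink the interval of consistent thresholds using the teacher's answer.
        if y == 1:
            hi = min(hi, x)    # label 1 means w* <= x
        else:
            lo = max(lo, x)    # label 0 means w* > x
        print(f"query {t + 1:2d}: width of consistent interval = {hi - lo:.3e}")

if __name__ == "__main__":
    query_by_committee()
```

Because each such query falls inside the region of disagreement, it is informative with probability one and shrinks the interval of consistent thresholds by a roughly constant factor on average, so the width (and with it the generalization error) decreases exponentially in the number of queries, whereas queries at random inputs would eventually fall outside the interval and be wasted.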

Original language: English
Title of host publication: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory
Publisher: Publ by ACM
Pages: 287-294
Number of pages: 8
ISBN (Print): 089791497X, 9780897914970
DOIs
State: Published - 1992
Event: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory - Pittsburgh, PA, USA
Duration: 27 Jul 1992 - 29 Jul 1992

Publication series

Name: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory

Conference

Conference: Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory
City: Pittsburgh, PA, USA
Period: 27/07/92 - 29/07/92
