On-line Learning of Dichotomies

N. Barkai, H. S. Seung, H. Sompolinsky

Research output: Contribution to conference › Paper › peer-review

3 Scopus citations

Abstract

The performance of on-line algorithms for learning dichotomies is studied. In on-line learning, the number of examples P is equivalent to the learning time, since each example is presented only once. The learning curve, or generalization error as a function of P, depends on the schedule at which the learning rate is lowered. For a target that is a perceptron rule, the learning curve of the perceptron algorithm can decrease as fast as P^{-1}, if the schedule is optimized. If the target is not realizable by a perceptron, the perceptron algorithm does not generally converge to the solution with lowest generalization error. For the case of unrealizability due to a simple output noise, we propose a new on-line algorithm for a perceptron yielding a learning curve that can approach the optimal generalization error as fast as P^{-1/2}. We then generalize the perceptron algorithm to any class of thresholded smooth functions learning a target from that class.
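As a concrete illustration of the on-line setting described in the abstract, the following is a minimal sketch, not the authors' exact algorithm, of a student perceptron learning a realizable perceptron target, where each example is presented once and the learning rate is annealed on a 1/t schedule. The dimension N, the number of examples P, and the schedule constant are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of on-line perceptron learning of a realizable dichotomy
# (teacher-student setup). N, P, and the learning-rate constant below
# are illustrative assumptions, not values taken from the paper.

rng = np.random.default_rng(0)
N, P = 100, 20000

teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)          # target perceptron rule
w = rng.standard_normal(N)                  # student weight vector

def generalization_error(w, teacher):
    # For Gaussian inputs, the generalization error of a perceptron
    # equals the angle between student and teacher divided by pi.
    cos = w @ teacher / (np.linalg.norm(w) * np.linalg.norm(teacher))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

for t in range(1, P + 1):
    x = rng.standard_normal(N)              # each example seen only once
    y = np.sign(teacher @ x)                # noiseless teacher label
    if np.sign(w @ x) != y:                 # update only on mistakes
        eta = 10.0 / t                      # annealed (1/t) learning-rate schedule
        w += eta * y * x                    # standard perceptron update

print(f"generalization error after {P} examples: "
      f"{generalization_error(w, teacher):.4f}")
```

Tracking the printed error as a function of P would trace out the learning curve discussed in the abstract; with an unoptimized schedule the decay can be slower than the P^{-1} rate achievable with the optimized schedule.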

Original language: English
Pages: 303-310
Number of pages: 8
State: Published - 1994
Event: 7th International Conference on Neural Information Processing Systems, NIPS 1994 - Denver, United States
Duration: 1 Jan 1994 - 1 Jan 1994

Conference

Conference: 7th International Conference on Neural Information Processing Systems, NIPS 1994
Country/Territory: United States
City: Denver
Period: 1/01/94 - 1/01/94

Bibliographical note

Publisher Copyright:
© NIPS 1994. All rights reserved.
