Learning from examples in large neural networks

H. Sompolinsky*, N. Tishby, H. S. Seung

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

119 Scopus citations

Abstract

A statistical-mechanical theory of learning from examples in layered networks at finite temperature is studied. When the training error is a smooth function of continuously varying weights, the generalization error falls off asymptotically as the inverse number of examples. By analytical and numerical studies of single-layer perceptrons, we show that when the weights are discrete, the generalization error can exhibit a discontinuous transition to perfect generalization. For intermediate sizes of the example set, the state of perfect generalization coexists with a metastable spin-glass state.
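The abstract's first result, that the generalization error of a network with continuous weights decays asymptotically as the inverse of the number of training examples, can be illustrated with a simple teacher-student simulation. The sketch below is not the paper's finite-temperature Gibbs framework; it is a minimal, assumed setup in which a single-layer perceptron with continuous weights is trained by the classical perceptron rule on examples labeled by a random teacher, and the generalization error is estimated on fresh inputs as the training-set size p grows.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # input dimension (illustrative choice, not from the paper)
teacher = rng.standard_normal(N)  # random teacher perceptron

def generalization_error(w, teacher, n_test=5000):
    """Fraction of fresh random inputs the student misclassifies."""
    X = rng.standard_normal((n_test, N))
    return np.mean(np.sign(X @ w) != np.sign(X @ teacher))

errors = {}
for p in [25, 50, 100, 200, 400]:          # training-set sizes
    X = rng.standard_normal((p, N))
    y = np.sign(X @ teacher)               # teacher-generated labels
    w = np.zeros(N)                        # student with continuous weights
    for _ in range(100):                   # perceptron-rule training epochs
        for x, t in zip(X, y):
            if np.sign(x @ w) != t:        # update only on mistakes
                w += t * x
    errors[p] = generalization_error(w, teacher)
    print(p, errors[p])
```

With continuous weights the measured error decreases smoothly with p, consistent with the inverse-number-of-examples asymptotics the abstract describes; the discontinuous transition reported for discrete weights would require constraining w to a discrete set (e.g. ±1), which this sketch does not do.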

Original language: English
Pages (from-to): 1683-1686
Number of pages: 4
Journal: Physical Review Letters
Volume: 65
Issue number: 13
DOIs
State: Published - 1990
Externally published: Yes
