Learning from examples in a single-layer neural network

Research output: Contribution to journal › Article › peer-review

47 Scopus citations

Abstract

Learning from examples to classify inputs according to their Hamming distance from a set of prototypes, in a single-layer network, is studied analytically. Using a statistical mechanical analysis, we calculate the average error, E, made by the system in classifying novel inputs, as a function of the number of learnt examples. The importance of introducing errors in the learning of the examples is demonstrated. When the number, P, of learnt examples is large, E decreases as a power law in 1/P, reflecting the absence of a gap in the spectrum of E.
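The classification rule studied in the paper assigns an input to the class of its nearest prototype under the Hamming distance. A minimal sketch of that target rule (not the paper's single-layer network or its statistical mechanical analysis) might look like the following, with the prototypes and example input chosen purely for illustration:

```python
import numpy as np

def hamming_classify(x, prototypes):
    """Assign x to the prototype with the smallest Hamming distance.

    x: 1-D binary array; prototypes: 2-D array with one prototype per row.
    Illustrative sketch of the target rule only -- the paper studies a
    single-layer network trained on examples of this rule.
    """
    # Hamming distance = number of positions where the bits differ
    dists = np.sum(prototypes != x, axis=1)
    return int(np.argmin(dists))

# Usage: two 8-bit prototypes; classify a copy of prototype 0
# with a single flipped bit.
protos = np.array([[0, 0, 0, 0, 1, 1, 1, 1],
                   [1, 1, 1, 1, 0, 0, 0, 0]])
x = np.array([0, 1, 0, 0, 1, 1, 1, 1])
print(hamming_classify(x, protos))  # -> 0
```

The noisy input differs from prototype 0 in one position and from prototype 1 in seven, so it is assigned to class 0.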

Original language: English
Pages (from-to): 687-692
Number of pages: 6
Journal: Lettere Al Nuovo Cimento
Volume: 11
Issue number: 7
DOIs
State: Published - 1 Apr 1990
Externally published: Yes
