Learning and retrieval in attractor neural networks above saturation

M. Griniasty, H. Gutfreund

Research output: Contribution to journal › Article › peer-review

Abstract

Learning in the context of attractor neural networks means finding a synaptic matrix J for which a given set of configurations are fixed points of the network dynamics. This is achieved by a number of learning algorithms designed to satisfy certain constraints. The process can be formulated as gradient-descent dynamics towards the ground state of an energy function corresponding to a specific algorithm. We investigate neural networks in the range of parameters where the ground-state energy is positive; namely, where a synaptic matrix satisfying all the desired constraints cannot be found by the learning algorithm. In particular, we calculate the typical distribution functions of local stabilities obtained by a number of algorithms in this region. These functions are used to investigate the retrieval properties as reflected by the size of the basins of attraction. This is done analytically in sparsely connected networks and numerically in fully connected networks. The main conclusion of this paper is that the retrieval behaviour of attractor neural networks can be improved by learning above saturation.
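The objects in the abstract can be made concrete with a small sketch. The code below is an illustrative reconstruction, not the paper's specific algorithms: it computes the local stabilities Δ_i^μ = ξ_i^μ (Σ_j J_ij ξ_j^μ) / |J_i| for a set of stored patterns ξ^μ and runs a standard perceptron-type update that pushes constraints below a margin κ upward. The network size, loading level, margin, and learning rate are hypothetical choices, with the loading deliberately set above saturation so that not all constraints can be satisfied.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 120     # neurons and stored patterns; alpha = P/N chosen above saturation
kappa = 0.5         # desired stability margin (hypothetical value)
eta = 0.1           # learning rate (hypothetical value)
patterns = rng.choice([-1.0, 1.0], size=(P, N))   # random unbiased patterns xi^mu

# Synaptic matrix J without self-couplings, starting from a Hebbian initialisation.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def local_stabilities(J, patterns):
    """Delta[mu, i] = xi_i^mu * (J xi^mu)_i / |J_i|, the margin of each constraint."""
    fields = patterns @ J.T                  # (P, N): local field at each site for each pattern
    norms = np.linalg.norm(J, axis=1)        # row norms |J_i|
    return patterns * fields / norms

# Perceptron-type learning: Hebbian updates on constraints that violate the margin.
for _ in range(200):
    delta = local_stabilities(J, patterns)
    mask = delta < kappa                     # violated (i, mu) constraints
    if not mask.any():
        break                                # all constraints satisfied: below saturation
    for mu in range(P):
        viol = mask[mu]
        J[viol] += eta * np.outer(patterns[mu, viol], patterns[mu]) / N
    np.fill_diagonal(J, 0.0)

delta = local_stabilities(J, patterns)
print(f"fraction of constraints still below kappa: {(delta < kappa).mean():.3f}")
```

Above saturation the loop exhausts its iterations with a finite fraction of constraints left below κ; the shape of the resulting distribution of Δ values is the kind of quantity the paper computes to predict the size of the basins of attraction.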

Original language: English
Pages (from-to): 715-734
Number of pages: 20
Journal: Journal of Physics A: Mathematical and General
Volume: 24
Issue number: 3
DOIs
State: Published - 7 Feb 1991
