TY - JOUR
T1 - Learning and retrieval in attractor neural networks above saturation
AU - Griniasty, M.
AU - Gutfreund, H.
PY - 1991/2/7
Y1 - 1991/2/7
N2 - Learning in the context of attractor neural networks means finding a synaptic matrix J, for which a certain set of configurations are fixed points of the network dynamics. This is achieved by a number of learning algorithms designed to satisfy certain constraints. This process can be formulated as gradient descent dynamics to the ground state of an energy function, corresponding to a specific algorithm. We investigate neural networks in the range of parameters where the ground-state energy is positive; namely, where a synaptic matrix which satisfies all the desired constraints cannot be found by the learning algorithm. In particular, we calculate the typical distribution functions of local stabilities obtained for a number of algorithms in this region. These functions are used to investigate the retrieval properties as reflected by the size of the basins of attraction. This is done analytically in sparsely connected networks, and numerically in fully connected networks. The main conclusion of this paper is that the retrieval behaviour of attractor neural networks can be improved by learning above saturation.
AB - Learning in the context of attractor neural networks means finding a synaptic matrix J, for which a certain set of configurations are fixed points of the network dynamics. This is achieved by a number of learning algorithms designed to satisfy certain constraints. This process can be formulated as gradient descent dynamics to the ground state of an energy function, corresponding to a specific algorithm. We investigate neural networks in the range of parameters where the ground-state energy is positive; namely, where a synaptic matrix which satisfies all the desired constraints cannot be found by the learning algorithm. In particular, we calculate the typical distribution functions of local stabilities obtained for a number of algorithms in this region. These functions are used to investigate the retrieval properties as reflected by the size of the basins of attraction. This is done analytically in sparsely connected networks, and numerically in fully connected networks. The main conclusion of this paper is that the retrieval behaviour of attractor neural networks can be improved by learning above saturation.
UR - http://www.scopus.com/inward/record.url?scp=36149033282&partnerID=8YFLogxK
U2 - 10.1088/0305-4470/24/3/030
DO - 10.1088/0305-4470/24/3/030
M3 - Article
AN - SCOPUS:36149033282
SN - 0305-4470
VL - 24
SP - 715
EP - 734
JO - Journal of Physics A: Mathematical and General
JF - Journal of Physics A: Mathematical and General
IS - 3
ER -