An information maximization approach to overcomplete and recurrent representations

Oren Shriki, Haim Sompolinsky, Daniel D. Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections and the recurrent interactions yields simple learning rules for both sets of parameters. The conventional independent component analysis (ICA) learning algorithm is recovered as a special case with an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.
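The abstract notes that conventional ICA is recovered when the number of output units equals the number of inputs and there are no recurrent connections. Below is a minimal sketch of that special case only, using the Bell-Sejnowski infomax rule in its natural-gradient form with a logistic nonlinearity; the mixing matrix, source distribution, learning rate, and batch size are illustrative assumptions, and the paper's general learning rules for overcomplete and recurrent networks are not reproduced here.

```python
import numpy as np

# Sketch of the ICA special case: square unmixing, no recurrent weights.
# Entropy maximization of logistic outputs y = g(Wx) gives the
# natural-gradient update dW = lr * (I + (1 - 2y) u^T) W, with u = Wx.

rng = np.random.default_rng(0)

# Toy two-dimensional input example: mix two independent sources.
n_samples = 5000
sources = rng.laplace(size=(2, n_samples))      # super-Gaussian sources (assumed)
A = np.array([[1.0, 0.5], [0.5, 1.0]])          # hypothetical mixing matrix
X = A @ sources                                 # observed inputs

W = np.eye(2)                                   # feedforward (unmixing) weights
lr = 0.01
batch = 100

for epoch in range(50):
    for i in range(0, n_samples, batch):
        x = X[:, i:i + batch]
        u = W @ x                               # linear network outputs
        y = 1.0 / (1.0 + np.exp(-u))            # logistic nonlinearity
        # Natural-gradient entropy-maximization step, averaged over the batch.
        dW = lr * (np.eye(2) + (1.0 - 2.0 * y) @ u.T / batch) @ W
        W += dW

# If unmixing succeeds, W @ A is close to a scaled permutation matrix.
print(W @ A)
```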

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 13 - Proceedings of the 2000 Conference, NIPS 2000
Publisher: Neural Information Processing Systems Foundation
ISBN (Print): 0262122413, 9780262122412
State: Published - 2001
Event: 14th Annual Neural Information Processing Systems Conference, NIPS 2000 - Denver, CO, United States
Duration: 27 Nov 2000 - 2 Dec 2000

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Conference

Conference: 14th Annual Neural Information Processing Systems Conference, NIPS 2000
Country/Territory: United States
City: Denver, CO
Period: 27/11/00 - 2/12/00

