Learning a continuous hidden variable model for binary data

Daniel D. Lee, Haim Sompolinsky

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A directed generative model for binary data using a small number of hidden continuous units is investigated. A clipping nonlinearity distinguishes the model from conventional principal components analysis. The relationships between the correlations of the underlying continuous Gaussian variables and the binary output variables are utilized to learn the appropriate weights of the network. The advantages of this approach are illustrated on a translationally invariant binary distribution and on handwritten digit images.
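The key step the abstract refers to is the relation between the correlations of clipped Gaussian variables and the correlations of the underlying continuous variables: for zero-mean, unit-variance Gaussians with correlation rho, the binary outputs s = sign(x) satisfy <s_i s_j> = (2/pi) arcsin(rho_ij). The sketch below (plain NumPy; the function name, variable names, and the exact fitting procedure are illustrative assumptions, not the authors' code) shows one way this idea could be used: invert the arcsine relation on the observed binary correlations, then extract a small number of components from the de-clipped correlation matrix.

import numpy as np

def fit_clipped_gaussian_model(S, n_hidden):
    """Estimate low-rank weights W (n_visible x n_hidden) from binary data S in {-1,+1}.

    S : array of shape (n_samples, n_visible)
    """
    # Empirical binary correlation matrix <s_i s_j>
    C_binary = S.T @ S / S.shape[0]

    # Invert the arcsine (clipping) relation to recover the correlations
    # of the underlying continuous Gaussian variables.
    C_gauss = np.sin(0.5 * np.pi * np.clip(C_binary, -1.0, 1.0))
    np.fill_diagonal(C_gauss, 1.0)

    # Keep only the top n_hidden components: approximate the Gaussian
    # correlations by a low-rank factorization via truncated
    # eigendecomposition (a PCA-like step on the de-clipped correlations).
    eigvals, eigvecs = np.linalg.eigh(C_gauss)
    order = np.argsort(eigvals)[::-1][:n_hidden]
    W = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))
    return W

# Toy usage: sample binary data from a random low-rank clipped-Gaussian
# model and refit it.
rng = np.random.default_rng(0)
n_visible, n_hidden, n_samples = 20, 3, 5000
W_true = rng.normal(size=(n_visible, n_hidden)) / np.sqrt(n_hidden)
H = rng.normal(size=(n_samples, n_hidden))           # continuous hidden units
noise = 0.1 * rng.normal(size=(n_samples, n_visible))
S = np.sign(H @ W_true.T + noise)                    # clipping nonlinearity
W_est = fit_clipped_gaussian_model(S, n_hidden)

The arcsine inversion before the eigendecomposition is what separates this kind of procedure from applying principal components analysis directly to the binary data, which is the distinction the abstract draws.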

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 11 - Proceedings of the 1998 Conference, NIPS 1998
Publisher: Neural Information Processing Systems Foundation
Pages: 515-521
Number of pages: 7
ISBN (Print): 0262112450, 9780262112451
State: Published - 1999
Event: 12th Annual Conference on Neural Information Processing Systems, NIPS 1998 - Denver, CO, United States
Duration: 30 Nov 1998 - 5 Dec 1998

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Conference

Conference: 12th Annual Conference on Neural Information Processing Systems, NIPS 1998
Country/Territory: United States
City: Denver, CO
Period: 30/11/98 - 5/12/98
