"Ideal parent" structure learning for continuous variable Bayesian networks

Gal Elidan*, Iftach Nachman, Nir Friedman

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

Bayesian networks in general, and continuous variable networks in particular, have become increasingly popular in recent years, largely due to advances in methods that facilitate automatic learning from data. Yet, despite these advances, the key task of learning the structure of such models remains a computationally intensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks in the presence of missing values or hidden variables, a scenario that is part of many real-life problems. In this work we present a general method for speeding up structure search for continuous variable networks with common parametric distributions. We efficiently evaluate the approximate merit of candidate structure modifications and apply time-consuming (exact) computations only to the most promising ones, thereby achieving a significant improvement in the running time of the search algorithm. Our method also naturally and efficiently facilitates the addition of useful new hidden variables into the network structure, a task that is typically considered both conceptually difficult and computationally prohibitive. We demonstrate our method on synthetic and real-life data sets, both for learning structure from fully and partially observable data, and for introducing new hidden variables during structure search.
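The core speed-up described in the abstract is a filter-then-verify step: rank all candidate structure modifications by a cheap approximate score, then run the expensive exact evaluation only on the top candidates. The sketch below illustrates that generic pattern only; the function names and toy scores are illustrative and do not implement the paper's actual "ideal parent" heuristic.

```python
import heapq

def prune_then_score(candidates, approx_score, exact_score, k):
    """Generic 'screen cheaply, verify exactly' search step.

    Rank all candidate structure modifications by a cheap approximate
    score, then run the expensive exact score only on the k most
    promising ones and return the best of those.
    """
    shortlist = heapq.nlargest(k, candidates, key=approx_score)
    return max(shortlist, key=exact_score)

# Toy illustration: candidates are (name, approx, exact) triples,
# standing in for edge modifications with surrogate and true scores.
candidates = [
    ("add A->B", 0.9, 1.2),
    ("add A->C", 0.5, 0.4),
    ("del B->C", 0.8, 1.0),
    ("rev C->A", 0.1, 2.0),  # a good move the cheap proxy misses
]
best = prune_then_score(
    candidates,
    approx_score=lambda c: c[1],
    exact_score=lambda c: c[2],
    k=2,
)
```

Here only 2 of the 4 candidates are scored exactly, so `best` is `("add A->B", 0.9, 1.2)`; the last candidate shows the trade-off: an aggressive filter can discard a move the proxy underrates, which is why the choice of approximation matters.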

Original language: English
Pages (from-to): 1799-1833
Number of pages: 35
Journal: Journal of Machine Learning Research
Volume: 8
State: Published - Aug 2007

Keywords

  • Bayesian networks
  • Continuous variables
  • Hidden variables
  • Structure learning
