Distributed learning of Gaussian graphical models via marginal likelihoods

Zhaoshi Meng, Dennis Wei, Ami Wiesel, Alfred O. Hero

Research output: Contribution to journal › Conference article › peer-review



We consider distributed estimation of the inverse covariance matrix, also called the concentration matrix, in Gaussian graphical models. Traditional centralized estimation often requires iterative and expensive global inference and is therefore difficult in large distributed networks. In this paper, we propose a general framework for distributed estimation based on a maximum marginal likelihood (MML) approach. Each node independently computes a local estimate by maximizing a marginal likelihood defined with respect to data collected from its local neighborhood. Due to the non-convexity of the MML problem, we derive and consider solving a convex relaxation. The local estimates are then combined into a global estimate without the need for iterative message-passing between neighborhoods. We prove that this relaxed MML estimator is asymptotically consistent. Through numerical experiments on several synthetic and real-world data sets, we demonstrate that the two-hop version of the proposed estimator is significantly better than the one-hop version, and nearly closes the gap to the centralized maximum likelihood estimator in many situations.
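The local-neighborhood idea in the abstract can be illustrated with a minimal numpy sketch: each node inverts the sample covariance restricted to its hop-neighborhood (a local marginal MLE), keeps its own row of that inverse, and the rows are assembled and symmetrized into a global estimate. The function name, neighborhood construction, and row-combination rule below are illustrative assumptions, not the paper's relaxed MML estimator.

```python
import numpy as np

def local_inverse_cov_estimate(X, adj, hops=1):
    """Illustrative sketch (not the authors' exact method): each node i
    inverts the sample covariance of its hop-neighborhood and keeps its
    own row; the rows are assembled and symmetrized."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)                   # p x p sample covariance
    A = adj.astype(int)
    reach = (np.eye(p, dtype=int) + A) > 0        # node itself plus one hop
    for _ in range(hops - 1):                     # grow to multi-hop neighborhoods
        reach = (reach.astype(int) @ A + reach.astype(int)) > 0
    K = np.zeros((p, p))
    for i in range(p):
        nb = np.flatnonzero(reach[i])             # indices in node i's neighborhood
        local = np.linalg.inv(S[np.ix_(nb, nb)])  # MLE on the local marginal
        K[i, nb] = local[list(nb).index(i)]       # keep node i's own row
    return 0.5 * (K + K.T)                        # combine by symmetrizing
```

Passing `hops=2` gives the two-hop analogue of this sketch, each node drawing on a larger marginal at higher local cost.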

Original language: English
Pages (from-to): 39-47
Number of pages: 9
Journal: Proceedings of Machine Learning Research
State: Published - 2013
Event: 16th International Conference on Artificial Intelligence and Statistics, AISTATS 2013 - Scottsdale, United States
Duration: 29 Apr 2013 - 1 May 2013

Bibliographical note

Publisher Copyright:
Copyright 2013 by the authors.


