We consider distributed estimation of the inverse covariance matrix, also called the concentration matrix, in Gaussian graphical models. Traditional centralized estimation often requires iterative and expensive global inference and is therefore difficult in large distributed networks. In this paper, we propose a general framework for distributed estimation based on a maximum marginal likelihood (MML) approach. Each node independently computes a local estimate by maximizing a marginal likelihood defined with respect to data collected from its local neighborhood. Due to the non-convexity of the MML problem, we derive and consider solving a convex relaxation. The local estimates are then combined into a global estimate without the need for iterative message-passing between neighborhoods. We prove that this relaxed MML estimator is asymptotically consistent. Through numerical experiments on several synthetic and real-world data sets, we demonstrate that the two-hop version of the proposed estimator is significantly better than the one-hop version, and nearly closes the gap to the centralized maximum likelihood estimator in many situations.
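The local-neighborhood idea can be illustrated with a minimal sketch (this is an assumption-laden simplification, not the paper's relaxed MML estimator): on a chain-structured Gaussian graphical model, each node inverts the sample covariance restricted to its one-hop neighborhood and keeps only its own row of the result, and the rows are then combined into a global concentration estimate with no message-passing between neighborhoods.

```python
import numpy as np

# Hedged sketch of one-hop local estimation on a hypothetical 5-node chain
# graph; the paper's actual estimator solves a convex relaxation of an MML
# problem, which this toy example does not implement.
rng = np.random.default_rng(0)

# Ground-truth concentration (inverse covariance) matrix for a chain graph.
p = 5
K = 2.0 * np.eye(p)
for i in range(p - 1):
    K[i, i + 1] = K[i + 1, i] = 0.6
Sigma = np.linalg.inv(K)

# Draw i.i.d. samples from N(0, Sigma) and form the sample covariance.
n = 20000
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X / n

# One-hop neighborhood of node i: the node plus its graph neighbors.
neighborhoods = [sorted({i} | ({i - 1} if i > 0 else set())
                        | ({i + 1} if i < p - 1 else set()))
                 for i in range(p)]

K_hat = np.zeros((p, p))
for i, nb in enumerate(neighborhoods):
    S_local = S[np.ix_(nb, nb)]       # covariance restricted to neighborhood
    K_local = np.linalg.inv(S_local)  # local concentration estimate
    K_hat[i, nb] = K_local[nb.index(i)]  # node i keeps only its own row

K_hat = (K_hat + K_hat.T) / 2  # symmetrize the combined global estimate
err = np.linalg.norm(K_hat - K, "fro") / np.linalg.norm(K, "fro")
print(round(err, 3))
```

For a node whose neighbors all lie inside its one-hop neighborhood, its row of the inverse local covariance matches its row of the full concentration matrix in population, which is why this crude local-inversion scheme already yields a small error here; the paper's two-hop variant enlarges each neighborhood to tighten the remaining gap to the centralized maximum likelihood estimator.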
|Original language||American English|
|Number of pages||9|
|Journal||Journal of Machine Learning Research|
|State||Published - 2013|
|Event||16th International Conference on Artificial Intelligence and Statistics, AISTATS 2013 - Scottsdale, United States|
Duration: 29 Apr 2013 → 1 May 2013
Bibliographical note: Publisher Copyright: Copyright 2013 by the authors.