## Abstract

In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix X_0 from n < MN measurements y_i = Tr(A_i X_0), i = 1, …, n, where each A_i is an M-by-N measurement matrix with i.i.d. random entries. We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object X_0 to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the (r, n) plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers.

| Original language | American English |
| --- | --- |
| Pages (from-to) | 7200-7205 |
| Number of pages | 6 |
| Journal | Proceedings of the National Academy of Sciences of the United States of America |
| Volume | 115 |
| Issue number | 28 |
| DOIs | |
| State | Published - 10 Jul 2018 |

### Bibliographical note

Publisher Copyright: © 2018 National Academy of Sciences. All rights reserved.

## Keywords

- Approximate message passing
- Compressed sensing
- Matrix completion
- Nuclear norm minimization
- Singular-value shrinkage