Finite sample performance of linear least squares estimation

Michael Krikheli*, Amir Leshem

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Linear least squares is a well-known technique for parameter estimation that is used even when it is sub-optimal, because of its very low computational requirements and because exact knowledge of the noise statistics is not required. Surprisingly, bounding the probability of large errors with finitely many samples has remained open, especially when dealing with correlated noise with unknown covariance. In this paper we analyze the finite sample performance of the linear least squares estimator and obtain accurate bounds on the tail of the estimator's distribution. We show fast exponential concentration and provide explicit formulas for the number of samples required to ensure a given accuracy with high probability. We analyze a sub-Gaussian setting with either a fixed or a random design matrix for the linear least squares problem, and extend the results to the case of a martingale difference noise sequence. Our analysis method is simple and uses L∞-type bounds on the estimation error. We demonstrate that our results are tighter than previously proposed bounds. The proposed bounds make it possible to predict the number of samples required for least squares estimation even when least squares is sub-optimal and is used only for its computational simplicity.
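The quantity the abstract discusses, the probability that the least squares estimate deviates from the true parameter by more than a given accuracy, can be estimated empirically. The sketch below (a minimal illustration, not the paper's method; all dimensions, the noise level, and the threshold `eps` are illustrative assumptions) draws a random Gaussian design matrix, solves the least squares problem, and counts how often the L∞ estimation error exceeds the threshold:

```python
import numpy as np

# Illustrative sketch: empirically estimate the tail probability
# P(||b_hat - b||_inf > eps) for the ordinary least squares estimator,
# the quantity that finite-sample bounds of this kind control.
# Dimensions, noise level, and eps are arbitrary illustrative choices.

rng = np.random.default_rng(0)
n, k = 200, 5            # number of samples, number of parameters
b = rng.normal(size=k)   # true parameter vector
eps = 0.25               # accuracy threshold
trials = 2000

exceed = 0
for _ in range(trials):
    X = rng.normal(size=(n, k))      # random (Gaussian) design matrix
    y = X @ b + rng.normal(size=n)   # sub-Gaussian (here Gaussian) noise
    b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    if np.max(np.abs(b_hat - b)) > eps:  # L_inf estimation error
        exceed += 1

tail_prob = exceed / trials
print(f"empirical P(error > {eps}) = {tail_prob:.4f}")
```

Increasing `n` while holding `eps` fixed drives the empirical tail probability toward zero rapidly, which is the concentration behavior the paper quantifies with explicit non-asymptotic bounds.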

Original language: American English
Pages (from-to): 7955-7991
Number of pages: 37
Journal: Journal of the Franklin Institute
Volume: 358
Issue number: 15
DOIs
State: Published - Oct 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 The Franklin Institute
