Deterministic Finite-Memory Bias Estimation

Tomer Berg, Or Ordentlich, Ofer Shayevitz

Research output: Contribution to journal › Conference article › peer-review


Abstract

In this paper we consider the problem of estimating a Bernoulli parameter using finite memory. Let X1, X2, . . . be a sequence of independent identically distributed Bernoulli random variables with expectation θ, where θ ∈ [0, 1]. Consider a finite-memory deterministic machine with S states that updates its state Mn ∈ {1, 2, . . ., S} at each time according to the rule Mn = f(Mn-1, Xn), where f is a deterministic time-invariant function. Assume that the machine outputs an estimate at each time point according to some fixed mapping from the state space to the unit interval. The quality of the estimation procedure is measured by the asymptotic risk, which is the long-term average of the instantaneous quadratic risk. The main contribution of this paper is an upper bound on the smallest worst-case asymptotic risk any such machine can attain. This bound coincides with a lower bound derived by Leighton and Rivest, implying that Θ(1/S) is the minimax asymptotic risk for deterministic S-state machines. In particular, our result disproves a longstanding Θ(log S/S) conjecture for this quantity, also posed by Leighton and Rivest.
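To make the setting concrete, here is a minimal sketch of the model from the abstract: a deterministic, time-invariant S-state machine driven by Bernoulli(θ) samples, with a fixed mapping from states to estimates, whose quality is the long-run average quadratic risk. The saturating-counter rule used here is an illustrative choice, not the construction analyzed in the paper.

```python
import random

def make_counter_machine(S):
    """An illustrative S-state machine: a saturating counter.
    (Not the authors' construction; just one deterministic f and estimate map.)"""
    def f(m, x):
        # Deterministic, time-invariant update rule M_n = f(M_{n-1}, X_n),
        # with states in {1, ..., S}.
        return min(m + 1, S) if x == 1 else max(m - 1, 1)

    def estimate(m):
        # Fixed mapping from the state space to the unit interval.
        return (m - 1) / (S - 1)

    return f, estimate

def average_risk(theta, S, n_steps=200_000, seed=0):
    """Empirical long-term average of the instantaneous quadratic risk
    (estimate - theta)^2 along one trajectory of the machine."""
    rng = random.Random(seed)
    f, estimate = make_counter_machine(S)
    m, total = 1, 0.0
    for _ in range(n_steps):
        x = 1 if rng.random() < theta else 0  # X_n ~ Bernoulli(theta)
        m = f(m, x)
        total += (estimate(m) - theta) ** 2
    return total / n_steps
```

The paper's result concerns the worst case over θ: the best achievable worst-case asymptotic risk scales as Θ(1/S). A naive counter like the one above does not achieve this rate for all θ; it merely illustrates the state-update and risk-evaluation mechanics.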

Original language: English
Pages (from-to): 566-585
Number of pages: 20
Journal: Proceedings of Machine Learning Research
Volume: 134
State: Published - 2021
Event: 34th Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: 15 Aug 2021 – 19 Aug 2021

Bibliographical note

Funding Information:
This work was supported by the ISF under Grants 1791/17 and 1495/18.

Publisher Copyright:
© 2021 T. Berg, O. Ordentlich & O. Shayevitz.

Keywords

  • Learning with Memory Constraints
  • Minimax Estimation
  • Parametric Estimation
