Maximum entropy error bound for Monte Carlo sampling

F. Remacle, R. D. Levine*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The probability of error in a Monte Carlo integration is usually taken to decrease inversely with the number n of sampling points used. It is shown empirically that the probability of error is actually exponentially small in the number of points, and a bound on this error is derived by analytical considerations. The analytical bound is very tight. The derivation is based on the maximum entropy formalism, which shows that the optimal sampling distribution is one of maximal entropy. The theoretical error bound is of the form exp(-nΔS), with the magnitude ΔS of the exponent being determined by a relevant entropy. ΔS does depend on the variance of the function being sampled. This bound is valid whether the Monte Carlo sampling is over a uniform distribution or is weighted. Explicit computational examples, which demonstrate that the empirical probability of error does decline exponentially with n and that the rate of decline is tightly bounded by ΔS, are provided.
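The exponential decline described in the abstract can be probed numerically. The following Python sketch (not taken from the paper; the integrand, tolerance, and function names are illustrative) estimates the empirical probability that an n-point uniform Monte Carlo estimate misses the exact integral by more than a tolerance eps, for several values of n.

```python
import random

def error_probability(f, true_value, n, eps, trials=2000, seed=0):
    """Fraction of Monte Carlo runs whose n-point estimate of the
    integral of f over [0, 1] deviates from true_value by more than eps."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        estimate = sum(f(rng.random()) for _ in range(n)) / n
        if abs(estimate - true_value) > eps:
            failures += 1
    return failures / trials

# Test integrand: f(x) = x**2 on [0, 1]; exact integral is 1/3.
f = lambda x: x * x
probs = {n: error_probability(f, 1.0 / 3.0, n, eps=0.02) for n in (50, 200, 800)}
```

Plotting log(probs[n]) against n for such an experiment should reveal the roughly linear decline whose slope the paper bounds by ΔS; the rate depends on the variance of the integrand, as the abstract notes.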

Original language: English
Pages (from-to): 303-317
Number of pages: 15
Journal: Open Systems and Information Dynamics
Volume: 5
Issue number: 4
DOIs
State: Published - Dec 1998

