Abstract
The probability of error in Monte Carlo integration is usually taken to decrease inversely with the number n of sampling points used. It is shown empirically that the probability of error actually decreases exponentially with the number of points, and a bound on this error is derived analytically. The analytical bound is very tight. The derivation is based on the maximum-entropy formalism, which shows that the optimal sampling distribution is one of maximal entropy. The theoretical error bound is of the form exp(−nΔS), with the magnitude ΔS of the exponent determined by a relevant entropy; ΔS depends on the variance of the function being sampled. The bound is valid whether the Monte Carlo sampling is over a uniform distribution or is weighted. Explicit computational examples are provided, demonstrating that the empirical probability of error does decline exponentially with n and that the rate of decline is tightly bounded by ΔS.
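The exponential decline described in the abstract can be observed numerically with a plain Monte Carlo experiment. The sketch below (an illustration, not the paper's ΔS construction; the integrand e^x and the tolerance eps are arbitrary choices) estimates the empirical probability that an n-point estimate of an integral over [0, 1] misses the true value by more than a fixed eps, for increasing n:

```python
import math
import random

def mc_estimate(f, n, rng):
    """Plain (uniform) Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def error_probability(f, true_value, n, eps, trials, rng):
    """Empirical probability that an n-point estimate misses true_value by more than eps."""
    misses = sum(
        abs(mc_estimate(f, n, rng) - true_value) > eps
        for _ in range(trials)
    )
    return misses / trials

if __name__ == "__main__":
    rng = random.Random(0)
    f = math.exp                    # integral of e^x over [0, 1] is e - 1
    true_value = math.e - 1.0
    eps = 0.1
    for n in (25, 50, 100, 200):
        p = error_probability(f, true_value, n, eps, trials=2000, rng=rng)
        # If the decline is exponential, log(p)/n should be roughly constant.
        print(n, p)
```

Plotting log p against n for the printed values gives an approximately straight line, whose slope plays the role of the rate constant that the paper bounds by ΔS.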
Original language | English |
---|---|
Pages (from-to) | 303-317 |
Number of pages | 15 |
Journal | Open Systems and Information Dynamics |
Volume | 5 |
Issue number | 4 |
DOIs | |
State | Published - Dec 1998 |