Abstract
Policy iteration methods are important but often computationally expensive approaches for solving certain stochastic optimization problems. Modified policy iteration methods have been proposed to reduce the storage and computational burden. The asymptotic speed of convergence of such methods is, however, not well understood. In this paper we show how modified policy iteration methods may be constructed to achieve a preassigned rate of convergence. Our analysis provides a framework for analyzing the local behavior of such methods and suggests the possibility of more computationally efficient procedures than those currently available.
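The paper does not specify an implementation, but the general idea of modified policy iteration can be illustrated as follows: instead of solving the policy-evaluation equations exactly at each iteration, the value of the current greedy policy is approximated with a fixed number of successive-approximation sweeps. Below is a minimal sketch on a hypothetical two-state, two-action discounted Markov decision process; the toy data, the sweep count `M`, and all function names are illustrative assumptions, not the authors' construction.

```python
# Sketch of modified policy iteration on a toy discounted MDP.
# The MDP data, M, and the tolerances are illustrative assumptions.

GAMMA = 0.9   # discount factor
M = 5         # partial-evaluation sweeps per iteration (the "modified" part)

# P[s][a] = list of (next_state, probability); R[s][a] = expected reward
P = {
    0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 1.0)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}
states, actions = [0, 1], [0, 1]

def q_value(V, s, a):
    """One-step lookahead value of taking action a in state s."""
    return R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])

def modified_policy_iteration(tol=1e-8, max_iter=1000):
    V = {s: 0.0 for s in states}
    for _ in range(max_iter):
        # Improvement step: greedy policy with respect to the current V.
        policy = {s: max(actions, key=lambda a: q_value(V, s, a))
                  for s in states}
        # Partial evaluation: only M sweeps, rather than solving the
        # policy-evaluation equations exactly (as full policy iteration would).
        for _ in range(M):
            V = {s: q_value(V, s, policy[s]) for s in states}
        # One Bellman-optimality sweep doubles as a convergence check.
        V_new = {s: max(q_value(V, s, a) for a in actions) for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return policy, V_new
        V = V_new
    return policy, V

policy, V = modified_policy_iteration()
print(policy, V)
```

Taking `M = 0` recovers value iteration, while letting the inner loop run to convergence recovers standard policy iteration; the paper's concern is how choices in between affect the asymptotic rate of convergence.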
Original language | English |
---|---|
Pages (from-to) | 243-246 |
Number of pages | 4 |
Journal | Operations Research Letters |
Volume | 3 |
Issue number | 5 |
DOIs | |
State | Published - Dec 1984 |
Keywords
- dynamic programming
- Markov chains