Truncated policy iteration methods

Ron S. Dembo*, Moshe Haviv

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Policy iteration methods are important but often computationally expensive approaches for solving certain stochastic optimization problems. Modified policy iteration methods have been proposed to reduce the storage and computational burden. The asymptotic speed of convergence of such methods is, however, not well understood. In this paper we show how modified policy iteration methods may be constructed to achieve a preassigned rate of convergence. Our analysis provides a framework for studying the local behavior of such methods and suggests procedures that may be more computationally efficient than those currently in use.
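The modified (truncated) policy iteration idea the abstract refers to can be illustrated with a short sketch: instead of solving the policy evaluation equations exactly at each iteration, apply only a fixed number m of evaluation sweeps before the next policy improvement step. The toy MDP data, the function name, and the parameter m below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of modified (truncated) policy iteration for a small
# discounted MDP. The MDP below is a made-up two-state, two-action example;
# `m` is the number of evaluation sweeps applied between improvement steps
# (m = 0 recovers value iteration; m -> infinity recovers policy iteration).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # transition matrix for action 0
     np.array([[0.1, 0.9], [0.7, 0.3]])]   # transition matrix for action 1
R = [np.array([1.0, 0.0]),                 # rewards for action 0
     np.array([0.0, 2.0])]                 # rewards for action 1
gamma = 0.9                                # discount factor

def modified_policy_iteration(P, R, gamma, m=5, tol=1e-8, max_iter=1000):
    n = P[0].shape[0]
    V = np.zeros(n)
    for _ in range(max_iter):
        # Policy improvement: act greedily with respect to the current V.
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        policy = Q.argmax(axis=0)
        V_new = Q.max(axis=0)
        # Truncated evaluation: m sweeps of the fixed-policy operator,
        # instead of solving (I - gamma * P_pi) V = R_pi exactly.
        for _ in range(m):
            V_new = np.array([R[policy[s]][s] + gamma * P[policy[s]][s] @ V_new
                              for s in range(n)])
        if np.max(np.abs(V_new - V)) < tol:
            return policy, V_new
        V = V_new
    return policy, V

policy, V = modified_policy_iteration(P, R, gamma, m=5)
```

Larger m costs more per iteration but yields faster convergence per iteration; the paper's contribution concerns how such truncation schedules determine the asymptotic rate of convergence.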

Original language: English
Pages (from-to): 243-246
Number of pages: 4
Journal: Operations Research Letters
Volume: 3
Issue number: 5
DOIs
State: Published - Dec 1984

Keywords

  • dynamic programming
  • Markov chains
