From Markov Chains to Stochastic Games

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Markov chains and Markov decision processes (MDPs) are special cases of stochastic games. Markov chains describe the dynamics of the states of a stochastic game where each player has a single action in each state. Similarly, the dynamics of the states of a stochastic game form a Markov chain whenever the players' strategies are stationary. Markov decision processes are stochastic games with a single player. In addition, the decision problem faced by a player in a stochastic game when all other players choose a fixed profile of stationary strategies is equivalent to an MDP.
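The two reductions described above can be sketched numerically. In the snippet below, a small two-player, two-state stochastic game is used to show (a) that stationary strategies induce a Markov chain on the states, and (b) that fixing one player's stationary strategy leaves the other player facing an MDP. All numbers and variable names are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Hypothetical stochastic game with 2 states and 2 actions per player.
# Transition kernel indexed as P[state, action1, action2, next_state];
# every slice P[s, a, b, :] is a probability distribution over states.
P = np.array([
    # state 0
    [[[0.9, 0.1], [0.2, 0.8]],
     [[0.5, 0.5], [0.3, 0.7]]],
    # state 1
    [[[0.6, 0.4], [0.1, 0.9]],
     [[0.8, 0.2], [0.4, 0.6]]],
])

# Stationary strategies: each player fixes, for every state, a
# distribution over its actions (x for player 1, y for player 2).
x = np.array([[0.5, 0.5], [1.0, 0.0]])  # player 1's mix per state
y = np.array([[0.2, 0.8], [0.3, 0.7]])  # player 2's mix per state

# (a) Under stationary strategies the state process is a Markov chain:
#     Q[s, s'] = sum_{a,b} x[s, a] * y[s, b] * P[s, a, b, s'].
Q = np.einsum('sa,sb,sabt->st', x, y, P)
print(Q)              # induced Markov chain transition matrix
print(Q.sum(axis=1))  # each row sums to 1

# (b) If player 2 fixes y, player 1 faces an MDP whose transitions
#     are obtained by averaging out player 2's actions:
#     M[s, a, s'] = sum_b y[s, b] * P[s, a, b, s'].
M = np.einsum('sb,sabt->sat', y, P)
print(M.sum(axis=2))  # each (state, action) row sums to 1
```

The same averaging applied to the stage payoffs yields the reward function of the induced MDP; the chapter's equivalence statements are exactly these constructions in general form.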
Original language: English
Title of host publication: Stochastic Games and Applications
Editors: Abraham Neyman, Sylvain Sorin
Place of Publication: Dordrecht
Publisher: Springer
Pages: 9-25
Number of pages: 17
ISBN (Print): 978-94-010-0189-2
State: Published - 2003

Publication series

Name: NATO Science Series C: Mathematical and Physical Sciences
Publisher: Kluwer
Volume: 570
ISSN (Print): 1389-2185
