Transition-Independent Decentralized Markov Decision Processes

  • Raphen Becker*
  • Shlomo Zilberstein
  • Victor Lesser
  • Claudia V. Goldman

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

81 Scopus citations

Abstract

There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together through a global reward function that depends upon both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to solve optimally a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
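The defining property of the class described in the abstract is that each agent's state transitions depend only on its own state and action, while the agents remain coupled through a global reward over their joint outcomes. A minimal illustrative sketch of that structure (not the paper's algorithm; the transition tables, states, and reward values below are invented for illustration):

```python
# Sketch of a two-agent transition-independent Dec-MDP. Each agent's
# next state depends only on its own state and action, so the joint
# transition factorizes into a product of local transitions; the only
# coupling between agents is through the global reward.

import itertools

# Hypothetical local transition models: P_i[(s, a)] maps s' -> probability.
P1 = {(0, 'go'): {1: 0.8, 0: 0.2}, (0, 'stay'): {0: 1.0}}
P2 = {(0, 'go'): {1: 0.6, 0: 0.4}, (0, 'stay'): {0: 1.0}}

def joint_transition(s, a, s_next):
    """Transition independence: Pr(s' | s, a) = P1(s1'|s1,a1) * P2(s2'|s2,a2)."""
    (s1, s2), (a1, a2), (s1n, s2n) = s, a, s_next
    return P1[(s1, a1)].get(s1n, 0.0) * P2[(s2, a2)].get(s2n, 0.0)

def joint_reward(s_next):
    """Sum of local rewards plus a global bonus when both agents succeed."""
    s1n, s2n = s_next
    local = (1.0 if s1n == 1 else 0.0) + (1.0 if s2n == 1 else 0.0)
    bonus = 3.0 if (s1n == 1 and s2n == 1) else 0.0  # the coupling term
    return local + bonus

# Expected one-step reward of the joint action ('go', 'go') from (0, 0):
exp_r = sum(joint_transition((0, 0), ('go', 'go'), sn) * joint_reward(sn)
            for sn in itertools.product([0, 1], repeat=2))
```

Because the transitions factorize, each agent's policy can be evaluated against its own local MDP, with the global reward term accounting for the interaction — this is what makes the class tractable relative to general decentralized MDPs.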

Original language: English
Pages: 41-48
Number of pages: 8
DOIs
State: Published - 2003
Externally published: Yes
Event: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 03 - Melbourne, Vic., Australia
Duration: 14 Jul 2003 - 18 Jul 2003

Conference

Conference: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 03
Country/Territory: Australia
City: Melbourne, Vic.
Period: 14/07/03 - 18/07/03

Keywords

  • Decentralized MDP
  • Decision-theoretic planning
