Abstract
Agents that interact in a distributed environment might increase their utility by behaving optimally given the strategies of the other agents. To do so, agents need to learn about those with whom they share the same world. This paper examines interactions among agents from a game-theoretic perspective. In this context, learning has been assumed to be a means of reaching equilibrium. We analyze the complexity of this learning process. We start with a restricted two-agent model, in which agents are represented by finite automata and one of the agents plays a fixed strategy. We show that even under these restrictions, the learning process may require exponential time. We then suggest a criterion of simplicity that induces a class of automata which are learnable in polynomial time.
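The setting in the abstract can be made concrete with a small sketch (not taken from the paper itself; the game, payoff matrix, opponent automaton, and learning loop below are illustrative assumptions): the fixed opponent is modeled as a deterministic finite automaton over the actions of a repeated 2x2 game, and a naive learner explores its behavior by enumerating action sequences, an approach whose cost grows exponentially with the horizon, which is the kind of blow-up the complexity analysis is concerned with.

```python
# Illustrative sketch only: a fixed opponent strategy in a repeated 2x2 game,
# modeled as a deterministic finite automaton (Moore machine), and a naive
# learner that searches for a good response by brute-force exploration.
# The game, payoffs, and automaton below are assumptions, not from the paper.

from itertools import product

ACTIONS = ["C", "D"]                        # actions of the repeated game
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,     # learner's (row player's) payoff,
          ("D", "C"): 5, ("D", "D"): 1}     # Prisoner's Dilemma values

# Fixed opponent: Tit-for-Tat as a Moore machine.
# Each state carries the action the opponent plays in that state;
# the transition depends on the learner's last action.
OPPONENT = {
    "start":  ("C", {"C": "coop", "D": "punish"}),
    "coop":   ("C", {"C": "coop", "D": "punish"}),
    "punish": ("D", {"C": "coop", "D": "punish"}),
}

def play(machine, state, my_action):
    """One round: read the opponent's output, then follow its transition."""
    out, trans = machine[state]
    return out, trans[my_action]

def explore(machine, start, horizon):
    """Enumerate all learner action sequences up to `horizon` rounds and keep
    the best one. The search space has size |ACTIONS|**horizon, i.e. it is
    exponential in the horizon."""
    best_seq, best_total = None, float("-inf")
    for seq in product(ACTIONS, repeat=horizon):
        state, total = start, 0
        for a in seq:
            opp_action, state = play(machine, state, a)
            total += PAYOFF[(a, opp_action)]
        if total > best_total:
            best_seq, best_total = seq, total
    return best_seq, best_total

if __name__ == "__main__":
    seq, total = explore(OPPONENT, "start", horizon=6)
    print("best 6-round sequence vs. the fixed automaton:", seq, "payoff:", total)
```

The paper's positive result concerns a simplicity criterion on the opponent automata under which learning becomes polynomial; the brute-force search above is only meant to show where the exponential cost can come from.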
Original language | English |
---|---|
Title of host publication | Adaption and Learning in Multi-Agent Systems - IJCAI 1995 Workshop, Proceedings |
Editors | Gerhard Weiß, Sandip Sen |
Publisher | Springer Verlag |
Pages | 165-176 |
Number of pages | 12 |
ISBN (Print) | 9783540609230 |
State | Published - 1996 |
Event | Workshop on Adaptation and Learning in Multi-Agent Systems, 1995, held as part of the 14th International Joint Conference on Artificial Intelligence (IJCAI 1995) - Montreal, Canada. Duration: 21 Aug 1995 → 21 Aug 1995 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 1042 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | Workshop on Adaptation and Learning in Multi-Agent Systems, 1995, held as part of the 14th International Joint Conference on Artificial Intelligence (IJCAI 1995) |
---|---|
Country/Territory | Canada |
City | Montreal |
Period | 21/08/95 → 21/08/95 |
Bibliographical note
Publisher Copyright: © Springer-Verlag Berlin Heidelberg 1996.
Keywords
- Automata
- Distributed artificial intelligence
- Learning
- Repeated games