Learning in a multiagent environment can help agents improve their performance. Agents, in meeting with others, can learn about their partner's knowledge and strategic behavior. Agents that operate in dynamic environments can react to unexpected events by generalizing what they have learned during a training stage. In this paper, we propose several learning rules for agents in a multiagent environment. Each agent acts as the teacher of its partner. The agents are trained by receiving examples from a sample space; they then go through a generalization step during which they must apply the concept they have learned from their instructor. Agents that learn from each other can sometimes avoid repeatedly coordinating their actions from scratch for similar problems. They will sometimes be able to avoid communication at run-time by using learned coordination concepts.
|Original language||American English|
|Title of host publication||Adaption and Learning in Multi-Agent Systems - IJCAI 1995 Workshop, Proceedings|
|Editors||Gerhard Weiß, Sandip Sen|
|Number of pages||12|
|State||Published - 1996|
|Event||Workshop on Adaptation and Learning in Multi-Agent Systems, 1995 held as part of 14th International Joint Conference on Artificial Intelligence, IJCAI 1995 - Montreal, Canada|
Duration: 21 Aug 1995 → 21 Aug 1995
|Name||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Conference||Workshop on Adaptation and Learning in Multi-Agent Systems, 1995 held as part of 14th International Joint Conference on Artificial Intelligence, IJCAI 1995|
|Period||21/08/95 → 21/08/95|
Bibliographical note: Publisher Copyright:
© Springer-Verlag Berlin Heidelberg 1996.