Abstract
Researchers in Distributed Artificial Intelligence have suggested that it would be worthwhile to isolate 'aspects of cooperative behavior,' general rules that cause agents to act in ways conducive to cooperation. One such behavior occurs when agents independently alter the environment to make it easier for everyone to function effectively; for example, an agent might put away a hammer it finds lying on the floor, knowing that another agent will be able to find it more easily later on. We examine the effect a specific 'cooperation rule' has on agents in the multi-agent Tileworld domain: agents are encouraged to increase a tile's degrees of freedom, even when that tile is not involved in the agent's own primary plan. The amount of extra work an agent is willing to do is captured by the agent's cooperation level. Results from simulations are presented. We present a way of characterizing domains as multi-agent deterministic finite automata, and of characterizing cooperative rules as transformations of these automata. We also discuss general characteristics of cooperative state-changing rules. It is shown that a relatively simple, easily calculated rule can sometimes improve global system performance in the Tileworld. Coordination emerges among agents that use this rule of cooperation, without any explicit coordination or negotiation.
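The sketch below is a minimal illustration, not the paper's implementation, of how a cooperation-level decision of this kind might look in a grid world with pushable tiles and obstacles: the cooperation level bounds how many extra steps an agent will spend per degree of freedom it gains for a tile outside its own plan. All identifiers (`degrees_of_freedom`, `worth_freeing`, the grid and cost model) are illustrative assumptions.

```python
# Illustrative sketch only: an agent decides whether an off-plan push that
# frees a tile is worth the detour, given its cooperation level.
from dataclasses import dataclass

GRID = {(x, y) for x in range(5) for y in range(5)}   # hypothetical 5x5 Tileworld
OBSTACLES = {(1, 1), (2, 3)}                           # hypothetical obstacle cells

def degrees_of_freedom(tile_pos, tiles):
    """Count open cells adjacent to a tile, i.e. directions it can be pushed."""
    blocked = OBSTACLES | tiles
    x, y = tile_pos
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return sum(1 for n in neighbours if n in GRID and n not in blocked)

@dataclass
class Agent:
    # Max extra steps the agent will spend per degree of freedom gained.
    cooperation_level: float

    def worth_freeing(self, tile_pos, push_target, tiles, extra_steps):
        """Return True if pushing the tile to push_target is worth the extra steps."""
        before = degrees_of_freedom(tile_pos, tiles)
        after = degrees_of_freedom(push_target, (tiles - {tile_pos}) | {push_target})
        gain = after - before
        return gain > 0 and extra_steps <= self.cooperation_level * gain

# Usage: an agent with cooperation level 2 will spend up to 2 extra steps
# for each degree of freedom it adds to a tile outside its own plan.
agent = Agent(cooperation_level=2.0)
tiles = {(1, 2), (2, 2)}
print(agent.worth_freeing((1, 2), (0, 2), tiles, extra_steps=1))  # True
```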
Original language | English |
---|---|
Pages | 408-413 |
Number of pages | 6 |
State | Published - 1994 |
Event | Proceedings of the 12th National Conference on Artificial Intelligence. Part 1 (of 2) - Seattle, WA, USA |
Duration | 31 Jul 1994 → 4 Aug 1994 |
Conference
Conference | Proceedings of the 12th National Conference on Artificial Intelligence. Part 1 (of 2) |
---|---|
City | Seattle, WA, USA |
Period | 31/07/94 → 4/08/94 |