Abstract
In this article we analyze a particular model of control among intelligent agents, that of non-absolute control. Non-absolute control involves a "supervisor" agent that issues orders to a "subordinate" agent. An example might be a human agent on Earth directing the activities of a Mars-based semi-autonomous vehicle. Both agents operate with essentially the same goals. The subordinate agent, however, is assumed to have access to some information that the supervisor does not have. The subordinate is thus expected to exercise its judgment in following orders, i.e., to follow the true intent of the supervisor to the best of its ability. After presenting our model, we discuss the planning problem: how would a subordinate agent choose among alternative plans? Our solutions focus on evaluating the distance between candidate plans, and they are appropriate for any scenario in which one agent wants to follow (as much as possible) another agent's plan.
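The abstract does not specify how plan distance is measured, so the following is only a minimal illustrative sketch: it assumes plans can be represented as sequences of action labels and uses a simple edit (Levenshtein) distance to score how far each candidate plan deviates from the supervisor's ordered plan. The function and data names are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: assumes plans are sequences of action labels and
# that deviation is measured by edit distance. The paper's actual distance
# measure may differ.

def plan_distance(supervisor_plan: list[str], candidate_plan: list[str]) -> int:
    """Number of action insertions, deletions, or substitutions needed to
    turn the candidate plan into the supervisor's plan (hypothetical metric)."""
    m, n = len(supervisor_plan), len(candidate_plan)
    # dp[i][j] = distance between the first i supervisor actions
    # and the first j candidate actions
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if supervisor_plan[i - 1] == candidate_plan[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,       # supervisor action dropped
                dp[i][j - 1] + 1,       # extra candidate action
                dp[i - 1][j - 1] + cost  # match or substitution
            )
    return dp[m][n]


# A subordinate agent could then prefer the feasible plan closest to the order:
order = ["drive", "sample", "return"]
candidates = [
    ["drive", "sample", "return"],
    ["drive", "recharge", "sample", "return"],
]
best = min(candidates, key=lambda p: plan_distance(order, p))
```

Under these assumptions, the subordinate selects the feasible candidate with the smallest distance to the supervisor's plan, deviating only as far as its extra local information requires.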
Original language | English |
---|---|
Pages (from-to) | 219-235 |
Number of pages | 17 |
Journal | Group Decision and Negotiation |
Volume | 2 |
Issue number | 3 |
State | Published - Sep 1993 |
Keywords
- communication
- multiple agents
- non-absolute control
- plan deviation
- planning