Understanding the emerging behaviors of reinforcement learning (RL) agents can be difficult, since such agents are often trained in complex environments using highly complex decision-making procedures. This has given rise to a variety of approaches to explainability in RL that aim to reconcile discrepancies between the behavior of an agent and the behavior anticipated by an observer. Most recent approaches rely on domain knowledge, which may not always be available, on an analysis of the agent's policy, or on an analysis of specific elements of the underlying environment, typically modeled as a Markov Decision Process (MDP). Our key claim is that even if the underlying model is not fully known (e.g., the transition probabilities have not been accurately learned) or is not maintained by the agent (i.e., when using model-free methods), it can nevertheless be exploited to automatically generate explanations. For this purpose, we suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations. Since such transforms are typically based on a symbolic representation of the environment, they can provide meaningful explanations for gaps between the anticipated and the actual agent behavior. We formally define the explainability problem, suggest a class of transforms that can be used for explaining emergent behaviors, and propose methods that enable an efficient search for an explanation. We demonstrate the approach on a set of standard benchmarks.
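The abstract refers to formal MDP abstractions and transforms. As a rough illustration only (the paper's actual transforms are not reproduced here; the toy MDP, its states, and the abstraction map below are entirely hypothetical), one common kind of abstraction merges concrete states into abstract states while checking that the merged states behave identically:

```python
# Illustrative sketch, not the paper's method: a toy deterministic MDP
# and a simple state-abstraction transform that merges behaviorally
# equivalent states. All state and action names here are made up.

# transitions[state][action] -> next state
transitions = {
    "s0": {"a": "s1", "b": "s2"},
    "s1": {"a": "s3", "b": "s3"},
    "s2": {"a": "s3", "b": "s3"},
    "s3": {"a": "s3", "b": "s3"},  # absorbing state
}

# Candidate abstraction: merge s1 and s2, which lead to the same
# successor under every action.
abstract_of = {"s0": "S0", "s1": "S12", "s2": "S12", "s3": "S3"}

def abstract_transitions(transitions, abstract_of):
    """Lift concrete transitions to the abstract state space.

    Returns the abstract transition table, or None if two merged
    states disagree on where some action leads (i.e., the abstraction
    is not behavior-preserving)."""
    lifted = {}
    for s, acts in transitions.items():
        a_s = abstract_of[s]
        for act, s_next in acts.items():
            a_next = abstract_of[s_next]
            prev = lifted.setdefault(a_s, {}).setdefault(act, a_next)
            if prev != a_next:
                return None  # merged states disagree: abstraction invalid
    return lifted

lifted = abstract_transitions(transitions, abstract_of)
```

In this toy example the abstraction is valid, so `lifted` contains a smaller MDP over the abstract states `S0`, `S12`, and `S3`; an invalid merge (e.g., merging `s0` with `s1`) would be rejected with `None`.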
|Original language||American English|
|Title of host publication||Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022|
|Editors||S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh|
|Publisher||Neural Information Processing Systems Foundation|
|State||Published - 2022|
|Event||36th Conference on Neural Information Processing Systems, NeurIPS 2022 - New Orleans, United States|
Duration: 28 Nov 2022 → 9 Dec 2022
|Name||Advances in Neural Information Processing Systems|
|Conference||36th Conference on Neural Information Processing Systems, NeurIPS 2022|
|Period||28/11/22 → 9/12/22|
Bibliographical note (Funding Information):
This research has been partly funded by Israel Science Foundation grant #1340/18 and by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (grant agreement no. 740282).
© 2022 Neural Information Processing Systems Foundation. All rights reserved.