The misbehavior of reinforcement learning

Gianluigi Mongillo, Hanan Shteingart, Yonatan Loewenstein

Research output: Contribution to journal › Article › peer-review


Abstract

Organisms modify their behavior in response to its consequences, a phenomenon referred to as operant learning. The computational principles and neural mechanisms underlying operant learning are the subject of extensive experimental and theoretical investigation. Theoretical approaches largely rely on concepts and algorithms from reinforcement learning. The dominant view is that organisms maintain a value function, that is, a set of estimates of the cumulative future rewards associated with the different behavioral options. These values are then used to select actions. In this framework, learning results from updating these values based on the experienced consequences of past actions. An alternative view questions the applicability of such a computational scheme to many real-life situations. Instead, it posits that organisms exploit the intrinsic variability in their action-selection mechanism(s) to modify their behavior, e.g., via stochastic gradient ascent, without the need for an explicit representation of values. In this review, we compare these two approaches in terms of their computational power and flexibility, their putative neural correlates, and, finally, their ability to account for behavior as observed in repeated-choice experiments. We discuss the successes and failures of these alternative approaches in explaining the observed patterns of choice behavior. We conclude by identifying some of the important challenges to a comprehensive theory of operant learning.
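To make the contrast concrete, below is a minimal sketch, in Python, of the two learning schemes described in the abstract, applied to a repeated-choice (two-armed bandit) task. The bandit, parameter values, and update rules are illustrative assumptions added for this record, not the specific models analyzed in the paper: the first learner maintains explicit value estimates updated by a delta rule and mapped to choices via softmax, while the second adjusts its action-selection parameters directly by reward-weighted stochastic gradient ascent (REINFORCE-style), without storing values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-armed bandit: arm 0 pays off with probability 0.3, arm 1 with 0.7.
REWARD_PROBS = np.array([0.3, 0.7])

def pull(arm):
    """Return a stochastic binary reward for the chosen arm."""
    return float(rng.random() < REWARD_PROBS[arm])

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def value_based_learner(n_trials=1000, alpha=0.1, beta=3.0):
    """Value-based scheme: maintain reward estimates Q and choose via softmax(beta * Q)."""
    q = np.zeros(2)                      # explicit value estimates for the two actions
    for _ in range(n_trials):
        p = softmax(beta * q)            # values are translated into choice probabilities
        a = rng.choice(2, p=p)
        r = pull(a)
        q[a] += alpha * (r - q[a])       # delta-rule update of the chosen action's value
    return q

def direct_policy_learner(n_trials=1000, eta=0.2):
    """Value-free scheme: adjust action preferences directly by stochastic gradient ascent,
    exploiting the intrinsic variability of action selection (no values stored)."""
    theta = np.zeros(2)                  # policy parameters (action preferences)
    for _ in range(n_trials):
        p = softmax(theta)
        a = rng.choice(2, p=p)
        r = pull(a)
        grad = -p
        grad[a] += 1.0                   # d log pi(a) / d theta for a softmax policy
        theta += eta * r * grad          # reward-weighted stochastic gradient step
    return theta

if __name__ == "__main__":
    q = value_based_learner()
    theta = direct_policy_learner()
    print("value-based:   Q =", np.round(q, 2), " P(arm 1) =", np.round(softmax(3.0 * q)[1], 2))
    print("direct policy: theta =", np.round(theta, 2), " P(arm 1) =", np.round(softmax(theta)[1], 2))
```

In this toy setting both learners typically come to prefer the better arm, but only the first carries an explicit representation of expected reward; the second shapes its choice probabilities directly, which is the distinction the review examines.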

Original language: English
Article number: 6767062
Pages (from-to): 528-541
Number of pages: 14
Journal: Proceedings of the IEEE
Volume: 102
Issue number: 4
DOIs
State: Published - Apr 2014

Bibliographical note

Funding Information:
This research was supported, in part, by the Perinatology Research Branch, Division of Intramural Research, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Department of Health and Human Services.

Keywords

  • Computational intelligence
  • Markov decision process
  • decision making
  • gradient methods
  • learning (artificial intelligence)
  • learning systems
  • machine learning
  • neural networks
  • reinforcement learning
