Abstract
Adaptive learning models are used to predict behavior in repeated choice tasks. Predictions can be based on the player's previous payoffs or previous choices. The current paper proposes a new method for evaluating the degree of reliance on past choices, called equal payoff series extraction (EPSE). Under this method, a simulated player makes exactly the same choices as the actual player but receives equal constant payoffs from all of the alternatives. Success in predicting the simulated player's next choice therefore relies strictly on mimicry of the actual player's previous choices. This makes it possible to determine the marginal fit of predictions that are not based on the actual task payoffs. To evaluate reliance on past choices under different models, an experiment was conducted in which 48 participants completed a three-alternative choice task under four task conditions. Two learning rules were evaluated: an interference rule and a decay rule. The results showed that although the predictions of the decay rule relied more on past choices, only reliance on past payoffs was associated with improved parameter generality. Moreover, we show that the equal payoff series can be used as a criterion for optimizing parameters, resulting in better parameter generalizability.
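As a concrete illustration of the EPSE idea, the Python sketch below pairs a simple decay-rule learner with an equal-payoff replay of the same choice sequence. This is a minimal sketch under stated assumptions: the decay update, the proportional choice rule, the constant payoff value, and all function names are illustrative choices for this example, not the exact formulations of the decay and interference rules evaluated in the paper.

```python
import numpy as np

def decay_rule_predictions(choices, payoffs, d=0.9, n_alt=3):
    """One-step-ahead choice probabilities from a simple decay-rule learner.

    Hypothetical update rule for illustration (decay all propensities,
    then reinforce the chosen alternative with its payoff); the paper's
    exact learning rules may differ. Assumes nonnegative payoffs.
    """
    q = np.ones(n_alt)                 # initial propensities
    probs = np.empty((len(choices), n_alt))
    for t, (c, r) in enumerate(zip(choices, payoffs)):
        probs[t] = q / q.sum()         # proportional (matching) choice rule
        q *= d                         # decay every propensity
        q[c] += r                      # reinforce the chosen alternative
    return probs

def epse_marginal_fit(choices, payoffs, const_payoff=1.0, **model_kw):
    """Equal payoff series extraction (EPSE), as described in the abstract.

    Replay the player's own choice sequence, but feed the model a constant
    payoff that is identical for all alternatives. Predictive success on
    this equal-payoff series can come only from mimicking the player's
    past choices; the fit difference between the real and equal-payoff
    series is the marginal contribution of the actual task payoffs.
    """
    choices = np.asarray(choices)
    t = np.arange(len(choices))
    p_real = decay_rule_predictions(choices, payoffs, **model_kw)
    p_equal = decay_rule_predictions(
        choices, np.full(len(choices), const_payoff), **model_kw)
    ll_real = np.log(p_real[t, choices]).mean()    # fit on the real series
    ll_equal = np.log(p_equal[t, choices]).mean()  # fit from mimicry alone
    return ll_real, ll_equal, ll_real - ll_equal
```

For example, `epse_marginal_fit([0, 1, 1, 2, 1], [4, 7, 6, 2, 8], d=0.8)` returns the mean log-likelihood on the real payoff series, the mean log-likelihood on the equal-payoff series, and their difference; a small difference indicates that the model's predictive success rests mostly on choice mimicry rather than on the task payoffs.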
| Original language | English |
| --- | --- |
| Pages (from-to) | 75-84 |
| Number of pages | 10 |
| Journal | Journal of Mathematical Psychology |
| Volume | 51 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 2007 |
| Externally published | Yes |
Bibliographical note
Funding Information: This research was supported in part by the Israel Science Foundation (Grant no. 244/06) and by the Max Wertheimer Minerva Center for Cognitive Studies.
Keywords
- Cognitive models
- Model selection
- Reinforcement learning
- Validity