Abstract
This research examines decisions from experience in restless bandit problems. Two experiments revealed four main effects. (1) Risk neutrality: the typical participant did not learn to become risk averse, contradicting the hot stove effect. (2) Sensitivity to the transition probabilities that govern the Markov process. (3) Positive recency: the probability of repeating a risky choice was higher after a win than after a loss. (4) Inertia: the probability of repeating a risky choice after a loss was higher than the probability of a risky choice after a safe choice. These results can be described with a simple contingent sampler model, which assumes that choices are made based on small samples of past experiences contingent on the current state.
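The contingent sampler idea can be illustrated with a minimal simulation sketch of a restless two-armed bandit. Everything here beyond the core mechanism (the payoff values, the transition probability, the sample size, and the choice to define the "current state" by the outcome of the most recent risky draw) is an illustrative assumption, not the paper's exact parameterization.

```python
# Minimal sketch of a "contingent sampler" agent in a restless two-armed bandit.
# Payoffs, transition probability, sample size, and the state definition are
# illustrative assumptions, not the model's published specification.
import random

def run_restless_bandit(trials=400, p_transition=0.1, sample_size=2, seed=0):
    rng = random.Random(seed)

    # Hidden Markov state of the risky arm: "good" pays 4, "bad" pays -4;
    # the safe arm always pays 0 (assumed payoffs).
    risky_state = rng.choice(["good", "bad"])
    payoffs = {"good": 4, "bad": -4}

    # Memory of past experiences, keyed by the agent's observable state.
    # The observable state is assumed to be the sign of the last risky
    # outcome ("win", "loss", or "none" before any risky draw).
    memory = {}          # (observed_state, choice) -> list of payoffs
    observed_state = "none"
    choices = []

    for _ in range(trials):
        # Draw a small sample of past payoffs for each option, contingent on
        # the current observed state; options with no history in this state
        # get an optimistic value to force initial exploration.
        estimates = {}
        for option in ("safe", "risky"):
            past = memory.get((observed_state, option), [])
            if past:
                sample = [rng.choice(past) for _ in range(sample_size)]
                estimates[option] = sum(sample) / sample_size
            else:
                estimates[option] = float("inf")

        choice = max(estimates, key=estimates.get)

        # The risky arm's hidden state drifts over time (restless bandit).
        if rng.random() < p_transition:
            risky_state = "bad" if risky_state == "good" else "good"

        payoff = payoffs[risky_state] if choice == "risky" else 0
        memory.setdefault((observed_state, choice), []).append(payoff)

        if choice == "risky":
            observed_state = "win" if payoff > 0 else "loss"
        choices.append(choice)

    return choices

if __name__ == "__main__":
    picks = run_restless_bandit()
    print("risky-choice rate:", picks.count("risky") / len(picks))
```

Because the sampled experiences are contingent on the most recent outcome, a sketch like this naturally produces positive recency (risky choices repeat more often after wins than after losses) without requiring the agent to become globally risk averse.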
| Original language | English |
| --- | --- |
| Pages (from-to) | 155-167 |
| Number of pages | 13 |
| Journal | Journal of Mathematical Psychology |
| Volume | 53 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jun 2009 |
| Externally published | Yes |
Keywords
- Case-based reasoning
- Dynamic decision making
- Probability matching
- The recency/hot stove paradox
- Underweighting of rare events