The evolution of continuous learning of the structure of the environment

Oren Kolodny*, Shimon Edelman, Arnon Lotem

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



Continuous, 'always on', learning of structure from a stream of data is studied mainly in the fields of machine learning and language acquisition, but its evolutionary roots may go back to the first organisms that were internally motivated to learn and represent their environment. Here, we study under what conditions such continuous learning (CL) may be more adaptive than simple reinforcement learning and examine how it could have evolved from the same basic associative elements. We use agent-based computer simulations to compare three learning strategies: simple reinforcement learning; reinforcement learning with chaining (RL-chain); and CL, which applies the same associative mechanisms used by the other strategies but also seeks statistical regularities in the relations among all items in the environment, regardless of their initial association with food. We show that a sufficiently structured environment favours the evolution of both RL-chain and CL, and that CL outperforms the other strategies when food is relatively rare and the time for learning is limited. This advantage of internally motivated CL stems from its ability to capture statistical patterns in the environment even before they are associated with food, at which point they immediately become useful for planning.
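The abstract's distinction between the strategies can be sketched in a toy example. The stream, the items, and the one-step value propagation below are invented for illustration and are not the authors' simulation: a simple reinforcement learner credits only the item that directly precedes food, while a continuous learner additionally records item-to-item transition statistics, so that a previously learned chain becomes useful the moment one of its items is linked to food.

```python
from collections import defaultdict

# Toy illustration (not the authors' simulation): a structured item stream
# in which 'A' reliably precedes 'B', and food follows 'B' only once.
stream = ["A", "B", "food", "A", "B", "C", "A", "B", "C"]

# Simple reinforcement learner: associates an item with food only when
# that item immediately precedes a food encounter.
rl_value = defaultdict(float)

# Continuous learner: forms the same food association, but also tracks
# item-to-item transition statistics regardless of any link to food.
cl_value = defaultdict(float)
transitions = defaultdict(lambda: defaultdict(int))

for prev, curr in zip(stream, stream[1:]):
    if curr == "food":
        rl_value[prev] += 1.0
        cl_value[prev] += 1.0
    elif prev != "food":
        transitions[prev][curr] += 1

# Once 'B' is food-associated, the CL agent's stored transition A -> B lets
# it immediately value 'A' as a predictor of food: a one-step sketch of the
# "useful for planning" point in the abstract.
snapshot = dict(cl_value)
for item, nexts in transitions.items():
    total = sum(nexts.values())
    for nxt, count in nexts.items():
        cl_value[item] += (count / total) * snapshot.get(nxt, 0.0)

print(dict(rl_value))                                 # only 'B' has value for RL
print({k: v for k, v in cl_value.items() if v > 0})   # CL also values 'A'
```

The design choice here mirrors the abstract's claim: both agents use the same associative food update, and the CL agent differs only in tracking regularities that are not yet food-related.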

Original language: American English
Article number: 20131091
Journal: Journal of the Royal Society Interface
Issue number: 92
State: Published - 6 Mar 2014
Externally published: Yes


Keywords:
  • Decision-making
  • Evolution of cognition
  • Foraging theory
  • Representation
  • Statistical learning


