Bayesian Q-learning

Richard Dearden*, Nir Friedman, Stuart Russell

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

274 Scopus citations


A central problem in learning in complex environments is balancing exploration of untested actions against exploitation of actions that are known to be good. The benefit of exploration can be estimated using the classical notion of Value of Information - the expected improvement in future decision quality that might arise from the information acquired by exploration. Estimating this quantity requires an assessment of the agent's uncertainty about its current value estimates for states. In this paper, we adopt a Bayesian approach to maintaining this uncertain information. We extend Watkins' Q-learning by maintaining and propagating probability distributions over the Q-values. These distributions are used to compute a myopic approximation to the value of information for each action and hence to select the action that best balances exploration and exploitation. We establish the convergence properties of our algorithm and show experimentally that it can exhibit substantial improvements over other well-known model-free exploration strategies.
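The action-selection rule described in the abstract can be illustrated with a small sketch. The paper itself maintains richer (e.g. Normal-Gamma) posteriors over returns; this hypothetical example simply assumes each Q-value already has a Gaussian posterior with known mean and standard deviation, and computes the myopic value of perfect information (VPI) in closed form: learning the currently-best action's true value helps only if it turns out worse than the second-best mean, and learning any other action's value helps only if it turns out better than the best mean.

```python
import math

def normal_pdf(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    """Standard normal cumulative distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_gain_above(mu, sigma, c):
    """E[max(X - c, 0)] for X ~ N(mu, sigma^2)."""
    z = (mu - c) / sigma
    return (mu - c) * normal_cdf(z) + sigma * normal_pdf(z)

def expected_gain_below(mu, sigma, c):
    """E[max(c - X, 0)] for X ~ N(mu, sigma^2)."""
    z = (c - mu) / sigma
    return (c - mu) * normal_cdf(z) + sigma * normal_pdf(z)

def vpi_action_select(means, stds):
    """Pick the action maximizing mean Q-value plus myopic VPI.

    means, stds: per-action Gaussian posterior parameters (assumed,
    for illustration; the paper propagates full distributions).
    """
    ranked = sorted(range(len(means)), key=lambda a: means[a], reverse=True)
    best, second = ranked[0], ranked[1]
    scores = []
    for a in range(len(means)):
        if a == best:
            # Information about the best action matters only if its
            # true value falls below the second-best mean.
            vpi = expected_gain_below(means[a], stds[a], means[second])
        else:
            # Information about another action matters only if its
            # true value exceeds the current best mean.
            vpi = expected_gain_above(means[a], stds[a], means[best])
        scores.append(means[a] + vpi)
    return max(range(len(means)), key=lambda a: scores[a])
```

With confident estimates the rule behaves greedily, but a highly uncertain second-best action can overtake the greedy choice because its VPI term is large, which is exactly the exploration/exploitation trade-off the abstract describes.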

Original language: American English
Number of pages: 8
State: Published - 1998
Externally published: Yes
Event: Proceedings of the 1998 15th National Conference on Artificial Intelligence, AAAI - Madison, WI, USA
Duration: 26 Jul 1998 - 30 Jul 1998


Conference: Proceedings of the 1998 15th National Conference on Artificial Intelligence, AAAI
City: Madison, WI, USA


