
Posterior weighted reinforcement learning with state uncertainty

Tobias Larsen, David S. Leslie, Edmund J. Collins, and Rafal Bogacz. Posterior weighted reinforcement learning with state uncertainty. Neural Computation, 22, pp. 1149–1179, May 2010.

Abstract

Reinforcement learning models generally assume that a stimulus is presented that allows a learner to unambiguously identify the state of nature, and that the reward received is drawn from a distribution that depends on that state. However, in any natural environment the stimulus is noisy. When there is state uncertainty, it is no longer immediately obvious how to perform reinforcement learning, since the observed reward cannot be unambiguously allocated to a state of the environment. This article addresses the problem of incorporating state uncertainty in reinforcement learning models. We show that simply ignoring the uncertainty and allocating the reward to the most likely state of the environment results in incorrect value estimates. Furthermore, using only the information that is available before observing the reward also results in incorrect estimates. We therefore introduce a new technique, posterior weighted reinforcement learning, in which the estimates of state probabilities are updated according to the observed rewards (e.g., if a learner observes a reward usually associated with a particular state, this state becomes more likely). We show analytically that this modified algorithm can converge to correct reward estimates, and confirm this with numerical experiments. The algorithm is shown to be a variant of the expectation-maximisation algorithm, allowing rigorous convergence analyses to be carried out. A possible neural implementation of the algorithm in the cortico-basal-ganglia-thalamic network is presented, and experimental predictions of our model are discussed.
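The abstract describes the posterior-weighting idea only verbally. The following minimal Python sketch illustrates the general principle of weighting reward-prediction updates by a reward-informed posterior over states; the two-state task, the Gaussian reward model, the belief strengths, and the learning rate are hypothetical choices made for this illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task (not from the paper): two latent states with
# different mean rewards, Gaussian reward noise, and a noisy stimulus
# that only yields a probability distribution over the current state.
true_means = np.array([1.0, 3.0])   # true mean reward of each state
reward_sd = 0.5                     # assumed known reward noise
n_states = len(true_means)

mu_hat = np.zeros(n_states)         # learner's reward estimates
alpha = 0.05                        # learning rate (illustrative value)

for trial in range(5000):
    state = rng.integers(n_states)                 # latent state of nature
    # Noisy stimulus: prior belief over states before the reward is seen.
    prior = np.full(n_states, 0.25 / (n_states - 1))
    prior[state] = 0.75
    reward = rng.normal(true_means[state], reward_sd)

    # Posterior weighting: reweight the state belief by how well each
    # state's current reward estimate explains the observed reward.
    likelihood = np.exp(-0.5 * ((reward - mu_hat) / reward_sd) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()

    # Update every state's estimate in proportion to its posterior
    # probability: a soft, EM-like credit assignment.
    mu_hat += alpha * posterior * (reward - mu_hat)

print(mu_hat)   # approaches true_means in this toy setting
```

In this sketch, hard assignment of each reward to the single most likely state would mix rewards from both states into each estimate, whereas weighting every update by the posterior lets the two estimates separate toward the true state means, which is the qualitative behaviour the abstract attributes to posterior weighted reinforcement learning.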

