Estimator Variance in Reinforcement Learning: Theoretical Problems and Practical Solutions, Pendrith M.D., Ryan M.R.K., AAAI-97 Workshop on On-line Search, Providence, RI, 28 July 1997.

In reinforcement learning, as in many on-line search techniques, a large number of estimation parameters (e.g. Q-value estimates for 1-step Q-learning) are maintained and dynamically updated as information comes to hand during the learning process. Excessive variance of these estimators can be problematic, resulting in uneven or unstable learning, or even making effective learning impossible. Estimator variance is usually managed only indirectly, by selecting global learning algorithm parameters (e.g. lambda for TD(lambda)-based methods) that represent a compromise between an acceptable level of estimator perturbation and other desirable system attributes, such as reduced estimator bias. In this paper, we argue that this approach may not always be adequate, particularly for noisy and non-Markovian domains, and present a direct approach to managing estimator variance, the new ccBeta algorithm. Empirical results in an autonomous robotics domain are also presented, showing improved performance using the ccBeta method.
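The trade-off the abstract describes can be seen in the simplest case: a 1-step incremental value update for a single state-action pair under noisy rewards, where a single global step-size parameter controls both tracking speed and steady-state estimator variance. The sketch below is purely illustrative (it is not the paper's ccBeta method, and all names in it are hypothetical):

```python
# Illustrative sketch, NOT the ccBeta algorithm: a 1-step incremental
# estimator Q <- Q + alpha * (r - Q) applied to noisy rewards, showing
# how a single global step-size parameter (alpha) trades tracking speed
# against steady-state estimator variance.
import random


def run_estimator(alpha, n_steps, seed=0):
    """Apply the incremental update with i.i.d. noisy rewards; return the history."""
    rng = random.Random(seed)
    q = 0.0
    history = []
    for _ in range(n_steps):
        r = 1.0 + rng.gauss(0.0, 0.5)  # true value 1.0 plus Gaussian noise
        q += alpha * (r - q)           # standard incremental (exponential) update
        history.append(q)
    return history


def tail_variance(history, tail=500):
    """Empirical variance of the estimator after it has settled."""
    vals = history[-tail:]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


low_var = tail_variance(run_estimator(alpha=0.05, n_steps=2000))
high_var = tail_variance(run_estimator(alpha=0.5, n_steps=2000))
# The larger global step size converges faster but leaves the estimator
# fluctuating with markedly higher variance around the true value.
```

For a constant step size alpha averaging i.i.d. noise of variance sigma^2, the steady-state estimator variance is approximately alpha/(2 - alpha) * sigma^2, so tuning the global parameter down suppresses variance only at the cost of slower adaptation; the paper's motivation is that this indirect, global compromise can be inadequate in noisy or non-Markovian domains.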

Download full paper (compressed postscript)


Mark Pendrith - pendrith@cse.unsw.edu.au
Malcolm Ryan - malcolmr@cse.unsw.edu.au