CPL - Chalmers Publication Library

Cover Tree Bayesian Reinforcement Learning

N. Tziortziotis ; Christos Dimitrakakis (Department of Computer Science and Engineering, Computer Science (Chalmers)) ; K. Blekas
Journal of Machine Learning Research (ISSN 1532-4435). Vol. 15 (2014), p. 2313-2335.
[Article, peer-reviewed scientific]

This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration policies in unknown environments. The flexibility and computational simplicity of the model render it suitable for many reinforcement learning problems in continuous state spaces. We demonstrate this in an experimental comparison with a Gaussian process model, a linear model and simple least squares policy iteration.
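The abstract's exploration strategy rests on Thompson sampling: draw one model from the current posterior and act greedily with respect to that draw. As a minimal, hypothetical illustration of that principle only (a Bernoulli bandit with Beta posteriors, not the paper's context tree model or its dynamic programming step), the loop looks like this:

```python
import random

class BetaArm:
    """Beta posterior over one Bernoulli arm's success probability."""
    def __init__(self):
        self.alpha = 1  # prior pseudo-count of successes
        self.beta = 1   # prior pseudo-count of failures

    def sample(self):
        # One draw from the posterior over this arm's mean reward.
        return random.betavariate(self.alpha, self.beta)

    def update(self, reward):
        # Closed-form conjugate posterior update, as in the paper's
        # closed-form tree updates (here trivially for a Beta prior).
        if reward:
            self.alpha += 1
        else:
            self.beta += 1

def thompson_step(arms):
    """Sample a model per arm, then act greedily on the sampled models."""
    draws = [arm.sample() for arm in arms]
    return max(range(len(arms)), key=lambda i: draws[i])

# Usage: two arms with (hypothetical) true success rates 0.3 and 0.7.
random.seed(0)
true_p = [0.3, 0.7]
arms = [BetaArm() for _ in true_p]
pulls = [0, 0]
for _ in range(2000):
    i = thompson_step(arms)
    pulls[i] += 1
    arms[i].update(random.random() < true_p[i])
```

After enough rounds the posterior concentrates and the better arm dominates the pull counts; the paper applies the same sample-then-act-greedily idea with a generalised context tree posterior and approximate dynamic programming in place of the greedy arm choice.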

Keywords: Bayesian inference, non-parametric statistics, reinforcement learning

This record was created 2014-12-19. Last modified 2015-01-08.
CPL Pubid: 208659


Read online!

Local full text (freely available)

Departments (Chalmers)

Department of Computer Science and Engineering, Computer Science (Chalmers)


Computer Science (computing science)

Chalmers infrastructure