CPL - Chalmers Publication Library

Monte-Carlo utility estimates for Bayesian reinforcement learning

Christos Dimitrakakis (Department of Computer Science and Engineering, Computing Science, Algorithms (Chalmers))
52nd IEEE Conference on Decision and Control, CDC 2013, Florence, Italy, 10-13 December 2013 (ISSN 0191-2216), pp. 7303-7308 (2013)
[Conference paper, peer reviewed]

This paper introduces a set of algorithms for Monte-Carlo Bayesian reinforcement learning. First, Monte-Carlo estimation of upper bounds on the Bayes-optimal value function is used to construct an optimistic policy. Second, gradient-based algorithms for approximate upper and lower bounds are introduced. Finally, we present a new class of gradient algorithms for Bayesian Bellman error minimisation. We show theoretically that the gradient methods are sound. Experimentally, we demonstrate that the upper bound method obtains the most reward, with the Bayesian Bellman error method a close second while being significantly simpler computationally.
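As a rough sketch of the first idea (not the paper's implementation): drawing MDPs from the posterior, solving each sample exactly, and averaging the sampled optimal values yields an upper bound on the Bayes-optimal value function, since the expectation of per-sample maxima dominates the value of any single policy evaluated in expectation. The posterior sampler sample_mdp and all parameters below are hypothetical placeholders.

import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    # Solve a known tabular MDP: P has shape (S, A, S), R has shape (S, A).
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * (P @ V)        # expected one-step return, shape (S, A)
        V_new = Q.max(axis=1)          # greedy Bellman backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def mc_upper_bound(sample_mdp, n_samples=100):
    # Average of per-sample optimal values: an upper bound on the
    # Bayes-optimal value, since E[max_pi V_pi] >= max_pi E[V_pi].
    values = [value_iteration(*sample_mdp()) for _ in range(n_samples)]
    return np.mean(values, axis=0)

# Toy posterior: Dirichlet beliefs over transitions, uniform rewards (hypothetical).
rng = np.random.default_rng(0)
def sample_mdp(S=3, A=2):
    P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel, (S, A, S)
    R = rng.uniform(size=(S, A))                 # reward function
    return P, R

print(mc_upper_bound(sample_mdp, n_samples=50))

As the abstract describes, an optimistic policy can be built from such upper bounds; the gradient variants trade the exact per-sample solve above for cheaper approximate bounds.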




This record was created 2013-12-17. Last modified 2016-04-29.
CPL Pubid: 189617

 

Read directly!

Local full text (freely available)

Link to another site (may require login)