CPL - Chalmers Publication Library

Tree exploration for Bayesian RL exploration

Christos Dimitrakakis (Department of Computer Science and Engineering, Computer Science, Algorithms (Chalmers))
2008 International Conference on Computational Intelligence for Modelling, Control and Automation, CIMCA 2008, pp. 1029-1034 (2008)
[Conference paper, peer reviewed]

Research in reinforcement learning has produced algorithms for optimal decision making under uncertainty that fall within two main types. The first employs a Bayesian framework, where optimality improves with increased computational time. This is because the resulting planning task takes the form of a dynamic programming problem on a belief tree with an infinite number of states. The second type employs relatively simple algorithms which are shown to suffer small regret within a distribution-free framework. This paper presents a lower bound and a high-probability upper bound on the optimal value function for the nodes in the Bayesian belief tree, which are analogous to similar bounds in POMDPs. The bounds are then used to create more efficient strategies for exploring the tree. The resulting algorithms are compared with the distribution-free algorithm UCB1, as well as a simpler baseline algorithm, on multi-armed bandit problems. © 2008 IEEE.
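The distribution-free baseline named in the abstract, UCB1, is a standard multi-armed bandit algorithm: pull each arm once, then always pull the arm with the highest upper confidence index. A minimal Python sketch on a toy Bernoulli bandit follows; the arm means, horizon, and function name are illustrative assumptions, and the paper's belief-tree bounds themselves are not reproduced here.

    # Minimal sketch of UCB1 on a Bernoulli multi-armed bandit.
    # Arm means and horizon are illustrative assumptions, not values
    # taken from the paper.
    import math
    import random

    def ucb1(arm_means, horizon, seed=0):
        rng = random.Random(seed)
        k = len(arm_means)
        counts = [0] * k      # number of pulls per arm
        sums = [0.0] * k      # cumulative reward per arm
        total_reward = 0.0
        for t in range(1, horizon + 1):
            if t <= k:
                arm = t - 1   # initialisation: pull each arm once
            else:
                # UCB1 index: empirical mean + sqrt(2 ln t / n_i)
                arm = max(range(k),
                          key=lambda i: sums[i] / counts[i]
                          + math.sqrt(2.0 * math.log(t) / counts[i]))
            reward = 1.0 if rng.random() < arm_means[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
            total_reward += reward
        return total_reward, counts

    # Example: two arms with means 0.6 and 0.5. UCB1's regret relative
    # to always pulling the best arm grows only logarithmically in the
    # horizon, which is the distribution-free guarantee the paper's
    # Bayesian tree-exploration strategies are compared against.
    reward, pulls = ucb1([0.6, 0.5], horizon=10000)
    print(reward, pulls)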



This record was created 2013-12-17. Last modified 2015-01-08.
CPL Pubid: 189671

 

Read directly!


Link to external site (may require login)