CPL - Chalmers Publication Library

Rollout sampling approximate policy iteration

Christos Dimitrakakis (Department of Computer Science and Engineering, Computer Science, Algorithms (Chalmers)); M.G. Lagoudakis
Machine Learning (ISSN 0885-6125), Vol. 72 (2008), No. 3, pp. 157–171.
[Article, peer-reviewed scientific]

Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, achieved, however, with significantly less computational effort. An order-of-magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain car.
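The core idea the abstract describes, allocating rollout simulations adaptively as a multi-armed bandit rather than giving every candidate action a fixed number of rollouts, can be sketched as follows. This is a minimal illustration using the standard UCB1 index, not the paper's exact algorithm; the `simulate` callback, the `budget` parameter, and the exploration constant `c` are assumptions for the sketch.

```python
import math
import random

def ucb_rollout_action(simulate, actions, budget, c=2.0, seed=0):
    """Pick the empirically best action at a state by spending a rollout
    budget adaptively with UCB1, instead of splitting it evenly.

    `simulate(action, rng)` is a user-supplied function (an assumption of
    this sketch) that runs one rollout after taking `action` and returns
    the sampled return. Promising actions receive more rollouts, so the
    best action can be identified with far fewer total simulations.
    """
    rng = random.Random(seed)
    counts = {a: 0 for a in actions}   # rollouts spent on each action
    sums = {a: 0.0 for a in actions}   # total sampled return per action

    # Initialization: one rollout per action so every index is defined.
    for a in actions:
        sums[a] += simulate(a, rng)
        counts[a] = 1

    for t in range(len(actions), budget):
        # UCB1 index: empirical mean plus an exploration bonus that
        # shrinks as an action accumulates rollouts.
        a = max(actions, key=lambda x: sums[x] / counts[x]
                + c * math.sqrt(math.log(t) / counts[x]))
        sums[a] += simulate(a, rng)
        counts[a] += 1

    best = max(actions, key=lambda x: sums[x] / counts[x])
    return best, sums[best] / counts[best]
```

For example, with two actions whose rollout returns are Bernoulli with means 0.3 and 0.8, a budget of a few hundred rollouts concentrates most simulations on the better action while still sampling the other enough to rule it out.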

Keywords: Approximate policy iteration, Bandit problems, Classification, Reinforcement learning, Rollouts, Sample complexity




This record was created 2013-12-17. Last modified 2015-01-08.
CPL Pubid: 189669
