Learning Options for an MDP from Demonstrations

The options framework provides a foundation for using hierarchical actions in reinforcement learning. An agent equipped with options, in addition to primitive actions, can at any point in time choose to execute a macro-action composed of many primitive actions rather than a single primitive action. Such macro-actions can be hand-crafted or learned, and previous work has focused on learning them by exploring the environment. Here we take a different perspective and present an approach to learning options from a set of expert demonstrations. Empirical results are presented in a setting similar to those used in other works in this area.
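
The options framework referred to here is the standard formalism of Sutton, Precup and Singh (1999), in which an option is a triple (I, pi, beta): an initiation set of states where the option may be invoked, an intra-option policy over primitive actions, and a per-state termination probability. The following is a minimal Python sketch of that formalism under simplifying assumptions (a deterministic transition function and a hand-crafted option); all names are illustrative and none come from the paper itself.

    # Minimal sketch of an option (I, pi, beta) in the sense of
    # Sutton, Precup & Singh (1999). Names are illustrative only.
    import random
    from dataclasses import dataclass
    from typing import Callable, Hashable, Set

    State = Hashable
    Action = Hashable

    @dataclass
    class Option:
        initiation_set: Set[State]             # I: states where the option is available
        policy: Callable[[State], Action]      # pi: intra-option policy over primitive actions
        termination: Callable[[State], float]  # beta: probability of terminating in each state

    def run_option(option: Option,
                   state: State,
                   step: Callable[[State, Action], State]) -> State:
        """Execute an option to termination and return the resulting state.

        `step` is the environment's transition function, assumed
        deterministic here for simplicity."""
        assert state in option.initiation_set, "option not available in this state"
        while True:
            state = step(state, option.policy(state))
            if random.random() < option.termination(state):
                return state

    # Example: a hand-crafted option on a 1-D corridor of states 0..9
    # that walks right until state 9, where it always terminates.
    walk_right = Option(
        initiation_set=set(range(9)),
        policy=lambda s: +1,
        termination=lambda s: 1.0 if s == 9 else 0.0,
    )
    print(run_option(walk_right, 0, step=lambda s, a: s + a))  # -> 9

Learning options from demonstrations, as proposed in the paper, amounts to inferring such triples from expert trajectories rather than constructing them by hand or discovering them through exploration.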

Authors: 
Marco Tamassia
Fabio Zambetta
William L. Raffe
Xiaodong Li
Presented At: 
Australasian Conference on Artificial Life and Computational Intelligence, pp. 226–242, Springer
Year: 
2015
Type: 
Conference Proceedings