Andrew G. Barto and Sridhar Mahadevan (2003)

Recent advances in hierarchical reinforcement learning

Discrete Event Dynamic Systems, 13(4):341–379.

Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
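The unifying formalism emphasized in the review is the semi-Markov decision process (SMDP), in which a decision invokes a temporally extended activity and the value update is applied only when that activity terminates, discounted by its duration k: Q(s,o) <- Q(s,o) + alpha [r + gamma^k max_o' Q(s',o') - Q(s,o)]. Below is a minimal Python sketch of this SMDP Q-learning update over options on a toy corridor task; the environment, the two hand-coded options, and all parameter values are illustrative assumptions for exposition, not taken from the paper.

import random
from collections import defaultdict

class Corridor:
    """States 0..n-1; reward 1 for reaching state n-1. Primitive actions: -1, +1."""
    def __init__(self, n=10):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(max(self.s + a, 0), self.n - 1)
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

class Option:
    """A temporally extended activity: its own policy plus a termination test."""
    def __init__(self, policy, terminates):
        self.policy = policy          # maps state -> primitive action
        self.terminates = terminates  # maps state -> True/False

def smdp_q_learning(env, options, episodes=200, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)  # indexed by (state, option index)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy choice among options; decisions occur
            # only at option boundaries, not at every time step
            o = (random.randrange(len(options)) if random.random() < eps
                 else max(range(len(options)), key=lambda i: Q[(s, i)]))
            opt, s0, r, k = options[o], s, 0.0, 0
            # Run the option's internal policy until it terminates,
            # accumulating the discounted return r over its duration k.
            while True:
                s, reward, done = env.step(opt.policy(s))
                r += (gamma ** k) * reward
                k += 1
                if done or opt.terminates(s):
                    break
            # SMDP update: Q(s,o) += alpha * [r + gamma^k max_o' Q(s',o') - Q(s,o)]
            best_next = 0.0 if done else max(Q[(s, i)] for i in range(len(options)))
            Q[(s0, o)] += alpha * (r + (gamma ** k) * best_next - Q[(s0, o)])
    return Q

env = Corridor()
go_left = Option(lambda s: -1, lambda s: s == 0)
go_right = Option(lambda s: +1, lambda s: s == env.n - 1)
Q = smdp_q_learning(env, [go_left, go_right])
print(max(range(2), key=lambda i: Q[(0, i)]))  # expect 1 (go_right) from the start state

Because the learner commits to an option until termination and discounts by the random duration k, the decision process over options is semi-Markov, which is exactly the property the reviewed frameworks exploit.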
Keywords: Reinforcement Learning; Markov Decision Processes; Semi-Markov Decision Processes; Hierarchy; Temporal Abstraction
Relevant for: WP6 hierarchical architectures. A general review of the main formalisms for hierarchical reinforcement learning.