C. Weber and J. Triesch (2008)

From exploration to planning

In: Artificial Neural Networks - ICANN 2008, ed. by Kurkova, Vera and Neruda, Roman and Koutnik, Jan, Berlin, Springer (ISBN: 978-3-540-87558-1).

The learning and behaviour of mobile robots face limitations. In reinforcement learning, for example, an agent learns a strategy to reach only one specific target point within a state space. However, we can grasp a visually localized object at any point in space or navigate to any position in a room. We present a neural network model in which an agent learns a model of the state space that allows it to reach an arbitrarily chosen goal via a short route. By randomly exploring the state space, the agent learns associations between adjoining states and the actions that link them. Given arbitrary start and goal positions, route finding proceeds in two steps. First, an activation gradient spreads from the goal position along the associative connections. Second, the agent uses the state-action associations to determine the actions that ascend this gradient toward the goal. All mechanisms are biologically justifiable.
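The two-step procedure in the abstract lends itself to a compact sketch. The Python toy below is a hypothetical illustration, not code from the paper: a deterministic 5x5 grid stands in for the state space, and all names (explore, spread_gradient, plan_route, DECAY) are assumptions. A random walk learns which action links two adjoining states; a decaying activation then spreads from the goal along the learned connections; finally, the agent greedily ascends that gradient using the state-action associations.

```python
import random

SIZE = 5                                    # 5x5 grid world stands in for the state space
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
DECAY = 0.8                                 # attenuation per step of the spreading activation

def step(state, action):
    """Deterministic world dynamics: move one cell, clamped to the grid."""
    dx, dy = ACTIONS[action]
    x, y = state
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def explore(n_steps=5000, seed=0):
    """Random exploration: record which action links two adjoining states,
    plus the reverse adjacency used later to spread the gradient."""
    rng = random.Random(seed)
    forward = {}                            # (state, action) -> next state
    predecessors = {}                       # state -> set of adjoining states
    state = (0, 0)
    for _ in range(n_steps):
        action = rng.choice(list(ACTIONS))
        nxt = step(state, action)
        forward[(state, action)] = nxt
        predecessors.setdefault(nxt, set()).add(state)
        state = nxt
    return forward, predecessors

def spread_gradient(goal, predecessors):
    """Step 1: an activation gradient spreads from the goal along the
    learned associative connections, decaying with distance."""
    gradient = {goal: 1.0}
    frontier = [goal]
    while frontier:
        nxt_frontier = []
        for s in frontier:
            for p in predecessors.get(s, ()):
                if p not in gradient:       # first visit = shortest route to goal
                    gradient[p] = gradient[s] * DECAY
                    nxt_frontier.append(p)
        frontier = nxt_frontier
    return gradient

def plan_route(start, goal, forward, gradient, max_len=50):
    """Step 2: use the state-action associations to pick, at each state,
    the action whose predicted successor ascends the gradient."""
    route, state = [], start
    while state != goal and len(route) < max_len:
        known = [a for a in ACTIONS if (state, a) in forward]
        action = max(known, key=lambda a: gradient.get(forward[(state, a)], 0.0))
        route.append(action)
        state = forward[(state, action)]
    return route

forward, predecessors = explore()
gradient = spread_gradient(goal=(4, 4), predecessors=predecessors)
print(plan_route(start=(0, 0), goal=(4, 4), forward=forward, gradient=gradient))
```

Because the activation decays monotonically with distance from the goal, greedy ascent follows a short learned route from any start state; in the paper these steps are realized by neural dynamics rather than explicit lookup tables.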
Relevant for: WP6 hierarchical architectures. A neural-network planner.