Demonstrator CLEVER-K3 2012

Documentation of the CLEVER-K3 “kitchen scenario” demonstrator: machine-learning-integrated models of abstraction, novelty, and hierarchical architectures.

We have made substantial progress toward the goal of the WP7 demonstrators, through the development of machine learning methods that solve subproblems required for any non-trivial robotic behavior, and through the integration of the learning methods developed in other work packages within MoBeE. The specific goal of T7.5 was achieved through the AutoIncSFA-MoBeE-path planner combination, which successfully learned a repertoire of skills through intrinsic motivation, and learned to chain these skills in a compound reaching-pickup-place task. We are currently extending the methods developed over the last year with the ability to learn different skills through intrinsic motivation.

Objectives

The K3 scenario from the Work Plan is described as follows: “At the end of the third year, we will present a demonstrator in which a robot, CLEVER-K3, is capable of autonomously developing compound skills on the basis of pre-existing skills and intrinsic motivations. In particular the integrated model, containing adaptive mechanisms at all levels, should gradually develop compound skills, such as picking something up, putting a fork in a bowl, piling dishes, etc. by assembling other simpler skills such as looking at an object, driving the limb to touch an object, envelop an object with the hand, driving the hand to a particular location, opening all the hand’s fingers, etc. A concrete ability we will try to achieve is to move cutlery distributed over the workspace into one pile. At this stage, algorithms developed in WP4 will aid the formation of goals for compound skills on the basis of novelty detection algorithms developed in WP5, and the whole repertoire of skills will be autonomously organised within the hierarchical architectures developed in WP6 without interference and possibly with generalisation.”
 

Background and open issues related to IM-CLeVeR

A major challenge for this task is the integration of the methods and models developed in work packages WP4, WP5 and WP6 on the iCub robot. This requires a safe environment for experimenting with the iCub robot, a specification of how modules should interact, robust testing environments for the individual modules developed in the work packages, and an integrated experimental setup for benchmarks and demonstration. Some of these issues were solved during previous project periods, some were improved during the current project period, and some are still under development.
 

Specific goals

The goal of the third demonstrator is a robot that develops compound skills through intrinsic motivation, based on adaptive pre-existing skills developed in WP4-6. Several efforts were combined and integrated to demonstrate this kind of learning on a robot:
  • IDSIA wrote the K3 plan for collaboration (Pape et al, 2011), which specified IDSIA’s plan for the K3 demonstrator, based on the CLEVER-K1 blueprint for machine learning and the CLEVER-K2 functional architecture for learning a repertoire of actions through intrinsic motivation. Together with FIAS and UU, the proposed plan was developed into concrete learning scenarios for the K3 demonstrator.
  • IDSIA integrated several learning approaches developed in the other WPs and demonstrated their role in skill learning on the iCub robot. The resulting robotic demonstrators were recorded and presented at the 2012 review meeting as the K3 demonstrator movie. 
  • Researchers were exchanged between the partners, workshops were organized, and relevant workshops organized by other partners were attended. UU and FIAS sent research staff to IDSIA before the last review meeting to assist in integrating FIAS’s novelty-detection method and UU’s intrinsic-motivation-based repertoire of actions. In addition, FIAS and IDSIA worked together on a learning method for robust image segmentation (Leitner et al, 2012b). A researcher from IDSIA spent a week at FIAS to collaborate on this research topic.
 

Approach and methods

The iCub robot has limited reaching and grasping accuracy, limited visual input (low-resolution cameras), limited touch-sensing capability (partially functional capacitive sensors), and limited out-of-the-box software support for performing object-manipulation activities. The focus of IM-CLeVeR is on developmental learning through intrinsic motivation, not on directly addressing technical limitations of the iCub. Still, demonstrating any non-trivial robotic behavior requires tackling a large number of technical robotics issues. The efforts in this WP have therefore focused on the integration of methods that solve vision and object-manipulation subtasks through machine learning. While all of these methods address crucial subtasks of relevant robotic skills, not all methods learn through intrinsic motivation yet. However, each of these methods is based on our ongoing work in machine learning, and opens up new research directions for both robotics and machine learning. In particular, we have developed:
  • The Modular Behavioral Environment for Humanoids and other Robots (MoBeE, Frank et al, 2012), which provides reflex-like behavior, allows on-line path planning in a dynamic environment, and facilitates safe, curiosity-driven exploration of the workspace of real physical robots (a sketch of the reflex idea appears at the end of the Results section).
  • icVision, a Modular Vision System for Cognitive Robotics Research (Leitner et al, 2012a), which provides highly robust vision-based object recognition and localization in 2D and 3D. The software is based on a novel genetic-programming approach that uses functions from the OpenCV computer vision library as building blocks (sketched after this list).
  • A new roadmap planning algorithm based on Natural Evolution Strategies that allows flexible planning and execution of many object-manipulation behaviors, such as planning around obstacles, bimanual grasping and manipulation, and null-space exploitation (sketched after this list).
  • The integration of the AutoIncSFA algorithm (WP4) with our planning algorithms and MoBeE. AutoIncSFA was able to develop a repertoire of reaching and grasping skills through intrinsic motivation, based on raw vision input and actions facilitated by the roadmap planner (the underlying slowness objective is sketched after this list).
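
To make the genetic-programming idea behind icVision concrete, here is a minimal sketch in which a short “program” of OpenCV filter primitives is evolved to segment a target object. Everything in it, including the choice of primitives, the linear genome encoding, the synthetic test scene, and the simple (1+1) evolutionary loop, is an illustrative assumption rather than icVision’s actual design:

import cv2
import numpy as np

# Illustrative only: a tiny evolutionary search over "programs" built from
# OpenCV primitives, in the spirit of the icVision approach. All encodings
# and parameters here are assumptions made for this sketch.

rng = np.random.default_rng(1)

# Synthetic test scene: a bright blob (the "object") on a noisy background.
scene = (rng.random((120, 120)) * 80).astype(np.uint8)
cv2.circle(scene, (60, 60), 18, 220, -1)
truth = np.zeros_like(scene)
cv2.circle(truth, (60, 60), 18, 255, -1)

# Filter primitives; each takes an image and a parameter p in [0, 1).
PRIMS = [
    lambda img, p: cv2.GaussianBlur(img, (2 * int(p * 3) + 1,) * 2, 0),
    lambda img, p: cv2.threshold(img, int(p * 255), 255, cv2.THRESH_BINARY)[1],
    lambda img, p: cv2.erode(img, np.ones((3, 3), np.uint8), iterations=1 + int(p * 2)),
    lambda img, p: cv2.dilate(img, np.ones((3, 3), np.uint8), iterations=1 + int(p * 2)),
]

def run_program(genome, img):
    # A genome is a list of (primitive index, parameter) pairs applied in order.
    for op, p in genome:
        img = PRIMS[op](img, p)
    return img

def fitness(genome):
    # Intersection-over-union between the program output and the ground truth.
    out = run_program(genome, scene) > 127
    gt = truth > 127
    union = np.logical_or(out, gt).sum()
    return np.logical_and(out, gt).sum() / union if union else 0.0

def random_genome():
    return [(int(rng.integers(len(PRIMS))), float(rng.random())) for _ in range(4)]

def mutate(genome):
    g = list(genome)
    i = int(rng.integers(len(g)))
    g[i] = (int(rng.integers(len(PRIMS))), float(rng.random()))
    return g

# Simple (1+1) evolutionary loop over filter programs.
best = random_genome()
for _ in range(150):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child
print("best IoU:", round(fitness(best), 3))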
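
The evolution-strategy principle behind the roadmap planner can likewise be illustrated on a toy problem: searching the joint space of a planar two-link arm for a configuration that reaches a target while keeping its end effector away from an obstacle. The arm model, the cost terms, and this simplified rank-shaped, isotropic-Gaussian strategy are assumptions for illustration; the actual planner is considerably more elaborate:

import numpy as np

# Illustrative only: a simplified evolution strategy searching the joint
# space of a toy planar two-link arm for a configuration that reaches a
# target while keeping the end effector away from an obstacle.

def end_effector(q):
    # Forward kinematics of a planar two-link arm with unit link lengths.
    x = np.cos(q[0]) + np.cos(q[0] + q[1])
    y = np.sin(q[0]) + np.sin(q[0] + q[1])
    return np.array([x, y])

def cost(q, target, obstacle, margin=0.3):
    ee = end_effector(q)
    reach = np.linalg.norm(ee - target)                   # distance to the goal
    intrusion = max(0.0, margin - np.linalg.norm(ee - obstacle))
    return reach + 10.0 * intrusion                       # penalize near-collisions

def es_plan(target, obstacle, iters=300, pop=20, sigma=0.2, lr=0.5):
    rng = np.random.default_rng(0)
    mu = np.zeros(2)                                      # mean joint configuration
    for _ in range(iters):
        eps = rng.standard_normal((pop, 2))               # sampled perturbations
        costs = np.array([cost(mu + sigma * e, target, obstacle) for e in eps])
        ranks = np.argsort(np.argsort(costs))             # 0 = lowest cost
        utils = 0.5 - ranks / (pop - 1)                   # rank-based fitness shaping
        mu += lr * sigma * (utils[:, None] * eps).mean(axis=0)
    return mu

q = es_plan(target=np.array([1.2, 0.8]), obstacle=np.array([0.6, 0.4]))
print("configuration:", q, "end effector:", end_effector(q))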
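
Finally, since AutoIncSFA builds on Slow Feature Analysis, a batch SFA sketch conveys the underlying objective: find projections of a sensory time series that vary as slowly as possible over time. AutoIncSFA itself is incremental and operates on raw vision; this linear batch version is illustrative only:

import numpy as np

# Illustrative only: linear batch Slow Feature Analysis. AutoIncSFA is an
# incremental variant; this sketch just shows the slowness objective.

def sfa(X, n_features=2):
    # X: (T, d) time series. Returns the n slowest unit-variance features.
    X = X - X.mean(axis=0)
    # Whiten the data: decorrelate and normalize variance.
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (evecs / np.sqrt(evals + 1e-9))
    # Slowness: minimize the variance of the temporal derivative.
    devals, devecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    return Z @ devecs[:, :n_features]   # smallest derivative variance first

# Toy demo: recover a slow sinusoid hidden in a fast, mixed signal.
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(23 * t)
X = np.column_stack([slow + 0.5 * fast, fast - 0.3 * slow])
Y = sfa(X, n_features=1)   # Y[:, 0] is (a scaled copy of) the slow signal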

Results

The demonstrator was presented as a movie showing the various components that have been developed and integrated. The movie shows the iCub’s acquired capabilities through different stages of learning, as well as compound object-manipulation tasks. Given the potential of a movie for communicating our collaborative achievements, we have made the movie available online: http://robotics.idsia.ch/im-clever/
 
The final movie shows the iCub as it learns vision and object-manipulation skills, learns to combine those skills, and finally performs a compound K3 demonstrator task. The learned subtasks are (a sketch of how such skills chain into the compound task follows this list):
  • iCub learns full-body motion planning. 
  • iCub learns full-body motion planning while avoiding collisions.
  • iCub learns to locate and look at (novel) objects from vision. 
  • iCub learns to associate visual events with hand movements, leading to reach-and-grasp behaviors. 
  • iCub learns to associate object motion with hand motion, leading to object-displacement behaviors. 
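
The following minimal sketch shows how such learned subtasks might be chained into the compound reach-grasp-place behavior, assuming each skill is a callable acting on a shared world state; the skill bodies are stubs standing in for the learned controllers, and all names are hypothetical:

def locate_object(state):
    state["object_located"] = True            # stub for the learned gaze skill
    return True

def reach_object(state):
    if not state.get("object_located"):
        return False                          # precondition: object in view
    state["hand_at_object"] = True            # stub for the learned reaching skill
    return True

def grasp_object(state):
    if not state.get("hand_at_object"):
        return False
    state["holding_object"] = True            # stub for the grasp skill
    return True

def place_object(state):
    if not state.get("holding_object"):
        return False
    state["object_placed"] = True             # stub for the displacement skill
    return True

def run_compound_task(skills):
    # Execute the skills in order, aborting the chain on the first failure.
    state = {}
    for skill in skills:
        if not skill(state):
            return False, state
    return True, state

ok, final = run_compound_task([locate_object, reach_object, grasp_object, place_object])
print("compound task succeeded:", ok)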

Apart from the learned tasks, there are several other tasks that the robot performs for the demonstrator that are not learned. Some of these were clear from the beginning of the project, such as collision prevention (which is done by MoBeE), pre-grasp pose computation, and grasping, while others only became clear during the development of the learning scenarios, such as planning the movement of the inactive arm and the head, and restricting the motion range of the active arm.
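
For illustration, a reflex-like safety layer in the spirit of MoBeE’s collision prevention might look like the sketch below. The distance query, its joint-space gradient, and the thresholds are assumptions made for the sake of the example, not MoBeE’s actual interface:

import numpy as np

# Illustrative only: a reflex that overrides commanded joint velocities when
# the robot gets too close to an obstacle. Names and thresholds are assumed.

D_SAFE = 0.05  # metres; closer than this triggers the reflex

def safe_velocity(q_dot_cmd, distance, grad_d):
    # q_dot_cmd: joint velocities commanded by the planner
    # distance:  current minimum distance from the robot to any obstacle
    # grad_d:    gradient of that distance w.r.t. the joints
    #            (moving along it increases clearance)
    if distance >= D_SAFE:
        return q_dot_cmd                       # no intervention needed
    urgency = (D_SAFE - distance) / D_SAFE     # 0 at the margin, 1 at contact
    repulse = grad_d / (np.linalg.norm(grad_d) + 1e-9)
    # The closer the robot is to collision, the more the reflex dominates.
    return (1.0 - urgency) * q_dot_cmd + urgency * repulse

# Example: a near-collision (2 cm) mostly overrides the planner command.
q_dot = safe_velocity(np.array([0.2, -0.1]), distance=0.02, grad_d=np.array([0.0, 1.0]))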
 

Toward Intelligent Humanoids

Video: “Toward Intelligent Humanoids | iCub 2012” from IDSIA, hosted on Vimeo.

Selected bibliography:


Frank, M., Leitner, J., Stollenga, M., Kaufmann, G., Harding, S., Förster, A., Schmidhuber, J. (2012). The Modular Behavioral Environment for Humanoids and other Robots (MoBeE), 9th International Conference on Informatics in Control, Automation and Robotics (ICINCO). Rome, Italy.

Leitner, J., Harding, S., Frank, M., Förster, A., and Schmidhuber, J. (2012a). icVision: A Modular Vision System for Cognitive Robotics Research. In Proceedings of the International Conference on Cognitive Systems.

Leitner, J., Chandrashekhariah, P., Harding, S., Frank, M., Förster, A., Schmidhuber, J., Triesch, J. (2012b). Autonomous Learning of Robust Visual Object Detection. (work in progress)

 

Pape, L., Ring, M., Frank, M., Förster, A., Schmidhuber, J. (2011). IM-CLeVeR-K3 plan for collaboration, internal report of the EU-funded Integrated Project “IM-CLeVeR – Intrinsically Motivated Cumulative Learning Versatile Robots”.