Groups and experiments

From Wiki for iCub and Friends
Revision as of 16:50, 27 July 2010 by Vvv10 (talk | contribs) (CHRIS)

Tentative group formation

Here's a list of possible topics & groups as formed before the school. If you're interested in any of the topics then please add your name here. We'll be posting and adding material in preparation for meeting in Sestri Levante.

  • The CHRIS project group
  • The force control group
  • The Lego-Builders group
  • "Hand"Some
  • Attention & vision: aka eMorph group
  • The ITALK project group
  • New people


The CHRIS project group

This group works on a CHRIS project scenario (TBD).

CHRIS is an FP7 EU project on safe human-robot interaction. The scenario we have already designed involves a learning task in which a novel object is presented to the iCub, which in turn explores it and finally executes some predefined actions (grasp, tap, touch). Here's a video we recorded. To increase interactivity and improve safety, force control is exploited. We would like, possibly, to enrich the iCub's capabilities with some new behaviors such as action recognition, object segmentation based on motion cues (maybe as a result of exploration), tactile-based grasping, and whatever else helps the iCub grow :)

Here are the participants:

  • Ugo Pattacini
  • Lorenzo Natale
  • Andre Luvizotto
  • Zenon Mathews
  • Damien Duff
  • Juan Victores
  • Chris Larcombe
  • Stéphane Lallée
  • Jason Leake


Here are some experiments that have been mooted:

Learn and chain affordances: using the motor primitives library we can make the robot try different things with an object and learn what is likely to happen to it. Because we use a simple appearance-based object model, we can probably call this affordances ;). We propose to represent the approximately 10-D (state, object, action, next state) mapping using the GMM library already in the iCub repository. The aim is to have the iCub learn object affordances and then use them as the basis for more sophisticated exploration. (this part: Damien Duff, Juan Victores, Chris Larcombe). This became VVV10 iFumble.
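
As a toy illustration of the learn-and-predict loop (not the GMM regression itself, and not actual iCub/YARP code), the mapping can be sketched with discrete states and outcome counts; the state names and actions below are invented for the example:

```python
from collections import defaultdict, Counter

class AffordanceModel:
    """Toy (state, action) -> outcome model learned from experience.

    A stand-in for the GMM regression described above: instead of a
    Gaussian mixture over a continuous ~10-D space, it counts discrete
    outcomes, which is enough to illustrate the learn/predict loop.
    """
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, state, action, next_state):
        self.counts[(state, action)][next_state] += 1

    def predict(self, state, action):
        outcomes = self.counts[(state, action)]
        if not outcomes:
            return None                     # never tried: no prediction
        return outcomes.most_common(1)[0][0]

# The robot taps a ball a few times and watches what happens:
model = AffordanceModel()
model.observe("ball-at-rest", "tap", "ball-rolls")
model.observe("ball-at-rest", "tap", "ball-rolls")
model.observe("ball-at-rest", "grasp", "ball-in-hand")
print(model.predict("ball-at-rest", "tap"))   # -> ball-rolls
```

On the robot the state would be continuous and roughly 10-D, which is where the GMM library comes in.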

Learning by demonstration with force control: adapt the software to use the force/torque sensor rather than the tactile sensor and glove previously used at EPFL to do learning by demonstration. A movie of the "old way": (this part: not being done).

A persistent dynamic memory: building a persistent dynamic memory of the objects the robot encounters in its environment. The states of the objects should be updated when they change and when they are perceived again. Unexpected objects can trigger attention. Objects are learned and recognized using either temporal population coding (TPC) in the neural simulator IQR, or commercial software. The idea is that the robot is "aware" of the existence of objects that are beyond its current perception. Also, objects that are not perceived for longer periods will be forgotten. Sensory input can be multimodal: visual, auditory, tactile, ... (this part: Andre Luvizotto, Zenon Mathews, Stéphane Lallée). This became VVV10 iFumble.
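
A minimal sketch of such a memory (plain Python, no IQR or TPC; the object names and forgetting time constant are invented for illustration):

```python
class ObjectMemory:
    """Minimal persistent object memory: objects persist beyond current
    perception, are updated when re-perceived, and decay if unseen."""
    def __init__(self, forget_after=60.0):
        self.forget_after = forget_after
        self.objects = {}   # name -> (state, last_seen_time)

    def perceive(self, name, state, now):
        """Record a percept; returns True if the object was unexpected
        (never seen before), which could trigger attention."""
        unexpected = name not in self.objects
        self.objects[name] = (state, now)
        return unexpected

    def tick(self, now):
        """Forget objects not perceived for longer than forget_after."""
        self.objects = {n: (s, t) for n, (s, t) in self.objects.items()
                        if now - t <= self.forget_after}

    def recall(self, name):
        entry = self.objects.get(name)
        return entry[0] if entry else None

mem = ObjectMemory(forget_after=60.0)
print(mem.perceive("red-ball", "on-table", 0.0))   # True: novel object
mem.perceive("red-ball", "rolling", 10.0)          # update, not novel
mem.tick(100.0)                                    # long time passes...
print(mem.recall("red-ball"))                      # forgotten -> None
```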

Simple sequence learning: once the memory above and chained affordance learning are available, the robot could be taught sequences of actions involving objects in memory.
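
Assuming the affordance model predicts deterministic next states, chaining actions into a sequence is just a search over the learned transitions; a minimal sketch (states and actions again invented):

```python
from collections import deque

def plan(affordances, start, goal):
    """Breadth-first search over learned affordances.
    affordances maps (state, action) -> predicted next state.
    Returns the shortest action sequence from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for (s, a), nxt in affordances.items():
            if s == state and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [a]))
    return None

affordances = {
    ("ball-far", "reach"): "ball-near",
    ("ball-near", "grasp"): "ball-in-hand",
    ("ball-near", "tap"): "ball-far",
}
print(plan(affordances, "ball-far", "ball-in-hand"))  # ['reach', 'grasp']
```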

Force control

This group works on using the iCub's force/torque sensors to implement joint torque control via remapping and dynamics modeling. Hopefully it will also work on some applications, such as autonomous Lego block insertion and/or force-controlled crawling.

Peg-in-the-hole demo: the plan is to grasp an object on the table, using force to detect the table and the object, and then put the object inside the hole. At first we can simply place the object on the target position (the hole); then, using force, we can insert it into the hole. We will surely need a contribution from people working on vision to detect the object and the hole properly. To simplify, the object can be a simple monochromatic ball and the target a monochromatic circle at the beginning (I still have to buy the hole thing ^^). We can take inspiration from a similar demo... see a video here.
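
The force-based table/object detection can be sketched as a guarded approach: lower the hand until the measured force crosses a threshold. This is a simulation-only sketch, not the actual iCub F/T interface; the sensor function and all numbers are invented:

```python
def approach_until_contact(read_force, step, z_start, z_min, f_thresh):
    """Lower the hand in small steps until the measured normal force
    exceeds f_thresh (contact with table/object) or z_min is reached.
    read_force(z) stands in for the F/T sensor reading at height z."""
    z = z_start
    while z > z_min:
        if read_force(z) > f_thresh:
            return z          # contact height found
        z -= step
    return None               # no contact detected

# Fake sensor: table surface at z = 0.10 m produces a 5 N reaction.
table_z = 0.10
sensor = lambda z: 5.0 if z <= table_z else 0.0
print(approach_until_contact(sensor, 0.01, 0.30, 0.0, 1.0))
# contact found near the table height (about 0.10 m)
```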

iDyn: as you probably know, a library for dynamics has recently been developed: iDyn. You can find the documentation here. In the iCub repository, under main/src/libraries/iDyn/tutorials, you can find some examples showing how to use it. I will add more examples, as the library is quite big... for any questions, ask Serena or Matteo. It is still under development, so if you have remarks or suggestions you are more than invited to tell us!
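
To give a feel for what iDyn computes (this is not the iDyn API, just the underlying idea on the smallest possible example), here is the inverse dynamics of a single rotating link: the torque a joint must produce given position, velocity, and acceleration. All parameter values are made up:

```python
import math

def single_link_torque(q, dq, ddq, m=1.0, l=0.3, I=0.03, b=0.01, g=9.81):
    """Inverse dynamics of a single rotating link, the 1-DOF version of
    what iDyn computes for the full iCub chains: the torque needed to
    realise acceleration ddq at configuration q, including gravity and
    viscous friction. q [rad], dq [rad/s], ddq [rad/s^2]; l is the
    distance from the joint to the link's centre of mass."""
    return I * ddq + b * dq + m * g * l * math.cos(q)

# Holding the link horizontal (q = 0) at rest: pure gravity compensation.
print(round(single_link_torque(0.0, 0.0, 0.0), 3))  # m*g*l = 2.943 N·m
```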


  • Francesco Nori
  • Matteo Fumagalli
  • Marco Randazzo
  • Juan Victores
  • Serena Ivaldi
  • Cristiano Alessandro
  • Naveen Kuppussuamy
  • Giacomo Spigler


The Lego-Builders group

New information and updates on this page: VVV10 lego builders

Anyone interested in working with us is welcome to join! We want to have fun making a cool demo by the end of the school!

This group is working on doing everything necessary :) to put two pieces of Lego together _autonomously_.

This includes finding the pieces, grasping them, re-detecting them in the hand, and putting them together.

We will collaborate with the force control group to put the Lego pieces together correctly (without breaking the arms).

To find the Lego pieces on the table, we are working on a simple AR-Toolkit-based program and a kinematic model of the Lego pieces. Maybe we should also use the cool fingertip sensors for the final approach?

Last year we proved that this is possible with tele-operation. Watch the video on [youtube].

Since we (Federico and Alexis) have come to prior summer schools, we are also glad to help or give guidance to new people. Just come over!


  • Alexis Maldonado
  • Federico Ruiz
  • Salomon
  • Giacomo Spigler
  • Jesus Conesa
  • Serena Ivaldi


"Hand"Some

Please visit the [Hand"Some" Group Page] for more info.

We would like to implement impedance behaviours coupling the two hands. We will work under the Force Control group and share tools and code with everyone.
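
One hedged sketch of what "impedance behaviours coupling the two hands" could mean: each hand follows its own spring-damper law, plus a virtual spring acting on the hands' relative displacement. Scalar positions are used for clarity, and the gains and coupling scheme are assumptions for illustration, not the group's actual design:

```python
def coupled_impedance(x_l, x_r, v_l, v_r, x_l_d, x_r_d,
                      k=50.0, d=5.0, k_c=20.0):
    """Impedance control for two hands with an extra spring coupling them:
    each hand is pulled toward its own target (stiffness k, damping d),
    and a virtual spring of stiffness k_c keeps the hands' relative
    displacement equal to the desired one. Scalars for clarity; on the
    robot these would be 3-D Cartesian vectors."""
    rel_err = (x_l - x_r) - (x_l_d - x_r_d)
    f_l = -k * (x_l - x_l_d) - d * v_l - k_c * rel_err
    f_r = -k * (x_r - x_r_d) - d * v_r + k_c * rel_err
    return f_l, f_r

# Both hands at their targets and at rest: no force commanded.
print(coupled_impedance(0.1, -0.1, 0.0, 0.0, 0.1, -0.1))
```

If one hand is pushed away, the coupling term also recruits the other hand, which is the point of coupling them.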

You are more than welcome to hack with us! :D So far we are YARP and iCub noobs, so any assistance the Wise Ones can give is greatly appreciated.

Attention & vision

This group works on rationalizing some of the visual processing and the attention system on log-polar images.


  • Giorgio Metta
  • Francesco Rea
  • Vadim Tikhanoff
  • Steve Hart
  • Bjorn Browatzki

It is worth exploring new techniques for visual attention on the humanoid robot iCub. Working on the real robot allows us to tackle and overcome typical problems that arise in robotic applications.

A biologically inspired approach might help to accomplish this challenging task. We propose to start with the well-known Itti & Koch framework, developed to model the attentional system of primates. In this approach, different cues (chrominance uniqueness, luminance uniqueness, etc.) are combined, and the most salient region of the resulting map is selected by a winner-take-all (WTA) mechanism. This region can guide the iCub to attend to different features that will be useful in performing different behaviours (tracking, grasping, etc.).
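
A minimal sketch of the combination + WTA step (plain lists stand in for the real conspicuity maps; the weights and map values are invented):

```python
def most_salient(feature_maps, weights=None):
    """Itti & Koch-style combination: sum weighted feature maps
    (e.g. chrominance and luminance conspicuity) into one saliency map
    and return the winner-take-all location (row, col)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    weights = weights or [1.0] * len(feature_maps)
    best, best_pos = float("-inf"), None
    for i in range(h):
        for j in range(w):
            s = sum(wt * fm[i][j] for wt, fm in zip(weights, feature_maps))
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos

chrominance = [[0.1, 0.9, 0.0],
               [0.0, 0.2, 0.1]]
luminance   = [[0.2, 0.3, 0.0],
               [0.0, 0.1, 0.8]]
print(most_salient([chrominance, luminance]))  # (0, 1): combined 1.2 wins
```

The WTA location would then drive the gaze; on the robot the maps come from the log-polar visual pipeline rather than hand-written lists.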

This framework is enriched with a biologically inspired algorithm designed to extract colourful salient blobs from the robot's cameras. It works in parallel with the rest of the visual attention structure and can be combined with feedback from higher levels of attention, behaviour, and/or reasoning.

Another area of interest in this group is how the iCub can learn control policies for efficiently foveating on and tracking salient objects. We propose an adaptive approach in which the robot explores "saccade" and "smooth-pursuit" actions with its eyes and head to accomplish these goals. Two specific questions will be addressed: (1) can the robot learn models of when a given action is likely to succeed, and (2) what sequences of actions will efficiently solve the task?
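
Question (1) can be sketched as keeping running success statistics per action and context; the contexts, actions, and smoothing choice below are illustrative assumptions, not the group's actual learner:

```python
class GazeActionModel:
    """Running estimate of P(success | action, context) from experience,
    with Laplace smoothing; a context might be e.g. 'target-moving' vs
    'target-static'."""
    def __init__(self):
        self.stats = {}   # (action, context) -> [successes, trials]

    def record(self, action, context, success):
        s = self.stats.setdefault((action, context), [0, 0])
        s[0] += int(success)
        s[1] += 1

    def p_success(self, action, context):
        s, n = self.stats.get((action, context), (0, 0))
        return (s + 1) / (n + 2)   # Laplace prior: unknown -> 0.5

    def best_action(self, actions, context):
        return max(actions, key=lambda a: self.p_success(a, context))

m = GazeActionModel()
for ok in (True, True, False):
    m.record("saccade", "target-static", ok)
for ok in (True, False, False):
    m.record("smooth-pursuit", "target-static", ok)
print(m.best_action(["saccade", "smooth-pursuit"], "target-static"))
# -> saccade
```

Question (2) would then be a search or reinforcement-learning problem over sequences of these actions.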

New ideas and suggestions are welcome in this work group. The area is challenging, and any help enhancing what is already available will be much appreciated.


The ITALK project group

This group will try to run on the real robot two ITALK experiments previously carried out in simulation. The two experiments are:

  • Point, move, and touch experiment (Sugita-Tani experiment)
  • Left and Right experiment


  • Tomassino Ferrauto
  • Onofrio Gigliotta
  • Francesco Nori
  • all ITALK people (in theory)

Status of the work

  • Fixed the software in the ITALK repository to work with the latest updates in the iCub repository.
  • Point, move and touch experiment:
    • Started writing modules. We are currently working on the module that executes the neural network spreading. Once this is done, we need to write further modules to pre-process the sensory data fed to the neural network and to translate the network output into commands for the iCub.
  • Left and Right experiment:
    • Working on the code to make it compile and run with the latest updates in both the iCub and the ITALK repositories. After that, the experiment will be tested again on the simulator (the one being developed in the ITALK project by Gianluca Massera, see here) and then (hopefully) on the iCub.

New people

All the others, who may need to train on YARP first.


  • Bruno Nery (some experience with YARP, but not with the iCub)
  • Erick Swere
  • Paolo Tommasino
  • Fouzhan Hosseini
  • Mandana Hamidi
  • Onofrio Gigliotta
  • Kiril Kiryazov
  • Vikram Narayan
  • Joseph Salini
  • Cem Karaoguz