Empirical Investigations

This page is based directly on Deliverable D2.1: A Roadmap for the Development of Cognitive Capabilities in Humanoid Robots, Section 17

We have designed a series of experiments that investigate specific phylogenetic skills and ontogenetic development processes associated with the scenarios detailed above, especially the early ones. Since we wish to be as faithful as possible to natural development in humans, these investigations are scripted versions of the way a psychologist would interact with a young infant during a series of typical sessions, and they set out the behaviour that he or she would expect the infant to exhibit.

In these early experiments, we do not require the iCub to be able to re-position itself by crawling. Instead, the iCub sits in a special chair that supports the head and legs while leaving the arms free to move. We assume that the visual backdrop is a homogeneously coloured field and that the acoustic environment is not noisy.

Looking

We begin by establishing the iCub’s capabilities in looking.

  1. Saccades and gaze redirection
    A face pattern is introduced into the peripheral visual field (30° from the centre). The visual angle corresponds to that of a real face at 0.5 m. When this happens, the iCub moves its eyes and head to position the face at the centre of the visual field. Both start at the same time, but the eyes arrive at the new position first. Once the eyes have reached their final position and the head continues to move, the gaze stays on the fixated object while the eyes counter-rotate until they look straight ahead again (a minimal sketch of this eye-head coordination follows the notes below). The same thing should also happen when a colourful object (3°–8° visual angle) is introduced into the visual field or when a sounding object is introduced to the side of the robot (30°–50°). New objects that the robot has not seen before will attract the gaze more than familiar objects.
  2. Gaze redirection and fixation
    The robot turns its head (10°–20°) while fixating an object or a face (10°–30°). The eyes of the robot will then counter-rotate so that the gaze is unaffected by the head movement (learning may be involved).
  3. Saccades, gaze redirection, and dynamic fixation (tracking)
    An object moves into the visual field. Its average velocity is 8°–25°/s. The robot makes a saccade to the object and then starts tracking it. The tracking will involve both head and eyes. When the object makes repetitive turns, the robot should turn its eyes with the motion with no lag. When a turn is unexpected, a lag is acceptable, but not one greater than 0.1 seconds. The gaze adjustments may have a smaller amplitude than the object motion; the difference will then be compensated by catch-up saccades to the object. Learning is involved: with training, the amplitude of the gaze adjustments will become better matched to the object motion.
  4. Minimization of saccade correction by learning: tracking through occlusion
    An object moves in the visual field and is temporarily occluded behind some other object. The robot stops its eyes at the disappearance point and then makes a saccade to the other side of the occluder. The saccade will predict when and where the object will reappear (see the prediction sketch after the notes below).

A few notes are in order. First, it is clear that capability for smooth pursuit with prediction is required. Second, performance improvement by learning should be possible. Third, tracking through occlusion implies the modulation of (or action selection from) two distinct capabilities: smooth pursuit and saccade.
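
To make the eye-head coordination in items 1 and 2 concrete, here is a minimal one-dimensional sketch in Python. The speed limits, time step, and function names are illustrative assumptions rather than part of the deliverable: the eyes saccade quickly towards the target, the head follows more slowly, and the eyes counter-rotate so that the gaze (head angle plus eye angle) stays on the target while the head catches up.

    # Illustrative speed limits in degrees per second; real values would
    # be tuned on the robot.
    EYE_SPEED = 300.0
    HEAD_SPEED = 100.0

    def gaze_shift(target_deg, dt=0.01, steps=300):
        """Simulate a 1-D gaze shift: fast eye saccade, slower head
        movement, and eye counter-rotation that keeps the gaze
        (head + eye) on the target once it has been acquired."""
        head, eye = 0.0, 0.0
        for _ in range(steps):
            gaze = head + eye
            # The eyes move first: drive the gaze error to zero at
            # saccadic speed.
            eye += max(-EYE_SPEED, min(EYE_SPEED, (target_deg - gaze) / dt)) * dt
            # The head follows more slowly towards the same target...
            head_step = max(-HEAD_SPEED,
                            min(HEAD_SPEED, (target_deg - head) / dt)) * dt
            head += head_step
            # ...while the eyes counter-rotate by the same amount, so the
            # gaze stays on the fixated target and the eyes finish
            # looking straight ahead again.
            eye -= head_step
        return head, eye, head + eye

    # Example: a face appears 30 degrees into the peripheral field.
    head, eye, gaze = gaze_shift(30.0)
    print(f"head={head:.1f} deg, eye={eye:.1f} deg, gaze={gaze:.1f} deg")

With these numbers the gaze acquires the target in a fraction of a second while the head is still turning, matching the ordering described in item 1.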
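
Item 4 hinges on predicting when and where the occluded object will reappear. A minimal constant-velocity extrapolation, with hypothetical names and degrees of visual angle as units, might look like this:

    def predict_reappearance(disappear_pos, velocity, near_edge, far_edge):
        """Assume the object keeps its pre-occlusion velocity and return
        (time hidden, reappearance position), or None when there is no
        motion to extrapolate."""
        if velocity == 0:
            return None  # keep fixating the disappearance point instead
        exit_edge = far_edge if velocity > 0 else near_edge
        return (exit_edge - disappear_pos) / velocity, exit_edge

    # Example: the object vanishes at 10 deg, moving at 15 deg/s, behind
    # an occluder spanning 10 to 20 deg of the visual field.
    eta, where = predict_reappearance(10.0, 15.0, 10.0, 20.0)
    print(f"saccade to {where:.0f} deg; expect the object in {eta:.2f} s")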

Reaching

We next proceed to address the iCub’s ability in reaching. The situation is as above.

  1. Reaching towards a visual target (hand)
    The robot extends one of its arm-hands into the visual field and then turns its head towards it. The robot will move the arm and try to keep its eyes on the hand all the time (again, learning may be involved in this). Both arms should be involved in this activity (first single limbs, then both limbs simultaneously). The robot should touch the other arm or hand when it is looking at it.
  2. Reaching towards a visual target (body)
    The robot moves its arms to different parts of its own body and touches them. The hand opens up before or during the extension of the arm. This activity is carried out both when the robot looks at the body part in question and when it does not. The purpose of this activity is to build a body map (again, learning may be involved; a minimal body-map sketch follows the notes below). The iCub will also touch body parts that lie outside the visual field.
  3. Reaching towards a visual target (moving object)
    A ball or a cube (4–5 cm in diameter) is presented on a string or stick and gently moved up and down in front of the eyes. The robot turns its eyes and head towards it. It also extends one (or both) arms towards the object. The hand opens up during the extension of the arm and the fingers extend to make the touch surface larger. When the robot learns to reach, it might be an advantage to make the iCub always start the approach from a similar position. We have observed that infants tend to retract the hand closer to the body between attempts to get to the object, but they do not seem to have a favourite lateral or vertical starting position. Another simplification of the reaching task is to lock the elbow joint. This has been reported in the literature but we have not observed it. It is possible that in special situations, where the object is at a position where it can be attained without adjusting the elbow joint, the infant will only adjust the shoulder joint. When the hand of the robot touches the object, this activity will be repeated again and again with variation (that is, the robot retracts the hand a bit and makes a new approach) (again, learning is involved). If the object is to the right, the right hand will be involved, and if the object is to the left, the left hand will be involved. If the object is positioned straight ahead, one or both arms will extend towards it. Note that the focus of pre-reaching activity is on the arm; the hand acts as a feeler.
  4. Learning efficient reaching & learning when not to reach
    The distance and lateral position of the ball or cube are varied from half the length of the arms to 1.5 times the length of the arms. The iCub will learn to plan an efficient trajectory to the object. To begin with, only a part of the trajectory will be planned ahead; at the end of this part, a new segment will be planned, and so on. In the end, a continuous movement to the goal will be performed (a planning sketch follows the notes below). If the distance to the object is greater than the reach of the arms, the robot will not reach for the object.

Again, some notes are in order. Turning the head toward the arm-hand as it enters the field of view is based on both visual and proprioceptive data. It implies a capability for hand detection and hand localization. The bimanual behaviour should be emergent. Moving the arm to different parts of the iCub body and touching them implies both haptic and force feedback. Note that the iCub is not yet equipped with haptic sensing.
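
The body map of item 2 can be caricatured as a table that pairs arm joint configurations with the body location touched in that pose. The joint angles, the rounding used to pool nearby poses, and the labels below are hypothetical; the real map would be learned over continuous proprioceptive (and, eventually, haptic) signals.

    body_map = {}

    def record_touch(joint_angles, body_part):
        """Remember which body part was touched in this arm pose."""
        body_map[tuple(round(a, 1) for a in joint_angles)] = body_part

    def recall_touch(joint_angles):
        """Look up the body part previously associated with a nearby pose."""
        return body_map.get(tuple(round(a, 1) for a in joint_angles))

    record_touch((0.4, 1.2, 0.7), "left knee")  # touch observed during babbling
    print(recall_touch((0.41, 1.21, 0.69)))     # nearby pose -> "left knee"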
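
The piecewise planning and the reach/no-reach decision of item 4 can be summarised in a short sketch. The arm-length test comes from the text; the 5 cm segment length, the straight-line interpolation, and all names are illustrative assumptions.

    import math

    def plan_reach(hand, target, arm_length, segment=0.05):
        """Plan a reach as a chain of short segments towards the target
        (positions in metres, relative to the shoulder), refusing
        targets that lie beyond the arm's length."""
        if math.dist((0.0, 0.0, 0.0), target) > arm_length:
            return []  # out of reach: do not start the movement
        waypoints = [hand]
        while math.dist(waypoints[-1], target) > segment:
            (x, y, z), (tx, ty, tz) = waypoints[-1], target
            frac = segment / math.dist(waypoints[-1], target)
            # Plan only the next short segment; later segments are
            # planned as earlier ones finish, converging on one
            # continuous movement to the goal.
            waypoints.append((x + (tx - x) * frac,
                              y + (ty - y) * frac,
                              z + (tz - z) * frac))
        return waypoints + [target]

    path = plan_reach((0.0, -0.2, 0.1), (0.3, 0.1, 0.2), arm_length=0.5)
    print(f"{len(path)} waypoints" if path else "target out of reach")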

Reach and Grasp

We now proceed to consider reach and grasp. The iCub sits independently.

  1. Reaching to a fixated static object
    Objects of different sizes are introduced into the visual field of the iCub. The iCub extends one or both hands towards the object and then grasps it. The duration of the approach will be 3 seconds or less. The robot hand will slow down towards the end of the approach, and just before grasping the object the velocity will be close to zero. The iCub will fixate the object to be grasped throughout the approach.
  2. Grasp closure during approach
    The hand will first open up during the approach to the object and then begin to close around it. All fingers will be engaged. To begin with, the hand will open to its full extent during the approach before starting to close. Later on during training, the maximal opening of the hand will be adjusted to the size of the object. The maximum opening of the hand should always be larger than the object to be grasped, to make it easier to slide the hand over the object. It is important that the grasping begins before the touch; otherwise there is a risk that the hand of the robot will push the object away on contact. The last part of the closing of the hand will take place while the iCub’s hand is in contact with the object (an aperture sketch follows the notes below). If the object is large (around 10 cm in diameter or more), both hands will participate in grasping it. In order not to have the two hands compete for the object, it might be desirable to develop some laterality.
  3. Matching grasp pose to an object’s axis of symmetry
    Objects of different forms are introduced into the visual field of the iCub (cylinders with 2 cm and 5 cm diameters, an egg-shaped object with a maximum diameter of 6 cm, an irregular object, and a soft and a hard object). The robot hand will rotate during the approach in order to grasp the object over the most convenient opposition space. If the object is a rod, the grasp will take place around its longitudinal axis.
  4. Reaching to a fixated moving object
    The object to be grasped moves. The velocity of the object motion will vary from 5 to 60 cm/s. The object will approach on either a vertical or a horizontal trajectory. The hand moves towards a future position of the object where the hand and the object will meet (see the interception sketch after the notes below). If the object comes from the left, it is the right arm-hand that will grasp it, and if it comes from the right, it is the left arm-hand that will grasp it. The other hand will help to secure the object after the active hand has caught it (or stopped it).
  5. Pincer grasp
    Small round objects (0.5 to 2.0 cm diameter) will be introduced into the visual field. The iCub will then only engage the thumb and the index finger in the act of grasping them.
  6. Bimanual manipulation and experimentation
    After the object is grasped, the robot will examine the object by turning it around. Both hands will participate in this activity. One hand will hold the object in a fixed position while the other hand is moved over it in order to feel its surface and examine its interior. The iCub grasps the object and drops it on the floor while looking. The iCub picks it up again, rubs it on the floor and bangs it against the floor, tries to roll it, squeezes it, and moves it between the hands while looking. Through this activity the robot will build an object representation of familiar objects.
  7. Hand-to-hand transfer
    The object will be transferred from one hand to the other while the robot fixates the object (maybe also transferred repeatedly between the hands). The transfer should be as smooth and continuous as possible. This means that the delivering hand should let go of the object at the same time as the receiving hand grasps it.
  8. Hand and arm object relocation to a fixation point via intermediate landmarks
    After grasping an object, the robot will move it to another position and deposit it there. The robot will turn its gaze towards the goal position of the action while the object is moved there. If the object is moved to its final position via an obstacle, the robot will fixate the obstacle and when the hand with the object has cleared the obstacle, the gaze will go to the final position.

Right hand reaching for objects on the right (and, similarly, left for those on the left) should not be pre-programmed but should be determined through action selection. The counterpart of this is that the right hand should reach for objects moving from the left (and vice versa, left reaching for those moving from the right). All of these behaviours should be a consequence of some predictive or anticipatory capability which modulates the action selection.
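
The aperture schedule of item 2 can be pictured as a function of approach progress: open early, peak wider than the object before contact, then close onto it. The 70/30 phase split, the 2 cm margin, and the 12 cm maximum opening below are invented for illustration.

    def hand_aperture(object_width, progress, margin=0.02, max_open=0.12):
        """Hand opening in metres for approach progress in [0, 1]."""
        peak = min(max_open, object_width + margin)  # wider than the object
        if progress < 0.7:
            # Opening phase: widen towards the peak aperture.
            return peak * progress / 0.7
        # Closing begins before contact, so the fingers do not push the
        # object away at the moment of touch; the final closure happens
        # while the hand is already on the object.
        return peak - (peak - object_width) * (progress - 0.7) / 0.3

    for p in (0.0, 0.35, 0.7, 0.85, 1.0):
        print(f"progress={p:.2f}  aperture={hand_aperture(0.05, p) * 100:.1f} cm")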
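
Item 4, together with the hand-selection behaviour described above, reduces to predicting a meeting point and picking the hand on the far side of the motion. The constant-velocity extrapolation and the hand speed below are illustrative assumptions.

    import math

    def choose_hand_and_meet_point(obj_pos, obj_vel, hand_speed=0.6):
        """Select the catching hand from the object's direction of motion
        (an object coming from the left, i.e. moving rightwards, is met
        by the right hand, and vice versa) and extrapolate a meeting
        point. Positions in metres, velocities in m/s, +x to the right."""
        hand = "right" if obj_vel[0] > 0 else "left"
        # Rough prediction: the time the hand needs to cover the current
        # distance, then where the object will be after that time.
        t = math.hypot(*obj_pos) / hand_speed
        meet = (obj_pos[0] + obj_vel[0] * t, obj_pos[1] + obj_vel[1] * t)
        return hand, meet

    hand, meet = choose_hand_and_meet_point((-0.3, 0.2), (0.25, 0.0))
    print(f"{hand} hand aims for ({meet[0]:.2f}, {meet[1]:.2f})")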

Reach and Posture

Once these capabilities have been demonstrated, we move on to consider reaching and posture. In this case, the iCub sits without support:

  1. Exhibiting compensation for inertia and gravity;
  2. Leaning forward;
  3. Using the other hand to counterbalance.

Postural Control in Action

Similarly, the next stage in the development of the iCub deals with postural control in action. Here, the iCub sits independently and moves by crawling:

  1. Crawls and prepares a reach during crawling. The iCub manages a transition from crawling to sitting.
  2. Sitting and balancing.
  3. Balancing during action. The iCub adjusts its posture: the body is stabilized so that, when the iCub grasps with one hand, the other hand counterbalances (a toy balance sketch follows this list).
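
The counterbalancing in item 3 amounts to keeping the centre of mass over the support base. A toy one-dimensional version, with made-up masses and positions, is sketched below.

    def counterbalance_offset(reach_mass, reach_x, other_mass):
        """When one arm of mass reach_mass (kg) extends to reach_x (m)
        from the trunk axis, return where the other arm should move so
        that the arms' combined moment about the trunk cancels."""
        return -(reach_mass * reach_x) / other_mass

    x = counterbalance_offset(reach_mass=0.8, reach_x=0.3, other_mass=0.8)
    print(f"move the other arm to x = {x:.2f} m")  # -> -0.30 m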

Object Containment

The next stage is to consider object containment.

The iCub sits independently in front of two objects, one smaller and one larger and hollow. The smaller object can be fitted into the larger one.

  1. The iCub picks up one of the objects and inspects it visually from several viewpoints. The iCub picks up the other object with the other hand and inspects it from several viewpoints. It then turns one of the objects so that it fits into the other (a fit-checking sketch follows).
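
The "turn it until it fits" step can be caricatured as testing the small object's dimensions against the opening in each axis-aligned orientation. The box-shaped approximation and the sizes below are assumptions for illustration only.

    from itertools import permutations

    def fitting_orientation(small_dims, opening_dims):
        """Return an ordering of the small object's sides (metres) whose
        leading cross-section passes through a rectangular opening, or
        None if no axis-aligned orientation fits."""
        w, h = sorted(opening_dims)
        for dims in permutations(small_dims):
            # The first two sides form the cross-section that must clear
            # the opening; the third side is the insertion axis.
            if min(dims[0], dims[1]) <= w and max(dims[0], dims[1]) <= h:
                return dims
        return None

    print(fitting_orientation((0.04, 0.04, 0.10), (0.05, 0.06)))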

Pointing and Gesturing

Finally, we consider pointing and gesturing.

The iCub sits in front of a human partner. An object is situated between them.

  1. The iCub turns its head and eyes toward the partner’s face, then towards the object, and then towards the partner again. The iCub then opens its hand with the palm up and moves its upper body forward, as if wanting the partner to give it the object.

A Comprehensive Experiment

The following experiment is designed to demonstrate the integration of all work-packages.

The robot is sitting in front of a human partner and there are two objects between them. The distance to the partner is 2 metres.

  1. The iCub turns to look at one of the objects with head-eyes. It raises its right arm-hand and points to the attended object. It then assumes a crawling posture and crawls up to the objects. During the last stride the right arm is lifted (predictively).
  2. When it arrives at the object, it assumes a sitting position, grasps the object and hands it to the human partner. This is repeated with the other object.
  3. The human partner then picks up one of the objects and holds it out towards the iCub, which opens its hand and grasps the object.
  4. After this, the human partner picks up the other object and hands it to the iCub, which transfers the first object to the other hand before receiving it.
  5. Then the human partner turns his/her head and eyes toward one of the objects and points at it. The iCub turns its head and eyes toward the same object. The human partner then extends one of his/her arms, points to the object, and places the hand in a begging posture. The iCub picks up the object and hands it to the human partner.
  6. Now the human partner and the iCub have one object each. The human partner picks up his/her object and drops it into one of two buckets. After this the iCub picks up the other object and drops it into the other bucket (the gaze should move to the goal, not track the action).