VVV09 Manipulation Group

From Wiki for iCub and Friends
Revision as of 15:30, 24 July 2009 by VVV09 (talk | contribs) (Grasping team)

A place for meetings and discussion for people interested in doing manipulation with the iCub. Topics:

  • Reaching
  • Grasping
  • 3-D vision

Last year's reaching/grasping group: Link

Ideas / Message wall

  • Demo idea: Two-handed manipulation, putting lego pieces together using both arms of the iCub.
  • A git repository for sharing our code is ready! Please send me (Alexis) your public ssh key (usually ~/.ssh/id_dsa.pub or ~/.ssh/id_rsa.pub). Then you will be able to pull and push to the GIT repository. Instructions below!

Divide and conquer

We had a small informal meeting on Tuesday (21 July) before lunch and came up with a possible work distribution:

Perception team

This team has the following goals:

  • First, we detect the lego pieces on the table; after grasping them, we need to detect them again in the hand, to see where they are actually held.
  • We need a 3D position in the coordinate frame of the cameras (or better, of the iCub).
  • Initially, we can keep the eyes fixed, but moving the head and eyes would be a nice plus.
  • Matteo has a working ball tracker that might be adjusted (if you want to use that, please see: 3D_ball_tracker).
  • We could put the legos together 'with eyes closed' (look first, then move the pieces together), or use visual servoing to improve the process.
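The 3-D position goal above can be sketched with simple stereo triangulation. A minimal sketch, assuming a rectified stereo pair with fixed, parallel eyes; the focal length, principal point, and baseline used here are illustrative placeholders, not calibrated iCub values:

```cpp
#include <cassert>
#include <cmath>

// Point expressed in the left-camera frame (meters).
struct Point3 { double x, y, z; };

// Recover a 3-D point from matched pixel coordinates in a rectified
// stereo pair.  f: focal length (pixels), (cx, cy): principal point,
// baseline: camera separation (meters).  All placeholders here.
Point3 triangulate(double uL, double vL, double uR,
                   double f, double cx, double cy, double baseline)
{
    double disparity = uL - uR;           // pixels; > 0 for points in front
    double z = f * baseline / disparity;  // depth along the optical axis
    double x = (uL - cx) * z / f;
    double y = (vL - cy) * z / f;
    return Point3{x, y, z};
}
```

Moving the head and eyes would then require composing this camera-frame point with the current head kinematics to express it in the iCub root frame.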


  • Matteo
  • Jakob
  • Federico T.
  • Giovanni

Here you can find a set of images illustrating possible operating conditions for the grasp demo. They are also in the git repository, under vvvmanipulation/perception/testImages2009_07_21.

Grasping team

This team has the following goals:

  • Make a small library of grasps (e.g. 3-finger pinch, 2-finger pinch, power grasp).
  • When a grasp movement is being executed, monitor the positions and currents of the hand -> detect contact with the object and stop.
  • When holding the object, monitor the currents/positions and figure out if the piece is falling down. -> Learn a classifier using joint positions/currents?
  • Avoid destroying the iCub's hand!
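The "small library of grasps" in the first bullet could start out as a named catalogue of target hand postures. A minimal sketch; the joint vectors that would be stored (and their ordering) are up to the team, and nothing here is tied to the real iCub control interfaces:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Named catalogue of target finger-joint configurations.
// The angle values a caller stores are illustrative placeholders,
// not calibrated iCub postures.
class GraspLibrary {
public:
    void add(const std::string& name, const std::vector<double>& joints) {
        grasps_[name] = joints;
    }
    const std::vector<double>& get(const std::string& name) const {
        auto it = grasps_.find(name);
        if (it == grasps_.end())
            throw std::out_of_range("unknown grasp: " + name);
        return it->second;
    }
private:
    std::map<std::string, std::vector<double>> grasps_;
};
```

Keeping the grasp definitions in one place like this makes it easy to add the pinch/power variants later without touching the monitoring code.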


  • Julian
  • Theo
  • Yan
  • Kail

We tried to detect a collision with an object by observing the motor currents, but the data is very noisy. A better approach is to observe the PID error and threshold it. We will implement both and check how we can use them.
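A minimal sketch of the thresholding idea, requiring the PID error to stay above the threshold for several consecutive samples so that single noisy spikes are ignored; the threshold and window length are tuning parameters, not values measured on the robot:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Flag a collision for one joint when its PID tracking error exceeds
// `threshold` for at least `window` consecutive samples.  A single
// noisy spike resets the run counter and is ignored.
bool collisionDetected(const std::vector<double>& pidError,
                       double threshold, std::size_t window)
{
    std::size_t run = 0;
    for (double e : pidError) {
        run = (std::fabs(e) > threshold) ? run + 1 : 0;
        if (run >= window) return true;
    }
    return false;
}
```

The same structure could be applied to the motor currents, so both tests can share one implementation while we compare them.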

We are going to provide a grasping library that reduces the setting of grasp and pregrasp positions to two degrees of freedom. Furthermore, we provide routines for grasping, including checks of encoder data, motor currents, and controller error, in order to see whether a collision (= object grasped) has happened.
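The "two degrees of freedom" could, for instance, be two blending parameters, one for the pregrasp and one for the grasp, each interpolating between two reference postures A and B. A sketch of such an interpolation (the exact parameterization is still to be decided, and the joint values are placeholders):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Blend two reference hand postures: t = 0 gives poseA, t = 1 gives
// poseB, intermediate values interpolate joint-wise.  Both vectors
// must have the same length.
std::vector<double> interpolate(const std::vector<double>& poseA,
                                const std::vector<double>& poseB,
                                double t)  // t in [0, 1]
{
    std::vector<double> out(poseA.size());
    for (std::size_t i = 0; i < poseA.size(); ++i)
        out[i] = (1.0 - t) * poseA[i] + t * poseB[i];
    return out;
}
```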

Draft of our code structure:

 class Grasp
 {
 public:
     int toPreGraspPosition(double interpolationParam);
     int doGrasping(double interpolationParam);
     int checkEncoders();
     int checkPIDError();
     int checkCollision();
     int checkCurrents();
     bool objectSuccessfullyGrasped();
 
 private:
     int getValues();  // read currents, encoder data, PID controller error
 
     std::vector<double> encoders, currents, PIDError;
     std::vector<double> graspParameters;
     bool currentThresholdExceeded[NUMBER_OF_JOINTS];
     double *prePoseA, *prePoseB, *graspPoseA, *graspPoseB;
 };

The collision checking has been tested on both robots. It works great on the black (evil) robot, but not so well on the white (good) robot. The reason is that the black (evil) robot has position sensors in the finger joints, so a rise in the PID error can be recognized much more reliably.

Reaching team

This team has the following goals:

  • Use inverse kinematics to get the hand in the right position and orientation for grasping.
  • After grasping the piece, bring it close to the face, and rotate it until the perception team finds the exact position of the piece in the hand.
  • Redefine the tool of the arm as the position of the piece, and move in Cartesian coordinates based on it.
  • Put the two pieces together; detect the event by monitoring the force sensors, or by vision?
  • Try the new iKin-based inverse kinematics, developed by Ugo Pattacini.
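Redefining the tool as the piece amounts to composing the hand pose with the piece-in-hand offset found by the perception team. A minimal sketch using a plain rotation matrix and translation vector; in the real system these would come from the Cartesian interface, and all numbers are illustrative:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;  // row-major rotation matrix

// Position of the piece in the robot root frame, given the hand pose
// (rotation R_hand, translation t_hand, root frame) and the piece
// position p_piece expressed in the hand frame:
//   p_root = R_hand * p_piece + t_hand
Vec3 pieceInRoot(const Mat3& R_hand, const Vec3& t_hand, const Vec3& p_piece)
{
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = R_hand[i][0] * p_piece[0]
               + R_hand[i][1] * p_piece[1]
               + R_hand[i][2] * p_piece[2]
               + t_hand[i];
    return out;
}
```

Once the piece pose is known in the root frame, Cartesian motions can be commanded for the piece itself rather than for the palm.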


  • Alexis
  • Federico
  • Boris

git repository

  • We will use GIT locally (inside the summer school) for now, since access to the CVS server on SourceForge is very slow. But we will integrate the working system into the iCub repository before we finish next week (Giorgio's suggestion).

Install git on your laptop, give Alexis your public ssh key, and then do the following:

 git clone gitosis@

You should get a directory called vvvmanipulation

When you make your changes, remember to do:

 git add files
 git commit            #Add some good comment
 git pull --rebase     #This synchronizes with the server
 git push              #Sends your changes to the server
  • Remember to tell git your name and email (only once):
 git config --global user.name "James Bond"
 git config --global user.email jb@icub_rules.com
  • The IP of the server changed! Please reconfigure the URL by running this in the repository:
 git config remote.origin.url gitosis@