VVV09 Manipulation Group

A place for meeting and discussion for people interested in doing manipulation with the iCub. Topics:

  • Reaching
  • Grasping
  • 3-D vision

Last year's reaching/grasping group: Link

Ideas / Message wall

  • Demo idea: two-handed manipulation, putting Lego pieces together using both arms of the iCub.
  • Please add your name to the participant lists, and feel free to form subteams and reorganize the goals and possible solutions.
  • A git repository for sharing our code is coming soon!
  • Let's try to sit together on Wednesday so we can work together faster.

Divide and conquer

We had a small informal meeting on Tuesday (21.7.) before lunch and came up with a possible work distribution:

Perception team

This team has the following goals:

  • First, detect the Lego pieces on the table; after grasping, detect them again in the hand, to see where they are really held.
  • We need a 3D position in the coordinate frame of the cameras (or better, of the iCub); a minimal triangulation sketch follows this list.
  • Initially we can keep the eyes fixed, but moving the head and eyes would be a nice plus.
  • Matteo has a working ball tracker that might be adapted (if you want to use it, see 3D_ball_tracker: http://mediawiki.isr.ist.utl.pt/wiki/3D_ball_tracker).
  • We could put the Legos together 'with eyes closed' (look first, then move the pieces together open-loop), or use visual servoing to make the process more robust.
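
As a starting point for the 3D position, here is a minimal triangulation sketch in C++. It assumes a rectified stereo pair and a simple pinhole model; the focal length, principal point, and baseline values are placeholders, not the iCub's real calibration, and the pixel coordinates would come from whatever detector (e.g. the ball tracker) we end up using.

  // Minimal stereo triangulation sketch (pinhole model, rectified pair).
  // All calibration numbers below are placeholders, not real iCub values.
  #include <cstdio>

  struct Point3D { double x, y, z; };

  // (uL,vL)/(uR,vR): pixel coordinates of the target in the left/right
  // image; with rectified images vL == vR, so only uR enters the disparity.
  Point3D triangulate(double uL, double vL, double uR,
                      double f, double cx, double cy, double baseline)
  {
      double disparity = uL - uR;          // must be > 0 for a valid point
      double Z = f * baseline / disparity; // depth along the optical axis [m]
      double X = (uL - cx) * Z / f;        // lateral offset [m]
      double Y = (vL - cy) * Z / f;        // vertical offset [m]
      Point3D p = { X, Y, Z };             // in the left-camera frame
      return p;
  }

  int main()
  {
      // placeholder pixel coordinates and calibration numbers
      Point3D p = triangulate(180, 130, 150, 220.0, 160.0, 120.0, 0.068);
      std::printf("target at (%.3f %.3f %.3f) m in the left-camera frame\n",
                  p.x, p.y, p.z);
      return 0;
  }

Going from the camera frame to the iCub root frame additionally needs the head/eye forward kinematics, which is one reason keeping the eyes fixed is the easier first step.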


Participants:

  • Matteo
  • Jakob
  • Federico T.

Grasping team

This team has the following goals:

  • Make a small library of grasps (e.g. 3-finger pinch, 2-finger pinch, power grasp).
  • While a grasp movement is being executed, monitor the positions and motor currents of the hand to detect contact with the object and stop (see the sketch after this list).
  • While holding the object, monitor the currents/positions and figure out whether the piece is slipping. Perhaps learn a classifier over joint positions/currents?
  • Avoid destroying the iCub's hand!
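
A hedged sketch of the contact-monitoring idea, using YARP's remote_controlboard: close one finger joint while watching its motor current, and stop as soon as the current rises above a threshold. The joint index, closing target, current threshold, and port names are all placeholders to be tuned on the real hand (gently, see the last bullet).

  #include <cmath>
  #include <cstdio>
  #include <vector>
  #include <yarp/os/Network.h>
  #include <yarp/os/Property.h>
  #include <yarp/os/Time.h>
  #include <yarp/dev/PolyDriver.h>
  #include <yarp/dev/ControlBoardInterfaces.h>

  int main()
  {
      yarp::os::Network yarp;

      yarp::os::Property opt;
      opt.put("device", "remote_controlboard");
      opt.put("remote", "/icub/right_arm");        // board driving the hand
      opt.put("local",  "/graspMonitor/right_arm");

      yarp::dev::PolyDriver driver(opt);
      yarp::dev::IPositionControl *pos = 0;
      yarp::dev::IAmplifierControl *amp = 0;
      if (!driver.isValid() || !driver.view(pos) || !driver.view(amp))
          return 1;

      const int joint = 11;           // a finger joint (placeholder index)
      const double closedRef = 80.0;  // target angle [deg], placeholder
      const double currThresh = 0.5;  // contact threshold [A], placeholder

      int nAxes = 0;
      pos->getAxes(&nAxes);
      std::vector<double> currents(nAxes);

      pos->positionMove(joint, closedRef);

      bool done = false;
      while (!done) {
          amp->getCurrents(&currents[0]);            // all motor currents
          if (std::fabs(currents[joint]) > currThresh) {
              pos->stop(joint);                      // contact: freeze joint
              std::printf("contact detected, stopping\n");
              break;
          }
          pos->checkMotionDone(joint, &done);        // closed in free air?
          yarp::os::Time::delay(0.01);
      }
      return 0;
  }

The same current/position readings, logged over time, would give the training data for the slip classifier in the third bullet.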

Participants:

  • Julian
  • Theo

Reaching team

This team has the following goals:

  • Use inverse kinematics to get the hand into the right position and orientation for grasping (a Cartesian-control sketch follows this list).
  • After grasping the piece, bring it close to the face and rotate it until the perception team finds the exact position of the piece in the hand.
  • Redefine the tool frame of the arm as the pose of the piece, and move in Cartesian coordinates based on it.
  • Put the two pieces together; detect the mating event by monitoring the force sensors, or by vision?
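
Below is a hedged sketch of commanding a hand pose through the iCub Cartesian interface (cartesiancontrollerclient); it assumes the Cartesian solver/controller modules are already running for the right arm, and the target pose is a placeholder, not a tested grasp pose.

  #include <cmath>
  #include <cstdio>
  #include <yarp/os/Network.h>
  #include <yarp/os/Property.h>
  #include <yarp/os/Time.h>
  #include <yarp/dev/PolyDriver.h>
  #include <yarp/dev/CartesianControl.h>
  #include <yarp/sig/Vector.h>

  int main()
  {
      yarp::os::Network yarp;

      yarp::os::Property opt;
      opt.put("device", "cartesiancontrollerclient");
      opt.put("remote", "/icub/cartesianController/right_arm");
      opt.put("local",  "/reaching/right_arm");

      yarp::dev::PolyDriver driver(opt);
      yarp::dev::ICartesianControl *cart = 0;
      if (!driver.isValid() || !driver.view(cart))
          return 1;

      // Target position [m] in the iCub root frame, and orientation as an
      // axis-angle 4-vector; both are placeholder values.
      yarp::sig::Vector xd(3), od(4);
      xd[0] = -0.3; xd[1] = 0.1; xd[2] = 0.05;
      od[0] = 0.0;  od[1] = 1.0; od[2] = 0.0; od[3] = M_PI;

      cart->goToPoseSync(xd, od);   // request the reach
      bool done = false;
      while (!done) {               // poll until the controller is done
          cart->checkMotionDone(&done);
          yarp::os::Time::delay(0.05);
      }
      std::printf("reach completed\n");
      return 0;
  }

Once the perception team reports the piece's pose in the hand, 'redefining the tool' amounts to composing that pose with the commanded hand pose, so the same interface moves the piece rather than the palm.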

Participants:

  • Alexis
  • Federico
  • Boris