VVV09 The 3D Task Force Action Duo

Whoa! You've reached the 3D Task Force Action Duo Official Web Page.

The 3DTFAD aims at convincing other groups and people that stereo vision is useful for their tasks. It is composed of the following duo:

  • Federico "funny jokes" Tombari
  • Harold "AeroSpace" Martinez

"A dirty job, but someone has to do it"

We have implemented a 3D tracker based on stereo vision.

  • The first step is stereo calibration, needed to perform proper rectification (warping) of the two views. We want rectified pairs even when the robot's eyes move to different positions, so we performed several calibrations at different eye positions and we now interpolate the resulting homographies as a function of the eye deviations (due to the non-perfect alignment of the cameras), which were estimated experimentally (a rough sketch of the interpolation is given after this list).
  • Given rectified stereo pairs, we perform stereo matching to obtain a disparity map. The stereo matching module is based on a simple but very efficient algorithm that uses fixed correlation windows (see the matching sketch after this list).
  • From the disparity map we then compute a range image, using the intrinsic and extrinsic calibration parameters.
  • In addition, the disparity map is fed into a "visual attention module" that was implemented during the summer school. Whenever there is something "interesting" (i.e. close to the robot), this module outputs the 2D coordinates of the center of mass of what stimulates the iCub's attention; if nothing is within the visual attention space, nothing is sent. The module relies on depth filters, temporal smoothing and image processing tools to handle the noisy disparity data (a sketch of the depth filtering is given below).
  • The output of the visual attention module is the input of a tracker that follows the object (or whatever it is) moving within the visual attention field. Currently there are some constraints on the head movements: for the eyes we adjust only the vergence, so as to track along the "z" direction, while the head joints are controlled to track the object in the "x,y" plane (see the last sketch below).

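To give an idea of the homography interpolation, here is a minimal Python/NumPy sketch. It is only a sketch under simple assumptions: a plain element-wise blend between two calibrated rectification homographies, and made-up matrices and angles (the real ones come from the offline calibrations).

  import numpy as np

  def interp_homography(H_a, H_b, t):
      # Blend two 3x3 rectification homographies calibrated at two eye
      # positions, for an intermediate position (t in [0, 1]).
      H = (1.0 - t) * H_a + t * H_b
      return H / H[2, 2]          # keep the usual H[2,2] == 1 normalisation

  # Hypothetical example: calibrations done at 0 and 10 deg,
  # the eye encoder currently reads 4 deg.
  H_0deg  = np.eye(3)
  H_10deg = np.array([[1.0, 0.0,  5.0],
                      [0.0, 1.0, -2.0],
                      [0.0, 0.0,  1.0]])
  H_now = interp_homography(H_0deg, H_10deg, 4.0 / 10.0)
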
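For the matching and range-image steps, a minimal Python/OpenCV sketch along these lines: our own fixed-window correlation code is not shown here, OpenCV's block matcher is used as a stand-in, the file names are placeholders and the calibration numbers are purely illustrative.

  import cv2
  import numpy as np

  # Rectified pair (placeholder file names).
  left  = cv2.imread("left_rect.png",  cv2.IMREAD_GRAYSCALE)
  right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

  # Fixed-correlation-window matching (OpenCV block matcher as a stand-in).
  stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
  disp = stereo.compute(left, right).astype(np.float32) / 16.0  # BM output is fixed-point

  # Reproject disparities to a range image with a 4x4 Q matrix; the real Q
  # comes from the stereo calibration, the numbers below are illustrative.
  f, B, cx, cy = 400.0, 0.068, 160.0, 120.0   # focal [px], baseline [m], principal point
  Q = np.float32([[1, 0, 0, -cx],
                  [0, 1, 0, -cy],
                  [0, 0, 0,  f],
                  [0, 0, 1.0 / B, 0]])
  xyz = cv2.reprojectImageTo3D(disp, Q)       # per-pixel (X, Y, Z) in metres
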
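The depth-filter and center-of-mass part of the attention module can be sketched as follows; the thresholds are illustrative and the temporal smoothing is left out for brevity.

  import cv2
  import numpy as np

  def attention_point(disparity, min_disp=30.0, min_pixels=200):
      # Depth filter: keep only pixels that are close enough to the robot
      # (large disparity), then clean up speckle noise in the mask.
      near = (disparity > min_disp).astype(np.uint8) * 255
      near = cv2.morphologyEx(near, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
      m = cv2.moments(near, binaryImage=True)
      if m["m00"] < min_pixels:
          return None                    # nothing within the attention space
      return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # center of mass (x, y)
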
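Finally, a very rough sketch of the tracking logic: gains, image size and the velocity conventions are made up, and sending the commands to the iCub head joints (via YARP) is not shown.

  def track_step(target_xy, target_depth, img_w=320, img_h=240,
                 desired_depth=0.4, k_head=0.1, k_verg=0.5):
      # Head pan/tilt follow the target in the image plane ("x,y"),
      # eye vergence follows depth ("z"). Simple proportional control.
      x, y = target_xy
      pan_vel  = -k_head * (x - img_w / 2.0)
      tilt_vel = -k_head * (y - img_h / 2.0)
      verg_vel =  k_verg * (target_depth - desired_depth)
      return pan_vel, tilt_vel, verg_vel

  # Hypothetical example: target seen at pixel (200, 100), 0.6 m away.
  print(track_step((200, 100), 0.6))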

Demo

There's a demo on YouTube (thanks Katrin) showing the performance of the tracker:

Demo

Here's another demo we took on the fly at an astonishing 176x144 resolution (we had to cut the audio since Harold was talking on the phone while filming):

Demo2