VVV10 lego builders


Status reports

Thursday 29.07.2010

We gave a demo showing autonomous grasping of a Lego piece with the right arm. The grasp was a simple routine using three fingers. Giacomo finished a more complex hand-control module that will be integrated later. We also started calibrating the left arm, but had to stop to let Ugo and Stephane show their demo. The arm calibration routines were also shown in a separate demo.

Serena showed her 'clapping' behaviour and force detection, as well as some routines for exploring positions for inserting the Lego pieces.

We think it is possible to put the Lego pieces together using visual servoing, but force control is a much better solution. Joining the pieces is essentially a 'peg in the hole' problem.


Wednesday 28.07.2010

We are considering changing our team name to: "The Calibrators!"

Last night, we managed to do autonomous grasping from the table. It required a lot of calibration:

  • arm encoders (for our forward kinematics)
  • cameras (for ARToolKit)
  • hand/eye calibration (of course the previous two did not match)

Timelapse of the night before the demo: this is how the last details were finished, in a night of hacking. The tricky part was writing a module that learns a mapping between the forward kinematics and the hand position obtained from the cameras. In the video you can see the iCub following its hand with the head; while it does so, we are learning the map.
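A minimal sketch of the idea behind that module, assuming the simplest possible map (a constant offset averaged over the collected samples); the actual module and its interface may differ:

 // Sketch: estimate a constant offset between the hand position predicted by the
 // forward kinematics and the hand position seen by the cameras. Names are illustrative.
 #include <array>
 #include <vector>
 
 using Vec3 = std::array<double, 3>;
 
 class OffsetMap {
     std::vector<Vec3> deltas;   // collected (vision - kinematics) samples
 public:
     void addSample(const Vec3& kin, const Vec3& vis) {
         deltas.push_back({vis[0] - kin[0], vis[1] - kin[1], vis[2] - kin[2]});
     }
     // Correct a kinematic prediction with the average offset seen so far.
     Vec3 correct(const Vec3& kin) const {
         if (deltas.empty()) return kin;
         Vec3 mean{0.0, 0.0, 0.0};
         for (const auto& d : deltas)
             for (int i = 0; i < 3; ++i)
                 mean[i] += d[i] / deltas.size();
         return {kin[0] + mean[0], kin[1] + mean[1], kin[2] + mean[2]};
     }
 };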

Repository

Get the team repo like this:

 git clone gitosis@10.0.0.217:vvv10repo.git

For this to work, give Alexis your public SSH key.

Sunday 25.07.2010

We started closing the loop between marker positions and hand positions. In the first experiments we found some offsets that we hope to resolve on Monday. Our code for moving the iCub to watching positions, the reaching code, and the high-level state machine are still working.

The ARToolKit program reliably detects the position of our 4 markers at distances of up to 40-50 cm from the eyes.

We need a bit of extra light (thanks to Paul for the desk lamps).

Since we have two markers on each Lego piece, we have a program that receives their positions and reports the position of the Lego piece even if only one marker is visible, and an average if both are visible. A speed/acceleration filter reduces false positives.
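A minimal sketch of that fusion logic, assuming known fixed offsets from each marker to the piece centre and a simple maximum-speed gate; all names and thresholds are illustrative:

 // Sketch: fuse up to two marker observations into one Lego-piece position and
 // reject estimates that would imply an implausible speed (likely false positives).
 #include <array>
 #include <cmath>
 #include <optional>
 
 using Vec3 = std::array<double, 3>;
 
 static Vec3 add(const Vec3& a, const Vec3& b) { return {a[0] + b[0], a[1] + b[1], a[2] + b[2]}; }
 
 struct PieceTracker {
     Vec3 last{};
     bool hasLast = false;
     double maxSpeed = 0.5;   // [m/s], assumed threshold for the speed filter
 
     // m1/m2: marker positions if visible; off1/off2: offsets from marker to piece centre.
     std::optional<Vec3> update(std::optional<Vec3> m1, std::optional<Vec3> m2,
                                const Vec3& off1, const Vec3& off2, double dt) {
         std::optional<Vec3> est;
         if (m1 && m2) {
             Vec3 p1 = add(*m1, off1), p2 = add(*m2, off2);
             est = Vec3{(p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2, (p1[2] + p2[2]) / 2};
         } else if (m1) {
             est = add(*m1, off1);
         } else if (m2) {
             est = add(*m2, off2);
         }
         if (!est) return std::nullopt;
         if (hasLast) {
             double d = std::hypot((*est)[0] - last[0], (*est)[1] - last[1], (*est)[2] - last[2]);
             if (d / dt > maxSpeed) return std::nullopt;   // implausible jump: reject
         }
         last = *est;
         hasLast = true;
         return est;
     }
 };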

Looking at the Lego pieces on the table:

4818200761_e3b067ec4a.jpg 4818201049_51ec79436e.jpg

We also checked the relative position between the forward kinematics of the arm and the estimated position of a marker. After moving the arm to many positions, we measured an error of ±1.5 cm. Picture of the experiment:

4830129644_fbb13d2f35.jpg

Saturday 24.07.2010

We have tried to extract the best quality images possible from the Dragonfly2 cameras on the iCub.

There are 3 limiting factors:

* Firewire bus bandwidth
* Ethernet bandwidth
* CPU on the PC104

1. Firewire bus bandwidth:

The PC104 has one 1394a (400Mbps) bus. This has to be divided between two cameras.

 An example of a bandwidth calculation:
 # Firewire bandwidth [Mbits/sec] = x_resolution * y_resolution * bytes/pixel * frames/sec * 8 bits/byte / (1024 * 1024)
 # Firewire bandwidth = 640 * 480 * 3 * 30 * 8 / 1024 / 1024 = 210.9375 Mbits/sec

So, if you are working at 640x480 RGB8 (3 bytes/pixel) @ 30fps, you need about 210 Mbps per camera. That would be about 420 Mbps for two cameras, which exceeds the maximum of the bus.

Another option is to use YUV422, which uses 1.5 bytes per pixel. We have added a video type to the icubmoddev executable to support it. Then you need about 105 Mbps per camera.

Even better is to read RAW data from the sensor and apply the debayering on the PC104.

The Dragonfly2 cameras have a 12-bit ADC, so they support RAW8 and RAW16 modes. RAW8 sends 8 bits per pixel, and RAW16 sends 16 bits per pixel. We have added RAW16 support to icubmoddev, with debayering done on the PC104 computer.

The RAW16 mode should get the highest possible quality out of the cameras, but you need to adjust brightness, contrast, white balance, etc., in your software. (The usual adjustments in the framegrabber program tell the camera how to apply the debayering and then transfer RGB or YUV data.)

The bandwidth needed for these RAW modes is:

RAW8:  640 * 480 * 1 * 30 * 8 / 1024 / 1024 = 70.3 Mbits/sec per camera
RAW16: 640 * 480 * 2 * 30 * 8 / 1024 / 1024 = 140.6 Mbits/sec per camera 

The Dragonfly2 camera supports up to 60fps. The bandwidth of the Firewire bus is enough for one camera at 640x480 @ 60fps using RAW16, and for two cameras at 640x480 @ 60fps using RAW8.
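As a quick check of the numbers in this section, here is a small, self-contained calculator (the bytes-per-pixel values are the ones assumed above):

 // Reproduces the bandwidth figures used above.
 // Bandwidth [Mbits/sec] = width * height * bytes/pixel * fps * 8 / (1024 * 1024)
 #include <cstdio>
 
 double bandwidthMbps(int width, int height, double bytesPerPixel, double fps) {
     return width * height * bytesPerPixel * fps * 8.0 / (1024.0 * 1024.0);
 }
 
 int main() {
     const int w = 640, h = 480;
     std::printf("RGB8   @30fps: %6.1f Mbits/sec per camera\n", bandwidthMbps(w, h, 3.0, 30.0));
     std::printf("YUV422 @30fps: %6.1f Mbits/sec per camera\n", bandwidthMbps(w, h, 1.5, 30.0));
     std::printf("RAW8   @30fps: %6.1f Mbits/sec per camera\n", bandwidthMbps(w, h, 1.0, 30.0));
     std::printf("RAW16  @30fps: %6.1f Mbits/sec per camera\n", bandwidthMbps(w, h, 2.0, 30.0));
     return 0;
 }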


2. Ethernet bandwidth

The iCub has one gigabit ethernet connection to the PC104 computer, so we have 1000Mbps available.

icubmoddev converts the images to RGB8 before sending them over the network using YARP, so you always need 3 bytes/pixel.

RGB8: 640 * 480 * 3 * 30 * 8 / 1024 / 1024 = 210.93 Mbits/sec per camera

For two cameras at 640x480 @ 30fps we are sending 421Mbits/sec. Serializing all this data uses a lot of CPU power!


3. CPU consumption

The most efficient camera mode from the CPU point of view is RGB8 since icubmoddev only does a memcpy from the camera buffer to the internal buffer.

The YUV mode does a colour-space conversion (YUV422->RGB8) before doing the memcpy.

The RAW8 and RAW16 modes apply debayering before doing the memcpy.
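For reference, a minimal sketch of what the debayering step involves, assuming an RGGB Bayer pattern and nearest-neighbour interpolation (the actual sensor pattern and the code in icubmoddev may differ; real implementations usually interpolate):

 // Sketch: nearest-neighbour debayering from RAW8 (1 byte/pixel, RGGB pattern assumed)
 // to RGB8 (3 bytes/pixel). Assumes even width and height (e.g. 640x480).
 #include <cstdint>
 #include <vector>
 
 void debayerRGGB(const std::vector<uint8_t>& raw, std::vector<uint8_t>& rgb,
                  int width, int height) {
     rgb.assign(static_cast<size_t>(width) * height * 3, 0);
     for (int y = 0; y < height; ++y) {
         for (int x = 0; x < width; ++x) {
             // Top-left corner of the 2x2 Bayer cell containing (x, y).
             int cx = x & ~1, cy = y & ~1;
             uint8_t r = raw[cy * width + cx];             // R at (cx,   cy)
             uint8_t g = raw[cy * width + cx + 1];         // G at (cx+1, cy)
             uint8_t b = raw[(cy + 1) * width + cx + 1];   // B at (cx+1, cy+1)
             size_t o = (static_cast<size_t>(y) * width + x) * 3;
             rgb[o] = r; rgb[o + 1] = g; rgb[o + 2] = b;
         }
     }
 }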


The mode we settled on:

We were looking for a nice balance between all of these limitations, so we set the cameras like this:

Resolution: 640x480   Format: Format7 0   Color-coding: RGB8

We leave the framerate setting empty because in Format7 the framerate is automatically selected to use the maximum bandwidth. The packet data size is set to 44%, so that even including a little overhead, the two cameras share almost all of the bus bandwidth. The resulting framerate is 19fps.

 Framerate = 19fps

The CPU usage for the icubmoddev process for each camera is ~ 30-40%.

The network bandwidth is 133.6 Mbps per camera.
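This is consistent with the formula above:

 640 * 480 * 3 * 19 * 8 / 1024 / 1024 ≈ 133.6 Mbits/sec per camera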

So we are using 267Mbps for both cameras, or 26.7% of the gigabit ethernet bandwidth.


Camera calibration parameters

We also adjusted the camera calibration parameters in the XML file used by the 'camcalib' program. We recommend that people use the ports:

/icub/camcalib/left/out
/icub/camcalib/right/out

Since the camcalib software runs on another computer, this avoids having multiple connections pulling camera images from the robot.
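A minimal sketch of how a client module could read images from one of these ports (the local port name "/myModule/img:i" is just an example):

 // Sketch: read calibrated images from the camcalib output port over YARP.
 #include <yarp/os/Network.h>
 #include <yarp/os/BufferedPort.h>
 #include <yarp/sig/Image.h>
 
 int main() {
     yarp::os::Network yarp;   // initialise the YARP network
     yarp::os::BufferedPort<yarp::sig::ImageOf<yarp::sig::PixelRgb> > port;
     port.open("/myModule/img:i");                                    // example local port
     yarp::os::Network::connect("/icub/camcalib/left/out", "/myModule/img:i");
     while (true) {
         yarp::sig::ImageOf<yarp::sig::PixelRgb>* img = port.read();  // blocking read
         if (img) {
             // process the calibrated image here
         }
     }
     return 0;
 }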

Wishlist

What we would like best is for the PC104 computer to receive RAW8 (or RAW16) data and send it as-is over the network, reducing both the network bandwidth and the CPU cost of serializing the images. The clients would then apply the debayering on their own. (Or this could be done transparently in the FrameGrabber interface, as suggested by Lorenzo.)


Thursday 22.07.2010

We finally managed to calibrate the cameras of the iCub at 640x480 and get precise values
from the ARToolKit program. More light is good! A flat CalTab is important! Vvv10_camera_calibration

We also got the kinematic tree of the iCub loaded, managed to detect the markers at a useful distance,
and tested the marker-to-world coordinate transformations.
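A minimal sketch of such a transformation, assuming we have the homogeneous transform of the camera in the root frame (from the kinematic tree) and the marker pose in the camera frame (from ARToolKit); the matrices and names are illustrative:

 // Sketch: express a marker pose detected in the camera frame in the robot's root
 // frame by chaining homogeneous transforms: T_root_marker = T_root_cam * T_cam_marker.
 #include <array>
 
 using Mat4 = std::array<std::array<double, 4>, 4>;   // 4x4 homogeneous transform
 
 Mat4 compose(const Mat4& a, const Mat4& b) {
     Mat4 c{};
     for (int i = 0; i < 4; ++i)
         for (int j = 0; j < 4; ++j)
             for (int k = 0; k < 4; ++k)
                 c[i][j] += a[i][k] * b[k][j];
     return c;
 }
 
 // T_root_cam: from the forward kinematics of the head/eye chain.
 // T_cam_marker: from the marker detector.
 Mat4 markerInRootFrame(const Mat4& T_root_cam, const Mat4& T_cam_marker) {
     return compose(T_root_cam, T_cam_marker);
 }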

The iCub looking for Lego pieces:
Vvv10 icub looks at lego.jpg

Here is what it looks like in rviz: Vvv10 Kintree marker.jpg

And an example of a rectified image from the camera; the marker on the hand was detected well. Vvv10 Marker on hand2.jpg


Images

Many images related to the team are in the Flickr gallery: [1]