VVV08 Demos

From Wiki for iCub and Friends

Add pictures / screen-captures / links for your demo here.

Reaching works!!!

After fixing the DH parameters we could finally reach with the right arm and the torso. Some movies:

  • Reaching up: Reachup.png, [reachup movie]
  • Reaching down: Reachdown.png, [reachdown movie]
  • Reaching up to the other side: Reachcross.png, [reachcross movie]
  • And also using both hands: here is a video of the simulated iCub trying to catch a ball.


Both hands movie

Little Yarpers grow up!

So here we are: we went through this summer school starting basically from nothing and reaching... not much, maybe, but still a lot for us! We started from the tutorial code Paul showed us, and we extended it with a face detection algorithm (using the salience module of the iCub) to control the eye motors with velocity commands.

One of the main limits we found was the computational cost of the algorithm. There are basically two modules working at the same time, one for salience detection (in this case we were interested in faces) and one for the motor movement. Since our PCs made it clear they would not run all the processes on a single machine (I can swear I saw my PC telling me it would burn itself alive if I didn't listen...), we had to split the algorithms across different machines, especially for the simulation part.
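The eye control described above can be sketched as a simple proportional controller: the salience module reports where the face is in the image, and the offset between that position and the image centre is turned into pan/tilt velocity commands for the eyes. A minimal sketch (the function name, gain, and sign conventions are hypothetical, not the actual module code):

```python
def eye_velocity_command(face_x, face_y, img_w=320, img_h=240, gain=0.1):
    """Proportional controller: map the detected face position in the image
    to pan/tilt eye velocities that drive the face towards the centre."""
    err_x = face_x - img_w / 2.0   # positive: face is to the right of centre
    err_y = face_y - img_h / 2.0   # positive: face is below centre (image y grows down)
    pan_vel = gain * err_x         # turn the eyes towards the face horizontally
    tilt_vel = -gain * err_y       # tilt up when the face is above centre
    return pan_vel, tilt_vel
```

When the face sits at the image centre both velocities are zero, so the eyes stop moving once the face is fixated.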

These are some pictures showing how the algorithm worked in the simulator (the robot is facing a screen in the virtual world on which it can see a webcam stream; the little box shows the view from one of its cameras):

Face track.jpg modworld_c.jpg

And these are some videos showing how it looked on the real iCub head: video1 video2 video3. I have to admit I never imagined that seeing your code running on a robot, seeing its eyes following your face, could be so breathtaking... These are still small steps, but they give us a lot of optimism for the future! Thanks everyone!

Guys, you forgot to mention that your code works great at detecting the Dalai Lama! :-)<br/> Dalai lama tracker 2.jpg

Playing Paper/Rock/Scissor with the iCub

1) Giacomo: The robot chooses one of Paper, Rock, and Scissors, and then shows its decision by moving its left arm accordingly.

Finally it looks at the human player, detects the opponent's move, and checks who won (it also expresses its feelings after the match ends).

Paperrockscissor.png Paperrockscissor1.png Paperrockscissor2.png

2) Stephane & Jasmin: An all-cheating demo based on speech recognition and text-to-speech using the CSLU Toolkit (a free tool, but Windows-based).

At the beginning the iCub doesn't know the rules of the game (stone > scissors > paper), but after a while it starts to learn. (Here it's cheating, but it works fine with a tiny neural network, so the robot can learn new games, etc.)
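The rule learning can be illustrated with something even simpler than a neural network: a lookup table that records, from observed rounds, which move beats which. This is a hypothetical stand-in for the tiny network mentioned above, not the demo's actual code:

```python
class RuleLearner:
    """Learn the dominance rules of a Paper/Rock/Scissors-like game
    purely by watching who wins each round."""

    def __init__(self):
        self.beats = {}  # move -> the move it was observed to beat

    def observe(self, move_a, move_b, winner):
        """Record one round; winner is 'a', 'b', or 'draw'."""
        if winner == "a":
            self.beats[move_a] = move_b
        elif winner == "b":
            self.beats[move_b] = move_a

    def best_reply(self, opponent_move):
        """Return a move known to beat the opponent's move, or None."""
        for move, beaten in self.beats.items():
            if beaten == opponent_move:
                return move
        return None
```

After watching one win for each rule (stone beats scissors, scissors beats paper, paper beats stone), the learner can pick the winning reply to any announced move, and the same table works unchanged for other games with a beats-relation.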

The next step is to teach the robot how to play Starcraft ^^' ...video coming...

Arm pointer

A human teacher uses his arm to point at objects in a scene, and the iCub follows the arm's direction to get a better look at them.
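The line-fit step of this pipeline can be sketched as an ordinary least-squares fit through the pixels detected as belonging to the arm; extrapolating the fitted line gives the pointing direction. A minimal sketch (function names are hypothetical, not the actual demo code):

```python
def fit_line(points):
    """Ordinary least-squares fit y = m*x + b through the arm pixels."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def pointing_target(points, x_tip):
    """Extrapolate the fitted line to an x coordinate beyond the hand
    to estimate where the arm is pointing in the image."""
    m, b = fit_line(points)
    return m * x_tip + b
```

A vertical arm (all pixels at the same x) would make the denominator zero, so a real implementation would also handle that degenerate case.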

Armpointing original.png Armpointing armdetected.png Armpointing linefit.png Armpointing blobdetect.png Armpointer nobackground.png Armpointer direction.png

iCub grasping a ball

The fruit of the collaboration between the grasping group and the visual attention group:

ICub grasp 08.jpg<br/> Check out this video on YouTube!

Or download the full-length videos here:<br/> VVV08_grasping_0.avi 22MB<br/> VVV08_grasping_1.avi 02MB<br/> VVV08_grasping_2.avi 44MB<br/> VVV08_grasping_3.avi 07MB<br/> VVV08_grasping_4.avi 04MB<br/> VVV08_grasping_5.avi 36MB<br/> VVV08_grasping_6.avi 15MB<br/> If you have trouble playing these videos, try the VLC player.

All these videos were acquired with the camera in the eye of the iCub and rectified with the camcalib module.

You can find more videos here: iCub_videos

The Demo Team (thanks guys, it was so much fun!): Vvv08 grasping and attention0.jpg Vvv08 grasping and attention.jpg