DemoAff

This page summarizes the steps necessary to install and run the affordances demo.

Programmers: Luis Montesano and Manuel Lopes

This demo shows the functionalities developed under WP4 and presented in: Learning Object Affordances: From Sensory Motor Maps to Imitation. Luis Montesano, Manuel Lopes, Alexandre Bernardino, and José Santos-Victor. IEEE Transactions on Robotics, vol. 24(1), 2008.

This demo is being developed and tested under Linux (Ubuntu, kernel 2.6.22-14-server).

Library dependencies:

ARToolKit
Note: ARToolKit seems to have a bug under Linux; it works under Windows XP (32-bit).
Install the GLUT dependencies:
sudo apt-get install glutg3
sudo apt-get install glutg3-dev
sudo apt-get install libxmu-dev
Download ARToolKit (see http://www.hitl.washington.edu/artoolkit/documentation/usersetup.htm#comp_linux) and build it:
tar zxvf ARToolKit-2.71.tgz
cd {ARToolKit}
./Configure
make
Then copy the AR directory under include to /usr/include and copy the libraries to /usr/lib.
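For example, from the top of the ARToolKit tree the copy step might look like the following (assuming the standard ARToolKit-2.71 layout, with headers under include/AR and the static libraries under lib/):
sudo cp -r include/AR /usr/include/
sudo cp lib/*.a /usr/lib/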
PNL
Note: PNL works only under Linux 32-bit and under Windows XP with VC 6.0.
Download the library source code from https://sourceforge.net/projects/openpnl/ and unpack it, then build and install:
chmod +x configure.gcc
./configure.gcc
make
make install
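Note that make install usually needs root privileges (run it with sudo if necessary). If the PNL libraries end up under /usr/local/lib (an assumption; it depends on the configured prefix), make sure the dynamic linker can find them, for example:
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
or add the directory to /etc/ld.so.conf and run sudo ldconfig.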
OpenCV
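On an Ubuntu release of that vintage, OpenCV can usually be installed from the package repositories (the package names below are an assumption and may differ between releases; alternatively, build OpenCV from source):
sudo apt-get install libcv-dev libcvaux-dev libhighgui-dev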

Module dependencies:

artoolkittracker - a generic module to track markers, based on ARToolKit
camshiftplus     - the OpenCV camshift tracker, which also provides information about the contours of objects
camCalib         - provides calibrated images
iCubInterface2   - the robot motor interface
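Once the YARP name server and the modules above are running, you can check that their ports have been registered before connecting them, for example with:
yarp name list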

Running the demo:

  1. Put the three markers on the table. One marker corresponds to the location where people perform the demonstration, and the other two correspond to the positions where objects will be presented to the robot to "imitate" the demonstration.
  2. Start the drivers and modules:
    1. image grabber
    2. motor interface
    3. artoolkittracker
    4. camshiftplus
  3. Connect the modules (example yarp connect commands are sketched after this list):
    1. grabber to camshiftplus
    2. grabber to artoolkittracker
    3. camshiftplus to demoAff
    4. artoolkittracker to demoAff
    5. demoAff to iCubInterface
  4. Put an object in the demonstration place.
  5. Perform the demonstration. For now we consider two actions: tap and grasp. Note that the start of the action is detected by measuring a 10% change in the area of the object, so avoid fast lateral taps.
  6. Put one object in each imitation place.
  7. The robot will look at both objects and select an action and an object to imitate the observed action and effect.
  8. The robot performs the action and goes back to step 3.
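As a sketch of steps 2 and 3, the commands below show one possible way to launch the modules and wire their ports together with yarp connect. All device options, configuration files and port names are placeholders and assumptions; check each module's documentation or configuration files for the names actually used on your setup.
yarp server &                                   # name server, if not already running
yarpdev --device grabber --subdevice dragonfly --name /grabber &   # image grabber (device name is an assumption)
iCubInterface2 &                                # motor interface
artoolkittracker &
camshiftplus &
demoAff &
yarp connect /grabber /camshiftplus/img:i
yarp connect /grabber /artoolkittracker/img:i
yarp connect /camshiftplus/out /demoAff/camshift:i
yarp connect /artoolkittracker/out /demoAff/artoolkit:i
yarp connect /demoAff/out /icubinterface/in     # placeholder names; the demoAff-to-iCubInterface connection may instead go through the robot's command ports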


--Macl 20:23, 4 June 2008 (CEST)