= VVV2012 EFAA Project =

EFAA is a European project (http://efaa.upf.edu/) using the iCub, the ReacTable and the iKart to develop human-robot interaction. This page summarizes our work over the course of the summer school.

Integration objectives include: iCub mobility with the iKart, and re-calibration with the ReacTable after the iKart has moved :)

== Short Term Projects ==


  • Please attend Marco's navigation tutorial
  • Integrate iKart navigation into the EFAA architecture

=== Skin sensing → Emotions ===

  • Preliminary demo working: while the iCub is "off", skin information is fed into the OPC by awareTouch (Hector) and read by IQR (Vicky), which computes the robot's emotions to wake it up, sends them to the OPC, and updates the facial expression (Stephane; see the sketch after this list). Congratulations!
  • Next steps:
    • classify the type of touch (poke, grab, caress)
    • turn the iCub into a caressoholic!
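
A minimal sketch of the facial-expression end of this pipeline, assuming the standard iCub emotionInterface, which accepts short text commands such as "set all hap" on /icub/face/emotions/in; the local port name here is hypothetical:

<source lang="cpp">
#include <yarp/os/Network.h>
#include <yarp/os/RpcClient.h>
#include <yarp/os/Bottle.h>

int main() {
    yarp::os::Network yarp;            // initialize the YARP network

    yarp::os::RpcClient face;
    face.open("/efaa/emotions:o");     // hypothetical local port name
    // assumed: the standard emotionInterface input port on the robot
    yarp::os::Network::connect("/efaa/emotions:o", "/icub/face/emotions/in");

    // "set all hap": set the whole face (eyebrows, eyelids, mouth) to happy
    yarp::os::Bottle cmd, reply;
    cmd.addString("set");
    cmd.addString("all");
    cmd.addString("hap");
    face.write(cmd, reply);

    face.close();
    return 0;
}
</source>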

=== Calibration / Motor test with the ReacTable ===

  • Calibration procedure has been debugged / improved.
  • Next Steps:
    • Tutorial on calibration (Monday)
    • Automatic recalibration after a motion

=== Gesture Recognition ===

  • Recognized gestures are committed to the OPC as relations (Ilaria)
  • A gesture- and speech-based game has been built using RAD (Maxime)
  • Next steps:
    • Could gestures influence the robot's emotions? (Vicky?)

=== Tactile Shape Recognition ===

  • Objects could be categorized by their physical shape, sensed through the hand encoders (see the sketch below)
    • Uriel, what is the status of this?
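
A minimal sketch of the sensing side, assuming the standard YARP remote_controlboard device; on the iCub's 16-joint arm boards, joints 7-15 drive the hand. The local port prefix is hypothetical:

<source lang="cpp">
#include <cstdio>
#include <yarp/os/Network.h>
#include <yarp/os/Property.h>
#include <yarp/dev/PolyDriver.h>
#include <yarp/dev/ControlBoardInterfaces.h>
#include <yarp/sig/Vector.h>

int main() {
    yarp::os::Network yarp;

    // Connect to the right arm control board
    yarp::os::Property options;
    options.put("device", "remote_controlboard");
    options.put("local",  "/efaa/shape/right_arm");   // hypothetical local prefix
    options.put("remote", "/icub/right_arm");

    yarp::dev::PolyDriver driver(options);
    if (!driver.isValid()) return 1;

    yarp::dev::IEncoders *enc = nullptr;
    driver.view(enc);

    int nj = 0;
    enc->getAxes(&nj);
    yarp::sig::Vector q(nj);
    enc->getEncoders(q.data());

    // Once a grasp has settled, the hand joint angles (7..15) describe
    // the object's shape; this vector is what a classifier would consume.
    for (int j = 7; j < nj; j++)
        printf("joint %d: %.1f deg\n", j, q[j]);

    driver.close();
    return 0;
}
</source>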

=== Proactive Behavior ===

  • Basic information is shared among modules through the OPC client (the iCub already has some beliefs & emotions populated)
    • We can go on and try to come up with a "drive-based proactive demo"


== Overall architecture ==

The modules, what they do, how to use them.

=== Current calibration iCub / ReacTable (and reference frame management in general) ===

How to align the iCub and ReacTable reference frames, and more generally how to align two different reference frames using Ugo's calibration library (a sketch follows).
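
A minimal sketch of the alignment itself, assuming "Ugo's calibration library" refers to the CalibReferenceWithMatchedPoints optimizer from icub-main's optimization library (which needs IPOPT); header path and method names are as I recall them, and the point values are made up:

<source lang="cpp">
#include <cstdio>
#include <yarp/sig/Vector.h>
#include <yarp/sig/Matrix.h>
#include <iCub/optimization/calibReference.h>

int main() {
    iCub::optimization::CalibReferenceWithMatchedPoints calibrator;

    // Feed matched 3D points: the same physical spot expressed in the
    // ReacTable frame (p_table) and in the iCub root frame (p_icub).
    // In practice these come from touching known positions on the table;
    // several well-spread, non-coplanar pairs keep the problem well posed.
    yarp::sig::Vector p_table(3), p_icub(3);
    p_table[0] = 0.12;  p_table[1] = 0.34; p_table[2] = 0.00;  // made-up values
    p_icub[0]  = -0.30; p_icub[1]  = 0.10; p_icub[2]  = 0.05;
    calibrator.addPoints(p_table, p_icub);
    // ... addPoints(...) for the remaining pairs

    // Solve for the homogeneous transform H mapping ReacTable coordinates
    // into the iCub root frame, plus the residual alignment error.
    yarp::sig::Matrix H(4, 4);
    double error = 0.0;
    if (calibrator.calibrate(H, error))
        printf("H =\n%s\nresidual = %g\n", H.toString(3).c_str(), error);
    return 0;
}
</source>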

=== efaaHelpers library and OPC format specification ===

How to access the OPC through its client and benefit from the various classes representing entities known to the robot (see the sketch below).
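
efaaHelpers hides the wire format behind entity classes; underneath, the OPC is the objectsPropertiesCollector module, which (as far as I recall its documented RPC) speaks vocab verbs such as add/get/ask over lists of (property value) pairs. A minimal raw-RPC sketch, with hypothetical port names:

<source lang="cpp">
#include <cstdio>
#include <yarp/os/Network.h>
#include <yarp/os/RpcClient.h>
#include <yarp/os/Bottle.h>
#include <yarp/os/Vocab.h>

int main() {
    yarp::os::Network yarp;

    yarp::os::RpcClient opc;
    opc.open("/efaa/opcClient:rpc");                                // hypothetical local name
    yarp::os::Network::connect("/efaa/opcClient:rpc", "/OPC/rpc");  // assumed server port

    // [add] ((name ball) (x 0.1)) -- an entity is a list of (property value) pairs
    yarp::os::Bottle cmd, reply;
    cmd.addVocab(yarp::os::Vocab::encode("add"));
    yarp::os::Bottle &props = cmd.addList();
    yarp::os::Bottle &name = props.addList();
    name.addString("name");
    name.addString("ball");
    yarp::os::Bottle &x = props.addList();
    x.addString("x");
    x.addDouble(0.1);

    opc.write(cmd, reply);
    printf("reply: %s\n", reply.toString().c_str());  // expected on success: [ack] (id <n>)
    return 0;
}
</source>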

[[File:Efaa Architecture.png|thumb|The EFAA architecture]]

== Directions to investigate ==

=== faceDetection[Recog] (UPF) ===

Detects a face [recognizes the face]. A consumer sketch follows the I/O list.

  • Input:?
  • Output: x, y, z of the face [name of human]
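
A sketch of how a downstream module could consume this output, assuming the detector streams one Bottle per face; both port names and the payload layout (x y z, optionally followed by the name) are assumptions:

<source lang="cpp">
#include <cstdio>
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>

int main() {
    yarp::os::Network yarp;

    yarp::os::BufferedPort<yarp::os::Bottle> in;
    in.open("/efaa/faceReader:i");                                       // hypothetical
    yarp::os::Network::connect("/faceDetection/out", "/efaa/faceReader:i"); // assumed output port

    while (true) {
        yarp::os::Bottle *b = in.read();   // blocking read
        if (b == nullptr) break;
        // assumed payload layout: x y z [name]
        double x = b->get(0).asDouble();
        double y = b->get(1).asDouble();
        double z = b->get(2).asDouble();
        printf("face at %.2f %.2f %.2f\n", x, y, z);
    }
    return 0;
}
</source>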

=== decisionalModule – DAC (UPF/INSERM) ===

Computes appropriate decisions at multiple levels (DAC layers).

  • Input: reads the OPC (Working Memory) content
  • Output: controls robot actions (triggers attentionSelector, pmpActionModule, emotionsInterface, speech)
  • Change the current decisional system (a finite state machine) to a more biologically plausible system based on needs (see the toy sketch below).
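
A toy sketch of what "based on needs" could mean: homeostatic drives decay over time, and the least-satisfied one selects the behaviour. Purely illustrative, not DAC's actual machinery:

<source lang="cpp">
#include <algorithm>
#include <cstdio>
#include <map>
#include <string>

int main() {
    // drive level in [0,1]: 1 = fully satisfied, 0 = maximally pressing
    std::map<std::string, double> drives = {
        {"social",      0.8},   // satisfied by interacting with the human
        {"exploration", 0.5},   // satisfied by attending to new objects
        {"rest",        0.9}
    };
    const double decay = 0.07;  // per-cycle homeostatic decay

    for (int t = 0; t < 10; t++) {
        for (auto &d : drives)  // every drive decays each cycle
            d.second = std::max(0.0, d.second - decay);

        // the least-satisfied drive wins and triggers its behaviour
        auto winner = std::min_element(drives.begin(), drives.end(),
            [](const std::pair<const std::string, double> &a,
               const std::pair<const std::string, double> &b) {
                return a.second < b.second;
            });
        printf("t=%d -> act on '%s' (level %.2f)\n",
               t, winner->first.c_str(), winner->second);
        winner->second = 1.0;   // acting satisfies the drive
    }
    return 0;
}
</source>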


=== Human action prediction ===

  • Input: Kinect (human) + ReacTable (object position)
  • Output: Action currently executed + confidence, parameters…