VVV12/EFAA


VVV2012 EFAA Project

EFAA is a European project (http://efaa.upf.edu/) using the iCub, the ReacTable and the iKart to develop human-robot interaction. This page summarizes our work over the course of the summer school.

Integration objectives include: iCub mobility with the iKart, and re-calibration with the ReacTable after the iKart has moved :)


Participants

  • Maxime Petit (INSERM - Lyon - France)
  • Stéphane Lallée (INSERM - Lyon - France)
  • Grégoire Pointeau (INSERM - Lyon - France)
  • Your name here?

Short Term Projects

Navigation

  • Please work through Marco's navigation tutorial
  • Integrate that into the EFAA architecture somehow

Skin sensing --> Emotions

  • Preliminary demo working: the iCub starts "off"; skin information is fed into the OPC by awareTouch (Hector) and read by IQR (Vicky), which computes the robot's emotions in order to wake it up, sends them to the OPC, and updates the facial expression (Stéphane). Congratulations! (A minimal sketch of driving the facial expression is given after this list.)
  • Next steps:
    • classify the type of touch (poke, grab, caress)
    • turn the iCub into a caressoholic!
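
As a reference for the facial-expression part of this demo, a minimal C++/YARP sketch is given below; it assumes the standard iCub face-expression setup, where the emotionInterface module accepts commands such as "set all hap" on /icub/face/emotions/in (the local port name used here is arbitrary).

  #include <yarp/os/Network.h>
  #include <yarp/os/RpcClient.h>
  #include <yarp/os/Bottle.h>

  int main()
  {
      yarp::os::Network yarp;                    // connect to the YARP name server

      yarp::os::RpcClient face;
      face.open("/efaaDemo/face:rpc");           // arbitrary local port name
      yarp::os::Network::connect("/efaaDemo/face:rpc", "/icub/face/emotions/in");

      // "set all hap" switches every facial sub-part to the "happy" expression,
      // exactly as it would be typed at a "yarp rpc /icub/face/emotions/in" prompt
      yarp::os::Bottle cmd, reply;
      cmd.addString("set");
      cmd.addString("all");
      cmd.addString("hap");
      face.write(cmd, reply);

      face.close();
      return 0;
  }

The same command can also be tested interactively from a terminal with yarp rpc /icub/face/emotions/in.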

Calibration / Motor test with the ReacTable

  • Calibration procedure has been debugged / improved.
  • Next Steps:
    • Tutorial on calibration (Monday)
    • Automatic recalibration after a motion

AutobiographicMemory

Existing modules:

  • interactionManager (requires PostgreSQL): an SQL database storing all of the robot's interactions. This module is a first version and will be upgraded. (Grégoire + Maxime)
  • opcEars: a module that compares the state of the OPC at different moments. It provides a notion of temporality and returns the consequences of an action. (Grégoire) (See the RPC sketch after this list.)
    • input: "snapshot name": creates a snapshot of the OPC, stored under "name"
    • input: "diff name1 name2": returns a bottle with the differences between the two OPC states name1 and name2.

Gesture Recognition

  • Recognized gestures are committed to the OPC as relations (Ilaria)
  • A combined gesture/speech game has been implemented using RAD (Maxime)
  • Next steps:
    • Gestures could influence the robot's emotions? (Vicky?)

Tactile Shape Recognition

  • Objects could be categorized by their physical shape, sensed through hand encoders
    • Uriel, what is the status of this?

Proactive Behavior

  • Basic information is shared among modules through the OPC client (the iCub already has some beliefs & emotions populated)
    • We can go on and try to come up with a "drive-based proactive demo" (see the sketch after this list)
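
To make the "drive-based" idea concrete, here is a minimal, purely hypothetical sketch (the drive names, decay rates, thresholds and the printf stand-in for real behaviours are all illustrative, not part of the current architecture): each drive decays over time, and whenever one drops below its threshold the robot proactively triggers a behaviour that satisfies it.

  #include <cstdio>
  #include <map>
  #include <string>

  // Hypothetical homeostatic drive: value decays toward 0, behaviour fires below threshold.
  struct Drive {
      double value;      // current satisfaction level in [0, 1]
      double decay;      // amount lost per control step
      double threshold;  // below this, the drive demands attention
  };

  int main()
  {
      // Illustrative drives only; real ones would live in the OPC / decisional module.
      std::map<std::string, Drive> drives;
      drives["social"]    = {1.0, 0.02, 0.3};   // satisfied by interacting with the human
      drives["curiosity"] = {1.0, 0.05, 0.4};   // satisfied by exploring the ReacTable

      for (int step = 0; step < 50; ++step) {
          for (auto &d : drives) {
              d.second.value -= d.second.decay;          // drives decay over time
              if (d.second.value < d.second.threshold) {
                  // Proactive behaviour: here we just print; the real module would
                  // trigger attentionSelector / pmpActionModule / speech instead.
                  printf("step %d: drive '%s' low -> trigger behaviour\n",
                         step, d.first.c_str());
                  d.second.value = 1.0;                  // the behaviour satisfies the drive
              }
          }
      }
      return 0;
  }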

Tutorials

Overall architecture

The modules, what they do, how to use them.

Current calibration iCub / ReacTable (and reference frame management in general)

How to align the iCub and ReacTable reference frames, and more generally how to align two different reference frames using Ugo's calibration library.
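
The tutorial will cover Ugo's library itself; as background, here is a generic least-squares (Kabsch-style) sketch of the same idea using yarp::math rather than the actual library API: collect the same physical points expressed in both reference frames, then recover the rotation and translation that maps one frame onto the other.

  #include <vector>
  #include <yarp/sig/Vector.h>
  #include <yarp/sig/Matrix.h>
  #include <yarp/math/Math.h>
  #include <yarp/math/SVD.h>

  using namespace yarp::sig;
  using namespace yarp::math;

  // Estimate the 4x4 homogeneous transform H such that pB ~= H * pA, from matched
  // 3D points expressed in frame A (e.g. ReacTable) and frame B (e.g. iCub root).
  // Degenerate/reflection cases are deliberately ignored in this sketch.
  Matrix alignFrames(const std::vector<Vector> &pA, const std::vector<Vector> &pB)
  {
      const int n = (int)pA.size();

      // centroids of the two point sets
      Vector cA(3), cB(3);
      cA.zero(); cB.zero();
      for (int i = 0; i < n; i++) { cA = cA + pA[i]; cB = cB + pB[i]; }
      cA = cA * (1.0 / n);
      cB = cB * (1.0 / n);

      // cross-covariance of the centered points
      Matrix C(3, 3);
      C.zero();
      for (int i = 0; i < n; i++)
          C = C + outerProduct(pB[i] - cB, pA[i] - cA);

      // rotation from the SVD of the covariance (Kabsch method)
      Matrix U(3, 3), V(3, 3);
      Vector S(3);
      SVD(C, U, S, V);
      Matrix R = U * V.transposed();

      // assemble rotation + translation into a homogeneous transform
      Vector t = cB - R * cA;
      Matrix H = eye(4, 4);
      H.setSubmatrix(R, 0, 0);
      H(0, 3) = t[0]; H(1, 3) = t[1]; H(2, 3) = t[2];
      return H;
  }

Once H is estimated, a ReacTable point (in homogeneous coordinates) maps into the iCub frame as H * p; automatic recalibration after an iKart motion then amounts to collecting a few fresh matched points and recomputing H.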

efaaHelpers library and OPC format specification

How to access the OPC through its client and benefit from the various classes representing entities known by the robot.
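
Pending the tutorial material, the sketch below shows the kind of exchange the efaaHelpers client wraps: raw RPC to the OPC, assuming the objectsPropertiesCollector-style add/get vocabulary and an OPC RPC port named /OPC/rpc (both the port name and the exact reply format should be checked against the running setup).

  #include <cstdio>
  #include <yarp/os/Network.h>
  #include <yarp/os/RpcClient.h>
  #include <yarp/os/Bottle.h>

  int main()
  {
      yarp::os::Network yarp;

      yarp::os::RpcClient opc;
      opc.open("/efaaDemo/opc:rpc");
      // the OPC rpc port name is an assumption; adapt it to the running configuration
      yarp::os::Network::connect("/efaaDemo/opc:rpc", "/OPC/rpc");

      yarp::os::Bottle cmd, reply;

      // add an entity with a few properties (objectsPropertiesCollector-style syntax)
      cmd.fromString("add ((name toy) (position_x -0.30) (position_y 0.10) (position_z 0.05))");
      opc.write(cmd, reply);
      printf("add -> %s\n", reply.toString().c_str());   // reply carries the id assigned to the entity

      // read the entity back, using the id returned by "add"
      cmd.fromString("get ((id 1))");                    // replace 1 with the actual id
      opc.write(cmd, reply);
      printf("get -> %s\n", reply.toString().c_str());

      return 0;
  }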

  • The architecture: Efaa Architecture.png


Directions to investigate:

faceDetection[Recog] (UPF)

Detects a face [recognizes the face]

  • Input: ?
  • Output: x, y, z of the face [name of the human] (a minimal reader sketch follows this list)
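
A minimal reader for this output could look as follows; both the output port name /faceDetection/out and the bottle layout (x y z, optionally followed by the name) are assumptions to be confirmed against the module.

  #include <cstdio>
  #include <string>
  #include <yarp/os/Network.h>
  #include <yarp/os/BufferedPort.h>
  #include <yarp/os/Bottle.h>

  int main()
  {
      yarp::os::Network yarp;

      yarp::os::BufferedPort<yarp::os::Bottle> faces;
      faces.open("/efaaDemo/faces:i");
      // the face detector output port name is an assumption; adapt as needed
      yarp::os::Network::connect("/faceDetection/out", "/efaaDemo/faces:i");

      while (true) {
          yarp::os::Bottle *b = faces.read();            // blocking read
          if (!b || b->size() < 3) continue;

          // assumed layout: x y z [name]
          double x = b->get(0).asDouble();
          double y = b->get(1).asDouble();
          double z = b->get(2).asDouble();
          std::string name = (b->size() > 3) ? b->get(3).asString().c_str() : "unknown";
          printf("face at (%.2f, %.2f, %.2f) -> %s\n", x, y, z, name.c_str());
      }
      return 0;
  }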

decisionalModule – DAC (UPF/INSERM)

Computes appropriate decisions at multiple levels (the DAC layers)

  • Input: reads the OPC (working memory) content
  • Output: controls robot actions (triggers attentionSelector, pmpActionModule, emotionsInterface, speech)
  • Change the current decisional system (a finite state machine) to a more biologically plausible system based on needs.

HAMMER (IMPERIAL)

Human action prediction

  • Input: Kinect (human) + ReacTable (object position)
  • Output: the action currently being executed + confidence, parameters…