
VVV12/EFAA

From Wiki for iCub and Friends

VVV2012 EFAA Project

EFAA is a European project (http://efaa.upf.edu/) that uses the iCub, the Reactable and the iKart to develop human-robot interaction. This page summarizes our work over the course of the summer school.

Integration objectives include: iCub mobility with the iKart, and re-calibration with the ReacTable after the iKart has moved :)


Participants

  • Maxime Petit (INSERM - Lyon - France)
  • Stéphane Lallée (INSERM - Lyon - France)
  • Grégoire Pointeau (INSERM - Lyon - France)
  • Your name here ?

Short Term Projects

Navigation

  • Please go through Marco's navigation tutorial
  • Integrate that into the EFAA architecture somehow
    • 1) Find the transformation between Map coordinates & iCub initial reference frame (front of the table)
    • 2) Add an object to the OPC that is far away from the table
    • 3) Change the call to PMP to:
      • 1) Check if the target is in range
      • 2) If not, express the target in map coordinates
      • 3) Navigate to the target
      • 4) Send the command to PMP
    • 4) Come back to the table and drop the object on the table (see the sketch after this list)
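
A minimal sketch of the reach-or-navigate logic of steps 3.1–3.4, written against the YARP C++ API. The port names (/efaa/nav:rpc, /navigation/rpc, /pmp/rpc), the "goto" and "reach" command words, the transform values and the 0.6 m reach radius are all assumptions for illustration, not the real module interfaces:

  #include <yarp/os/all.h>
  #include <yarp/sig/Vector.h>
  #include <yarp/sig/Matrix.h>
  #include <yarp/math/Math.h>

  using namespace yarp::os;
  using namespace yarp::sig;
  using namespace yarp::math;

  int main()
  {
      Network yarp;

      RpcClient nav, pmp;
      nav.open("/efaa/nav:rpc");                             // hypothetical client ports
      pmp.open("/efaa/pmp:rpc");
      Network::connect("/efaa/nav:rpc", "/navigation/rpc");  // placeholder server names
      Network::connect("/efaa/pmp:rpc", "/pmp/rpc");

      // target expressed in the iCub initial reference frame (front of the table)
      Vector target(3);
      target[0] = -1.5;  target[1] = 0.3;  target[2] = 0.0;

      // step 1: homogeneous transform iCub frame -> map frame, found once by calibration
      Matrix H = eye(4, 4);
      H(0, 3) = 2.0;  H(1, 3) = 1.0;                         // placeholder values

      const double reachable = 0.6;                          // rough arm workspace radius (assumption)

      if (norm(target) > reachable)                          // step 3.1: is the target in range?
      {
          Vector th = target;  th.push_back(1.0);            // homogeneous coordinates
          Vector tMap = H * th;                              // step 3.2: express it in map coordinates

          Bottle navCmd, navReply;                           // step 3.3: navigate toward it
          navCmd.addString("goto");
          navCmd.addDouble(tMap[0]);
          navCmd.addDouble(tMap[1]);
          nav.write(navCmd, navReply);
          // after the motion the target should be re-expressed in the new iCub frame (omitted here)
      }

      Bottle pmpCmd, pmpReply;                               // step 3.4: hand the target over to PMP
      pmpCmd.addString("reach");
      pmpCmd.addDouble(target[0]);
      pmpCmd.addDouble(target[1]);
      pmpCmd.addDouble(target[2]);
      pmp.write(pmpCmd, pmpReply);

      return 0;
  }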

Skin sensing --> Emotions

  • Preliminary demo working: the iCub starts "off"; skin information is fed into the OPC by awareTouch (Hector) and read by IQR (Vicky), which computes the robot's emotions in order to wake the robot up, send them to the OPC and update the facial expression (Stéphane). Congratulations!
  • Next steps:
    • classify the type of touch (poke, grab, caress)
    • turn the iCub into a caressoholic!

Status: RAD retrieves the emotion "state", which starts at 0 (sleep mode) and becomes 1 after a touch => the interaction is allowed to continue (see the sketch below).
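
A minimal sketch of that wake-up gate, assuming the emotion state is published as a single integer on a YARP port; the port names and the Bottle layout are assumptions to be checked against the actual IQR/OPC output:

  #include <yarp/os/all.h>

  using namespace yarp::os;

  int main()
  {
      Network yarp;

      BufferedPort<Bottle> stateIn;
      stateIn.open("/efaa/emotionState:i");           // hypothetical input port
      // then e.g.: yarp connect /iqr/emotionState:o /efaa/emotionState:i

      while (true)
      {
          Bottle *b = stateIn.read();                 // blocking read
          if (b != NULL && b->get(0).asInt() == 1)    // 0 = sleep mode, 1 = woken up by a touch
              break;
      }

      // ...the rest of the interaction can start from here
      return 0;
  }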

Calibration / Motor test with the reactable

  • Calibration procedure has been debugged / improved.
  • Next Steps:
    • Tutorial on calibration (monday)
    • Automatic recalibration after a motion

AutobiographicMemory

Existing modules:

  • interactionManager (requires PostgreSQL): an SQL database holding all of the robot's interactions. This module is a first version and will be upgraded. (Grégoire + Maxime)
  • opcEars: module comparing the content of the OPC at different moments. It provides some temporality and returns the consequence of an action. (Grégoire)
    • input: "snapshot name": creates a snapshot of the OPC stored under "name"
    • input: "diff name1 name2": returns a bottle with the difference between the two OPC states name1 and name2.

opcEars

  • adds some temporality to the OPC [INSERM]
  • input: "snapshot" + name: creates a snapshot of the OPC at a given time
  • input: "difference" + name1 + name2: outputs a Bottle with the differences between the two states of the OPC (see the RPC sketch after this list)

Gesture Recognition

  • Recognized gestures are committed to the OPC as relations (Ilaria)
  • A gesture-speech based game is achieved using RAD (Maxime)
  • Next steps:
    • Gestures could influence the robot's emotions? (Vicky?)

Status: the RAD spoken interface catches OPC relations in order to detect gestures coming from either Ilaria's or Kyuhwa's module

Tactile Shape Recognition

  • Objects could be categorized by their physical shape, sensed through hand encoders
    • Inputs: instance of each object
    • Output: object classified
  • The picture below shows the connections of the shapeRecognition module with the Supervisor and OPC modules

Diagram shaperecognition 4.png

  • Signals from the Supervisor module
    • collect <left_arm/right_arm> <label>
    • training
    • classify <left_arm/right_arm>
  • The shapeRecognition module sends the OPC module the relation for the classified object, e.g. "iCub detects a small object" (see the sketch below)
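
The three Supervisor signals map directly onto RPC commands; a sketch of sending them, where the port names are assumptions and "ball" is just an example label:

  #include <yarp/os/all.h>
  #include <iostream>

  using namespace yarp::os;

  int main()
  {
      Network yarp;

      RpcClient shape;
      shape.open("/supervisor/shape:rpc");                                  // hypothetical client port
      Network::connect("/supervisor/shape:rpc", "/shapeRecognition/rpc");   // assumed server port

      Bottle cmd, reply;

      // collect <left_arm/right_arm> <label>: record the hand encoders while grasping the labelled object
      cmd.addString("collect");  cmd.addString("left_arm");  cmd.addString("ball");
      shape.write(cmd, reply);

      // training: build the classifier from all collected instances
      cmd.clear();  reply.clear();
      cmd.addString("training");
      shape.write(cmd, reply);

      // classify <left_arm/right_arm>: label the object currently held in that hand
      cmd.clear();  reply.clear();
      cmd.addString("classify");  cmd.addString("left_arm");
      shape.write(cmd, reply);
      std::cout << "classified as: " << reply.toString().c_str() << std::endl;

      return 0;
  }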

Proactive Behavior

  • Basic information is shared among modules through the OPC client (the iCub already has some beliefs & emotions populated)
    • We can go on and try to come up with a "drive based proactive demo"

Status: RAD checks whether a behaviour is allowed (sleep, work, play, socialize) and tells the human to go to hell if he asks for something the iCub does not want to do. The robot goes to sleep automatically if that is the only thing left to do (toy sketch below).
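
A toy sketch of that gating logic in plain C++; the set of allowed behaviours would really come from the drives stored in the OPC, and the request from the RAD speech parser, but both are hard-coded placeholders here:

  #include <set>
  #include <string>
  #include <iostream>

  int main()
  {
      std::set<std::string> allowed = {"sleep", "play"};   // would be read from the OPC drives
      std::string request = "work";                        // would be parsed by RAD from speech

      if (allowed.count(request))
          std::cout << "OK, let's " << request << std::endl;
      else
          std::cout << "Go to hell, I don't want to " << request << std::endl;

      // if sleeping is the only allowed behaviour, do it proactively
      if (allowed.size() == 1 && allowed.count("sleep"))
          std::cout << "Nothing else to do, going to sleep" << std::endl;

      return 0;
  }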

Tutorials

Overall architecture

The modules, what they do, how to use them.

Current calibration iCub / Reactable (and reference frame management in general)

How to align the iCub and ReacTable reference frames, and more generally how to align two different reference frames using Ugo's calibration library.
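
A hedged sketch of the matched-points alignment, assuming that "Ugo's calibration library" is the CalibReferenceWithMatchedPoints class from the icub-main optimization library (if the class or header differs in your install, treat this as pseudocode for the same idea): feed N pairs of the same physical point expressed in both frames, get back the homogeneous transform H that maps one frame into the other. All numeric values are placeholders:

  #include <cstdio>
  #include <yarp/sig/all.h>
  #include <yarp/math/Math.h>
  #include <iCub/optimization/calibReference.h>

  using namespace yarp::sig;
  using namespace yarp::math;
  using namespace iCub::optimization;

  int main()
  {
      CalibReferenceWithMatchedPoints calibrator;

      // the same physical locations, reported by the ReacTable and touched by the iCub (root frame)
      double tablePts[3][3] = {{ 0.20, 0.15, 0.00}, { 0.40,  0.15, 0.00}, { 0.30, 0.30, 0.00}};
      double icubPts[3][3]  = {{-0.30, 0.10, 0.05}, {-0.30, -0.10, 0.05}, {-0.45, 0.00, 0.05}};

      for (int i = 0; i < 3; i++)
      {
          Vector pTable(3), pIcub(3);
          for (int j = 0; j < 3; j++) { pTable[j] = tablePts[i][j]; pIcub[j] = icubPts[i][j]; }
          calibrator.addPoints(pTable, pIcub);       // (point in source frame, same point in target frame)
      }

      Matrix H;
      double error;
      calibrator.calibrate(H, error);                // H maps ReacTable coordinates -> iCub root frame

      // apply H to a new ReacTable point (homogeneous coordinates)
      Vector p(4);
      p[0] = 0.25;  p[1] = 0.20;  p[2] = 0.0;  p[3] = 1.0;
      Vector pRoot = H * p;
      printf("point in iCub root frame: %s (residual %g)\n", pRoot.toString().c_str(), error);

      return 0;
  }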

efaaHelpers library and OPC format specification

How to access the OPC through its client and benefit from the various classes that represent entities known by the robot.
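
If you bypass the efaaHelpers classes, the OPC can also be driven with raw RPC Bottles. The sketch below assumes the OPC is the standard objectsPropertiesCollector module with vocab-coded verbs such as "add"; the port name /OPC/rpc and the property names are assumptions, so check the module documentation for the exact grammar:

  #include <yarp/os/all.h>
  #include <iostream>

  using namespace yarp::os;

  int main()
  {
      Network yarp;

      RpcClient opc;
      opc.open("/efaa/opc:rpc");
      Network::connect("/efaa/opc:rpc", "/OPC/rpc");     // assumed OPC rpc port name

      // add an entity with a couple of properties
      Bottle cmd, reply;
      cmd.addVocab(Vocab::encode("add"));
      Bottle &props = cmd.addList();
      Bottle &name = props.addList();  name.addString("name");   name.addString("toy");
      Bottle &posx = props.addList();  posx.addString("pos_x");  posx.addDouble(-0.30);
      opc.write(cmd, reply);

      std::cout << "OPC reply: " << reply.toString().c_str() << std::endl;
      return 0;
  }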

  • The architecture
  • Efaa Architecture.png


Directions to investigate:

faceDetection[Recog] (UPF)

detects faces [recognizes faces]

  • Input:?
  • Output: x, y, z of the face [name of human]

decisionalModule – DAC (UPF/INSERM)

computes appropriate decisions at multiple levels (the DAC layers)

  • Input: read the OPC (Working Memory) content
  • Output: control robot action (triggers attentionSelector, pmpActionModule, emotionsInterface, speech)
  • Change the current decisional system (a finite state machine) to a more biologically plausible system based on needs (see the skeleton below).
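
A skeleton of what such a module could look like as a standard YARP RFModule: read the OPC, update the internal needs, pick a behaviour and trigger the other modules. Port names and the period are placeholders; the needs model itself is only indicated by comments:

  #include <yarp/os/all.h>

  using namespace yarp::os;

  class DecisionalModule : public RFModule
  {
      RpcClient opc;                              // working memory (OPC) access
      Port actionOut, emotionOut, speechOut;      // triggers toward the other modules

  public:
      bool configure(ResourceFinder &rf)
      {
          opc.open("/decisional/opc:rpc");
          actionOut.open("/decisional/action:o");     // e.g. -> attentionSelector / pmpActionModule
          emotionOut.open("/decisional/emotions:o");  // e.g. -> emotionsInterface
          speechOut.open("/decisional/speech:o");     // e.g. -> speech synthesis
          return true;
      }

      double getPeriod() { return 0.5; }          // decision rate in seconds

      bool updateModule()
      {
          // 1) read the relevant OPC content (drives, beliefs, detected humans and objects)
          // 2) update the internal needs (energy, social drive, ...) from that content
          // 3) select the behaviour that best reduces the most pressing need
          // 4) send the corresponding commands on actionOut / emotionOut / speechOut
          return true;                            // keep running
      }

      bool close()
      {
          opc.close();  actionOut.close();  emotionOut.close();  speechOut.close();
          return true;
      }
  };

  int main(int argc, char *argv[])
  {
      Network yarp;
      ResourceFinder rf;
      rf.configure(argc, argv);
      DecisionalModule mod;
      return mod.runModule(rf);
  }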

HAMMER (IMPERIAL)

Human action prediction

  • Input: Kinect (human) + ReacTable (object position)
  • Output: Action currently executed + confidence, parameters…