Difference between revisions of "VVV12/EFAA"

From Wiki for iCub and Friends
== Directions to investigate: ==
=== 3DObjectRecognition (IIT) ===
It explores an object from different points of view, retrieves the 3D point cloud of the object, estimates the minimum bounding box that contains the point cloud, and displays both the point cloud and the bounding box. It also provides the estimated center, size and pose of the minimum bounding box. This module is currently in iCub/contrib/src/3DObjectReconstruction.
*Input: a bottle with the command: 3Drec name, where "name" is the name of the object to reconstruct.
*Output: a bottle containing the following information: center of the bounding box, corner-points of the bounding box, size of the bounding box and rotation matrix indicating the pose of the object with respect to the root frame of the robot.
*Could be combined with hapticShapeRecog to help?
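As a rough illustration of the geometry in the output bottle, here is a minimal sketch computing a bounding box (axis-aligned, for simplicity) with its center, size and corner points from a point cloud. Note that the actual module estimates a minimum, possibly rotated, box and also returns a rotation matrix; this example omits the pose.

```python
# Illustrative sketch only: axis-aligned bounding box of a 3D point cloud,
# returning center, size and the 8 corner points (pose/rotation omitted).
def bounding_box(points):
    xs, ys, zs = zip(*points)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    center = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
    size = tuple(b - a for a, b in zip(lo, hi))
    corners = [(x, y, z) for x in (lo[0], hi[0])
                         for y in (lo[1], hi[1])
                         for z in (lo[2], hi[2])]
    return center, size, corners

cloud = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.05), (0.05, 0.1, 0.1)]
center, size, corners = bounding_box(cloud)
```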
=== gestureRecognitionModule (IIT)===
It recognizes gestures belonging to a given set of models. It is currently in iCub/contrib/src/demoGestureRecognition/src/gestureRecognitionModule.
*Input: Depth+RGB+joints streams from Kinect.
*Output: ID of the recognized action.
*Pointing in the direction of a distant object.
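As a hedged illustration of template-based gesture matching (the real module uses trained models on the Kinect depth, RGB and joint streams; the templates and features below are invented for the example), dynamic time warping lets sequences performed at different speeds still match:

```python
# Illustrative sketch: compare a 1-D joint-angle sequence to gesture
# templates with dynamic time warping (DTW), so a slow and a fast
# execution of the same gesture still get a low distance.
def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]

templates = {"wave": [0.0, 1.0, 0.0, 1.0, 0.0], "push": [0.0, 0.5, 1.0]}
observed = [0.0, 0.0, 1.0, 0.0, 1.0, 0.0]   # a slower "wave"
gesture_id = min(templates, key=lambda g: dtw(observed, templates[g]))
```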
=== awareTouch (Sheffield)===
Signals in the OPC when a touch occurs (right or left).
*Input: tactile sensor
*Output: right / left
*At the beginning, the human comes from the side to touch the robot's arm; the robot turns its head in that direction.
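A minimal sketch of the left/right decision, assuming summed taxel activations per arm and an invented threshold (the real module reads the skin ports; everything here is illustrative):

```python
# Illustrative sketch: decide which arm was touched from per-taxel
# activation values; the threshold is an invented number.
def touch_side(left_taxels, right_taxels, threshold=5.0):
    left, right = sum(left_taxels), sum(right_taxels)
    if max(left, right) < threshold:
        return None                      # no touch detected
    return "left" if left >= right else "right"

side = touch_side([0.0] * 10, [0.5, 3.0, 4.0] + [0.0] * 7)
```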
=== hapticShapeRecog (Sheffield)===
Recognizes the shape of a grasped object.
*Input: tactile sensor
*Output: id object
*When the human gives the object to the robot, it can detect whether it is the right one (a ball instead of a square, for example).
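A minimal sketch of such a check, assuming the grasped object is described by a small feature vector (e.g., finger encoder values at grasp) compared against stored prototypes; all names and numbers are illustrative:

```python
# Illustrative sketch: identify a grasped object by nearest prototype in
# a hand-posture feature space (feature values are invented).
def recognize(features, prototypes):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda oid: dist(features, prototypes[oid]))

prototypes = {"ball": [30.0, 28.0, 31.0], "cube": [10.0, 45.0, 12.0]}
object_id = recognize([29.0, 30.0, 30.0], prototypes)
is_expected = object_id == "ball"      # check it is "the good one"
```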
=== autobiographicMemory (INSERM)===
Stores speech and actions in a database and supports reasoning over them.
*Input: SQL query (insert / select)
*Output: result of SQL query or high level representation (action, speech, arguments...)
*Remember the human's last/favorite game/team...
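A minimal sketch of the insert/select interface using an in-memory SQLite database; the table and column names are assumptions for the example, not the module's actual schema:

```python
# Illustrative sketch: store speech and action events, then query them
# back with a SELECT, mirroring the module's SQL insert/select interface.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (time TEXT, kind TEXT, content TEXT)")
db.execute("INSERT INTO memory VALUES ('12:00', 'speech', 'hello robot')")
db.execute("INSERT INTO memory VALUES ('12:01', 'action', 'wave')")
rows = db.execute(
    "SELECT content FROM memory WHERE kind = 'speech'").fetchall()
```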
=== rockPaperScissors game (UPF)===
Rock / paper / scissors game with eyes, using RAD.
*Input: Camera
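The game logic itself is simple; what the module actually adds is the camera-based hand recognition and the RAD dialogue. A sketch of the judging rule:

```python
# Illustrative sketch of the rock/paper/scissors judging rule.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(robot, human):
    if robot == human:
        return "draw"
    return "robot" if BEATS[robot] == human else "human"

result = judge("paper", "rock")
```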
=== driveSystem (UPF)===
Updates the values of the drives (energy, curiosity...)
*Input: OPC flags: interaction, object present, human?
*Output: OPC : values of drive
*High energy and curiosity initially; they decrease with games played until the robot stops and goes to sleep.
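A minimal sketch of such a drive update, with invented decay rates and sleep threshold:

```python
# Illustrative sketch: drives decay as games are played; when energy
# drops below a threshold the robot goes to sleep (rates are invented).
def play_games(energy, curiosity, games, decay=0.2, sleep_below=0.1):
    for _ in range(games):
        energy = max(0.0, energy - decay)
        curiosity = max(0.0, curiosity - decay / 2)
        if energy < sleep_below:
            return energy, curiosity, "sleeping"
    return energy, curiosity, "awake"

state = play_games(1.0, 1.0, 5)   # energy exhausted after 5 games
```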
=== iKart (IIT ? ) ===
The iCub should be able to move away from the table, then come back and recalibrate so that the precision of the motor functions (pmp) remains acceptable.
*Input: iKart output (odometry, ROS-SLAM, etc.)
*Output: Recalibration / translation / rotation of existing reference frames
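A minimal sketch of re-expressing a point known in the pre-motion root frame after a planar motion estimated by odometry (translation plus rotation); the frame conventions here are assumptions for the example:

```python
# Illustrative sketch: map a point from the pre-motion root frame into
# the post-motion frame, given planar odometry (dx, dy, dtheta).
import math

def recalibrate(point, dx, dy, dtheta):
    x, y = point[0] - dx, point[1] - dy      # undo the translation
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * x - s * y, s * x + c * y)    # undo the rotation

# Robot moved 1 m forward with no rotation: a point that was 2 m ahead
# is now 1 m ahead.
p = recalibrate((2.0, 0.0), 1.0, 0.0, 0.0)
```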

Revision as of 01:16, 23 July 2012

VVV2012 EFAA Project

EFAA is a European project (http://efaa.upf.edu/) using the iCub, the Reactable and the iKart to develop human-robot interaction. This page will summarize our work during the summer school.

Integration objectives include: iCub mobility with the iKart, and re-calibration with the ReacTable after iKart mobility :)

Short Term Projects


  • Please attend Marco's navigation tutorial
  • Integrate that into the EFAA architecture somehow

Skin sensing --> Emotions

  • Preliminary demo working: the iCub is "off"; skin information is fed into the OPC by awareTouch (Hector) and read by IQR (Vicky), which computes the robot's emotions to wake up the robot, sends them to the OPC and updates the facial expression (Stephane). Congratulations!
  • Next steps:
    • classify the type of touch (poke, grab, caress)
    • turn the iCub into a caressoholic!
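A minimal sketch of the touch classifier suggested above, separating poke, grab and caress by contact duration and mean pressure; the thresholds are invented, not measured values:

```python
# Illustrative sketch: classify a touch from its duration and mean
# pressure (threshold values are invented for the example).
def touch_type(duration_s, mean_pressure):
    if duration_s < 0.3:
        return "poke"                # short contact
    if mean_pressure > 0.7:
        return "grab"                # long, firm contact
    return "caress"                  # long, light contact

kinds = [touch_type(0.1, 0.9), touch_type(2.0, 0.9), touch_type(2.0, 0.2)]
```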

Calibration / Motor test with the reactable

  • Calibration procedure has been debugged / improved.
  • Next Steps:
    • Tutorial on calibration (monday)
    • Automatic recalibration after a motion

Gesture Recognition

  • Recognized gestures are committed to the OPC as relations (Ilaria)
  • A gesture-speech based game is achieved using RAD (Maxime)
  • Next steps:
    • Gestures could influence the robot's emotions? (Vicky?)

Tactile Shape Recognition

  • Objects could be categorized by their physical shape, sensed through hand encoders
    • Uriel, what is the status of this?

Proactive Behavior

  • Basic information is shared among modules through the OPC client (the iCub already has some beliefs & emotions populated)
    • We can go on and try to come up with a "drive based proactive demo"


Overall architecture

The modules, what they do, how to use them.

Current calibration iCub / Reactable (and reference frame management in general)

How to align iCub / reactable referentials and more generally how to align 2 different referentials using Ugo's calibration library.
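A minimal least-squares sketch of the planar case, estimating the rotation and translation between two frames from matched point pairs; this makes no claim about the actual API of the calibration library mentioned above:

```python
# Illustrative sketch: estimate (theta, tx, ty) mapping points in the
# source frame onto the destination frame, by centering both point sets
# and solving for the best planar rotation in the least-squares sense.
import math

def align_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (px, py) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = px - cdx, py - cdy
        num += ax * by - ay * bx     # cross terms -> sin(theta)
        den += ax * bx + ay * by     # dot terms   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Two frames differing by a 90-degree rotation:
theta, tx, ty = align_2d([(1, 0), (0, 1)], [(0, 1), (-1, 0)])
```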

efaaHelpers library and OPC format specification

How to access the OPC through its client and benefit from various classes representing the entities known by the robot.

  • The architecture
  • Efaa Architecture.png

Directions to investigate:

faceDetection[Recog] (UPF)

Detects a face [recognizes the face].

  • Input:?
  • Output: x, y, z of the face [name of human]

decisionalModule – DAC (UPF/INSERM)

Computes the appropriate decision at multiple levels (DAC layers).

  • Input: read the OPC (Working Memory) content
  • Output: control robot action (triggers attentionSelector, pmpActionModule, emotionsInterface, speech)
  • Change the current decisional system (a finite state machine) to a more biologically plausible system based on needs.
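A minimal sketch of such a needs-based selector, where the most depleted drive chooses the next behavior; drive and behavior names are invented for the example:

```python
# Illustrative sketch: pick the behavior that serves the most depleted
# drive (a simple alternative to a fixed finite state machine).
def select_behavior(drives, behaviors):
    """drives: name -> level in [0, 1]; behaviors: drive name -> action."""
    most_urgent = min(drives, key=drives.get)
    return behaviors[most_urgent]

drives = {"energy": 0.9, "curiosity": 0.2, "social": 0.6}
action = select_behavior(
    drives, {"energy": "rest", "curiosity": "explore", "social": "greet"})
```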


Human action prediction

  • Input: Kinect (human) + ReacTable (object position)
  • Output: Action currently executed + confidence, parameters…
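A minimal sketch of scoring candidate action models against an observed trajectory and reporting the best match together with a normalized confidence; the models and features are invented for the example:

```python
# Illustrative sketch: compare the observed trajectory to each action
# model and return the best action with a confidence in (0, 1].
def predict(trajectory, models):
    def score(traj, model):
        d = sum((a - b) ** 2 for a, b in zip(traj, model))
        return 1.0 / (1.0 + d)           # higher = closer match
    scores = {name: score(trajectory, m) for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

models = {"push": [0.0, 0.5, 1.0], "grasp": [0.0, 0.0, 0.0]}
action, confidence = predict([0.1, 0.6, 0.9], models)
```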