=VVV10 iFumble=

Our robot will learn "affordances", sort of.

It will identify an object from its appearance, and then learn how that appearance predicts the object's response to actions directed at it.

What it will learn is a mapping from object and action to consequence. We propose to represent this roughly 10-dimensional object/action/state mapping by using and extending the GMM library already in iCub; GMMs have the advantage of quickly learning high-dimensional non-linear mappings. For motor actions we will modify the ActionPrimitives library.
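
As a rough sketch of how such a GMM would be queried (this is standard Gaussian mixture regression, not necessarily the exact form exposed by the iCub GMM library): fit a joint GMM over inputs <math>u</math> (object and action features) and outputs <math>y</math> (consequence), then condition on <math>u</math>:

<math>
\hat{y}(u)=\sum_{k=1}^{K} h_k(u)\left[\mu_k^{y}+\Sigma_k^{yu}\left(\Sigma_k^{uu}\right)^{-1}\left(u-\mu_k^{u}\right)\right],
\qquad
h_k(u)=\frac{\pi_k\,\mathcal{N}\left(u;\mu_k^{u},\Sigma_k^{uu}\right)}{\sum_{j}\pi_j\,\mathcal{N}\left(u;\mu_j^{u},\Sigma_j^{uu}\right)}
</math>

Conditioning also yields a predictive covariance, which is what lets the learning interface below return a distribution rather than just a point estimate.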

Vision, state estimation and data association will be done with IQR and some other bits (guys?).

As a final demo, it will play "golf" with the object to get it to a target location; hopefully it will do this at above-chance level after learning with the object for a bit.

==Interfaces==

===Vision interface===

TBD

===Poking interface===

Component motto: "Fee-fi-fo-fum"

Using the newly made ActionPrimitives to fiddle, fumble, and poke around a little bit!

Tentative arguments of ActionPrimitivesLayer1::poke(...), which pokes the given target (combined action):
* '''x''' the 3-d target position [m].
* '''od''' the hand distance to the target used initially [m].
* '''or''' the 4-d hand orientation used initially (given in axis-angle representation: ax ay az angle in rad).
* '''v''' the reference velocity [m/s].
* '''fr''' the final 4-d hand orientation (given in axis-angle representation: ax ay az angle in rad).
For now, we're thinking the final position will be hard-coded as the object's original position.
(more stuff here)
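
As a sketch only, here is what the tentative signature could look like in C++ (assuming yarp::sig::Vector for the vector arguments; poke() is our proposed addition, not an existing ActionPrimitives method, and '''or''' is spelled oi below because <code>or</code> is a reserved word in C++):

<pre>
#include <yarp/sig/Vector.h>

namespace iCub {
namespace action {

class ActionPrimitivesLayer1
{
public:
    // Tentative combined action: approach the target x from distance od
    // with initial hand orientation oi, move through it at reference
    // velocity v, and finish with hand orientation fr.
    // Sketch only: names and types are still under discussion.
    virtual bool poke(const yarp::sig::Vector &x,   // 3-d target position [m]
                      const double od,              // initial hand distance to target [m]
                      const yarp::sig::Vector &oi,  // initial 4-d orientation (axis-angle)
                      const double v,               // reference velocity [m/s]
                      const yarp::sig::Vector &fr); // final 4-d orientation (axis-angle)
};

}
}
</pre>

A call would then look something like <code>action.poke(x, 0.05, oi, 0.1, fr);</code>, with x, oi and fr filled in from vision (the numeric values here are made up).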

Next, we'll be modifying ActionPrimitivesExample and exposing the interface!

(woo-hoo!)

===Learning interface===

This is the Learning interface, not to be confused with the trajectory learner and replayer used for action recognition, though it is derived from the GMM part of that.

The interface will appear here shortly. It will let you add data points, run the learning, and do inference by sending partial data points and getting back information about the resulting conditional distribution.
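
A minimal sketch of what such an interface could look like (every name below is a placeholder, not an existing iCub API; it assumes yarp::sig::Vector/Matrix for the samples and the returned distribution):

<pre>
#include <yarp/sig/Vector.h>
#include <yarp/sig/Matrix.h>

// Hypothetical learning interface; all names are placeholders.
class IAffordanceLearner
{
public:
    virtual ~IAffordanceLearner() {}

    // Store one full (object, action, consequence) sample.
    virtual void addDataPoint(const yarp::sig::Vector &sample) = 0;

    // Fit / refine the GMM on the samples stored so far.
    virtual bool train() = 0;

    // Condition the model on a partial data point (e.g. object + action)
    // and return the mean and covariance of the predicted consequence.
    virtual bool infer(const yarp::sig::Vector &partial,
                       yarp::sig::Vector &mean,
                       yarp::sig::Matrix &covariance) = 0;
};
</pre>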

===Controller===