VVV10 iFumble

From Wiki for iCub and Friends

Revision as of 20:03, 23 July 2010

Our robot will learn "affordances", sort of.

It will infer an object from its appearance, then learn how that appearance predicts how the object responds to actions directed at it.

What it will learn is a mapping from object and action to consequence. We propose to represent the approximately 10-dimensional (object, action, state) mapping by using and modifying the GMM library already in iCub (http://eris.liralab.it/iCub/dox/html/classMathLib_1_1GaussianMixture.html). For motor actions we will modify the Action Primitives library (http://eris.liralab.it/iCub/dox/html/group__affActionPrimitives.html).
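
A minimal sketch of the idea (Python with numpy and scikit-learn, not the iCub MathLib API; the descriptor sizes and data are made up for the example): fit a joint Gaussian mixture over [object, action, consequence] samples, then predict the expected consequence of an action on an object by conditioning the mixture on the (object, action) part, i.e. Gaussian Mixture Regression.

 # Illustrative sketch (not the iCub MathLib API): learn a joint Gaussian
 # mixture over [object, action, consequence] vectors and predict the expected
 # consequence of an action on an object by Gaussian Mixture Regression (GMR).
 import numpy as np
 from sklearn.mixture import GaussianMixture
 
 D_OBJ, D_ACT, D_OUT = 4, 3, 3          # hypothetical descriptor sizes, ~10-d joint space
 D_IN = D_OBJ + D_ACT
 
 def fit_affordance_gmm(samples, n_components=5):
     """samples: (N, D_IN + D_OUT) array of [object, action, consequence] rows."""
     gmm = GaussianMixture(n_components=n_components, covariance_type="full")
     gmm.fit(samples)
     return gmm
 
 def predict_consequence(gmm, obj, act):
     """Condition the joint mixture on the observed (object, action) part."""
     x = np.concatenate([obj, act])                     # query input, shape (D_IN,)
     means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
     K = len(weights)
     h = np.empty(K)                                    # responsibility of each component
     cond_means = np.empty((K, D_OUT))
     for k in range(K):
         mu_i, mu_o = means[k, :D_IN], means[k, D_IN:]
         S_ii = covs[k, :D_IN, :D_IN]
         S_oi = covs[k, D_IN:, :D_IN]
         inv_S_ii = np.linalg.inv(S_ii)
         diff = x - mu_i
         # weight of component k given the input part
         h[k] = weights[k] * np.exp(-0.5 * diff @ inv_S_ii @ diff) \
                / np.sqrt(np.linalg.det(2.0 * np.pi * S_ii))
         # conditional mean of the consequence part for component k
         cond_means[k] = mu_o + S_oi @ inv_S_ii @ diff
     h /= h.sum()
     return h @ cond_means                              # expected consequence
 
 # Stand-in usage with random data in place of real poking trials
 rng = np.random.default_rng(0)
 data = rng.normal(size=(200, D_IN + D_OUT))
 gmm = fit_affordance_gmm(data)
 print(predict_consequence(gmm, rng.normal(size=D_OBJ), rng.normal(size=D_ACT)))

If the linked GMM library exposes a comparable joint-density fit and conditional query, the same computation can run in C++ on the robot.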

As a final demo, it will play "golf" with the object to get it to a target location - hopefully it will do this better than chance after learning.
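
One hypothetical sketch of how the "golf" step could use the learned mapping, building on predict_consequence from the sketch above (the action parametrisation and the assumption that the first two components of the predicted consequence are the object's displacement on the table are both illustrative): sample candidate poke actions and execute the one whose predicted consequence brings the object closest to the target.

 # Hypothetical "golf" action selection, reusing predict_consequence from the
 # sketch above: try random candidate actions and keep the one whose predicted
 # displacement lands the object closest to the target location.
 import numpy as np
 
 def choose_action(gmm, obj, obj_pos, target_pos, n_candidates=100, rng=None):
     """Pick the candidate action whose predicted outcome lands closest to the target."""
     if rng is None:
         rng = np.random.default_rng()
     best_action, best_dist = None, np.inf
     for _ in range(n_candidates):
         action = rng.uniform(-1.0, 1.0, size=3)        # made-up action parametrisation
         displacement = predict_consequence(gmm, obj, action)[:2]
         dist = np.linalg.norm(obj_pos + displacement - target_pos)
         if dist < best_dist:
             best_action, best_dist = action, dist
     return best_action, best_dist

Repeating this choose-poke-observe loop, and appending each observed (object, action, consequence) triple to the training set, is what we expect to push the behaviour above chance.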

Interfaces

Vision interface

TBD

Poking interface

Learning interface

Controller