VVV10 iFumble


Our robot will learn "affordances", sort of.

It will infer the object from its appearance, then learn how that appearance predicts the object's response to actions directed towards it.

What it will learn is a mapping from object and action to consequence. We propose to represent this approximately 10-dimensional (object, action, state) mapping by using and modifying the GMM library already in iCub. GMMs have the advantage of quickly learning high-dimensional non-linear mappings. For motor actions we will modify the Action Primitives library.
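
As a rough illustration, a single training sample could be flattened into one vector like this; the particular appearance features, action parameters and outcome fields below are assumptions, chosen only to show how a roughly 10-dimensional (object, action, consequence) point might be fed to the GMM:

  // Hypothetical layout of one (object, action, consequence) sample for the GMM.
  // The specific fields are illustrative assumptions, not the agreed feature set.
  #include <vector>

  struct FumbleSample {
      // object appearance (from vision)
      double hue, saturation, eccentricity, size;
      // action (poke parameters)
      double pokeAngleDeg, handOrientationDeg, refVelocity;
      // consequence (observed object motion)
      double dx, dy, rotationDeg;

      // flatten into the 10-d vector the mixture model is trained on
      std::vector<double> flatten() const {
          return { hue, saturation, eccentricity, size,
                   pokeAngleDeg, handOrientationDeg, refVelocity,
                   dx, dy, rotationDeg };
      }
  };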

Vision, state estimation and data association will be done with IQR and some other bits (guys?).

As a final demo, it will play "golf" with the object to get it to a target location - hopefully performing above chance after it has learned with the object for a bit.

Interfaces

Vision interface

TBD

Poking interface

We'll be modifying ActionPrimitivesExample (or the CHRIS equivalent) and exposing the interface to fiddle and fumble around! A minimal client sketch follows the parameter list below.

Module name: iFumbly
Module port: /iFumbly/in

* x (double): target good ol' x [m].
* y (double): target good ol' y [m].
* d (double): the hand distance to target used initially [m].
* a (double): the hand orientation used initially (frame?) [degrees].
* v (double): the reference velocity [m/s].
* fa (double): the final hand orientation (abs/rel frame?) [degrees].
Module motto: "Fee-fi-fo-fum"
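
For illustration, here is a minimal YARP client sketch that sends one command to /iFumbly/in. The client port name and the assumption that the payload is simply the six doubles in the order listed above (x y d a v fa) are mine, not confirmed by the module:

  // Minimal sketch: send one poke command to /iFumbly/in.
  // Client port name and value ordering (x y d a v fa) are assumptions.
  #include <yarp/os/Network.h>
  #include <yarp/os/BufferedPort.h>
  #include <yarp/os/Bottle.h>

  int main()
  {
      yarp::os::Network yarp;                                  // initialise YARP
      yarp::os::BufferedPort<yarp::os::Bottle> out;
      out.open("/iFumblyClient/out");
      yarp::os::Network::connect("/iFumblyClient/out", "/iFumbly/in");

      yarp::os::Bottle &cmd = out.prepare();
      cmd.clear();
      cmd.addDouble(-0.30);   // x: target x [m]
      cmd.addDouble( 0.10);   // y: target y [m]
      cmd.addDouble( 0.05);   // d: initial hand distance to target [m]
      cmd.addDouble( 0.0);    // a: initial hand orientation [degrees]
      cmd.addDouble( 0.10);   // v: reference velocity [m/s]
      cmd.addDouble(30.0);    // fa: final hand orientation [degrees]
      out.write();

      return 0;
  }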

Learning interface

This is the Learning interface, not to be confused with the [Trajectory learner and replayer].

The interface will be documented here shortly. In short: you add data points, trigger the learning, and then do inference by sending partial data points.

You can send this interface N-dimensional data points, from which it will learn a probability density function over the N-dimensional space.

Then you can give it partial vectors and it will infer a probability distribution over the remaining dimensions. You can then ask for information about this inferred distribution - e.g. its maximum likelihood point or its expectation. The interface may incorporate Gaussian Mixture Regression if it proves to be required.
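
For reference, the standard GMR conditioning step, assuming a learned mixture with weights \pi_k, means \mu_k and covariances \Sigma_k partitioned into a given block x and a queried block y, gives the expected value of the missing block as:

  \hat{y}(x) \;=\; \sum_k h_k(x)\,\Big[\mu_k^{y} + \Sigma_k^{yx}\big(\Sigma_k^{xx}\big)^{-1}\big(x - \mu_k^{x}\big)\Big],
  \qquad
  h_k(x) \;=\; \frac{\pi_k\,\mathcal{N}\big(x;\,\mu_k^{x},\,\Sigma_k^{xx}\big)}{\sum_j \pi_j\,\mathcal{N}\big(x;\,\mu_j^{x},\,\Sigma_j^{xx}\big)}.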

The following behaviours should approximate the final system (a client sketch follows the list):

  • add-data-point n-dim-vector
  • save-dataset name
  • load-dataset name
  • learn-distribution-from-data (use whole dataset for learning - number of gaussians required will be guessed)
  • learn-distribution-incremental (NOT USED: after some testing I determined the above approach will be more than sufficient for our problem)
  • save-distribution name
  • load-distribution name
  • infer-max-regression-point m-dim-vector, dimension-mask ---> returns an (n-m)-dim vector. (This uses Gaussian Mixture Regression to find the missing components corresponding to a given partial vector. Some idea of the variation can be given too, since GMR produces a unimodal Gaussian distribution.)
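
A minimal sketch of driving these commands from a YARP RPC client. The port names and the exact bottle encoding (in particular the 1/0 dimension mask) are assumptions; only the command verbs come from the list above:

  // Sketch of exercising the learning interface over a YARP RPC connection.
  // Port names and message encoding are assumptions; the verbs are from the list above.
  #include <yarp/os/Network.h>
  #include <yarp/os/RpcClient.h>
  #include <yarp/os/Bottle.h>
  #include <cstdio>

  int main()
  {
      yarp::os::Network yarp;
      yarp::os::RpcClient rpc;
      rpc.open("/iFumbleClient/rpc");
      yarp::os::Network::connect("/iFumbleClient/rpc", "/iFumbleLearner/rpc");

      // add one 3-d data point (toy example: object feature, action parameter, outcome)
      yarp::os::Bottle cmd, reply;
      cmd.addString("add-data-point");
      cmd.addDouble(0.2); cmd.addDouble(45.0); cmd.addDouble(0.15);
      rpc.write(cmd, reply);

      // fit the mixture to the whole dataset added so far
      cmd.clear(); reply.clear();
      cmd.addString("learn-distribution-from-data");
      rpc.write(cmd, reply);

      // query: fix the first two dimensions, infer the third via GMR
      cmd.clear(); reply.clear();
      cmd.addString("infer-max-regression-point");
      cmd.addDouble(0.2); cmd.addDouble(45.0);      // known (partial) vector
      cmd.addInt(1); cmd.addInt(1); cmd.addInt(0);  // dimension mask: 1 = given, 0 = to infer
      rpc.write(cmd, reply);
      printf("inferred: %s\n", reply.toString().c_str());

      return 0;
  }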

Not sure yet exactly which ports this module will expose.

Controller