Projects vvv2014

From Wiki for iCub and Friends
Revision as of 10:00, 26 July 2014 by VVV14

Please populate this page with the name of the project, the names of the people participating in it, and a short description of what the project aims to do and what each person plans to do within the group. A group can consist of one or more people.

It is important that everybody works towards something, no matter how simple or complex.

Wysiwyd EU Project (What you say is what you did)

See here for a complete list of tasks.

People:

  • Stephane, Matej: sensorimotor contingencies using self-organizing maps (SOMs) that match other SOMs (e.g. see subtask 2.5 here)
  • Martina: online predictions of motion (see subtask 2.3)
  • ...

Goal of the project:

...
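As background for the SOM-based subtask above, a minimal self-organizing map can be sketched as follows. This is a generic NumPy illustration, not the Wysiwyd code; the grid size, learning-rate schedule, and neighborhood width are arbitrary choices:

```python
import numpy as np

class SOM:
    """Minimal self-organizing map on a 2-D grid (illustrative sketch)."""

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((rows, cols, dim))  # weight vector per map unit
        # Grid coordinates of every unit, shape (rows, cols, 2)
        self.grid = np.stack(
            np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"),
            axis=-1,
        )

    def bmu(self, x):
        """Grid index of the best-matching unit for input x."""
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=20, lr0=0.5, sigma0=2.0):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)              # decaying learning rate
            sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
            for x in data:
                b = np.array(self.bmu(x))
                # Gaussian neighborhood around the BMU on the map grid
                h = np.exp(-np.sum((self.grid - b) ** 2, axis=-1)
                           / (2 * sigma ** 2))
                self.w += lr * h[..., None] * (x - self.w)
```

Two such maps, one trained on motor data and one on sensory data, could then be matched by associating the BMU coordinates each input pair activates; how that association is actually learned in the project is not specified here.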

Exploration of tools and objects affordances (in the context of Poeticon++)

People: Afonso, Lorenzo, Tanis, Vadim

Goal: Perform a set of actions on objects using tools, recording visual descriptors of tools, objects, and effects of the action.
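The recorded data could take the shape of one record per tool-use trial. The sketch below is purely hypothetical; the field names and descriptor contents are illustrative placeholders, not the Poeticon++ schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class AffordanceTrial:
    """One tool-use trial (hypothetical record layout)."""
    tool: str               # e.g. "rake"
    target: str             # e.g. "ball"
    action: str             # e.g. "pull"
    tool_descriptor: list   # visual shape features of the tool
    object_descriptor: list # visual shape features of the object
    effect: list            # e.g. object displacement [dx, dy] after the action

# Example trial, ready to be serialized for later affordance learning
trial = AffordanceTrial("rake", "ball", "pull",
                        tool_descriptor=[0.7, 0.2],
                        object_descriptor=[0.9, 0.1],
                        effect=[-0.12, 0.01])
record = asdict(trial)
```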

Incremental Learning applications on iCub using the GURLS++ package

The main goals of the project are the following:

  1. Implement a numerically stable incremental RLS update rule (cholupdate) in the GURLS++ framework. The functionality has already been tested in MATLAB, replicating the work by Gijsberts et al.: ‘Incremental Learning of Robot Dynamics using Random Features’
  2. Design and develop a simple prototype YARP module to serve as an interface with the robot
  3. Run an on-line inverse dynamics learner on the real robot
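The numerically stable update in step 1 can be sketched with a standard rank-one Cholesky update: keep the upper-triangular factor R of the regularized covariance λI + ΦᵀΦ and update it per sample, instead of updating the (ill-conditioned) inverse directly. This is plain NumPy for illustration, not the GURLS++ API:

```python
import numpy as np

def cholupdate(R, x):
    """Rank-1 update: return R' with R'.T @ R' == R.T @ R + outer(x, x).

    R is upper triangular; standard Givens-style cholupdate algorithm.
    """
    R = R.copy()
    x = x.astype(float).copy()
    n = x.size
    for k in range(n):
        r = np.hypot(R[k, k], x[k])
        c, s = r / R[k, k], x[k] / R[k, k]
        R[k, k] = r
        if k + 1 < n:
            R[k, k + 1:] = (R[k, k + 1:] + s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * R[k, k + 1:]
    return R

class IncrementalRLS:
    """Incremental regularized least squares via Cholesky updates (sketch)."""

    def __init__(self, dim, lam=1.0):
        self.R = np.sqrt(lam) * np.eye(dim)  # Cholesky factor of lam * I
        self.b = np.zeros(dim)               # accumulated phi^T y

    def update(self, phi, y):
        self.R = cholupdate(self.R, phi)
        self.b += phi * y

    def weights(self):
        # Solve R^T R w = b with two triangular solves
        z = np.linalg.solve(self.R.T, self.b)
        return np.linalg.solve(self.R, z)
```

In the random-features setting, phi would be the random feature map of the joint state; after each sample the current weights give the inverse-dynamics prediction.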


Resources:

Optimal control for COM motion

People involved: Yue (thanks to Francesco for the idea, and to Silvio and Daniele for the dynamics explanation and help)

The project is about task 1.10. The objective is to use optimal control to generate optimized reference control inputs (torques) for the iCub such that it performs a motion at the COM (e.g. swinging left and right while keeping balance). This requires forward dynamics, so a simple version based on the Whole Body Interface is being developed.
The optimal control problem is solved with the MUSCOD-II software package from the University of Heidelberg.
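MUSCOD-II solves the full multiple-shooting problem; as a much simpler illustration of optimal control for COM balancing, the sketch below runs a discrete-time LQR on a linear inverted pendulum model of the COM. All numbers (COM height, time step, cost weights, reference sway) are assumed for illustration and are not taken from the project:

```python
import numpy as np

# Linear inverted pendulum: state x = [COM position, COM velocity],
# discretized as x_{k+1} = A x_k + B u_k with control u as a CoM acceleration input.
g, z, dt = 9.81, 0.55, 0.01                 # gravity, assumed COM height, time step
A = np.array([[1.0, dt], [g / z * dt, 1.0]])  # unstable open loop (inverted pendulum)
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])                   # penalize COM error and velocity
Rm = np.array([[0.01]])                     # control effort weight

# Iterate the discrete-time Riccati equation to convergence (value iteration)
P = Q.copy()
for _ in range(2000):
    K = np.linalg.solve(Rm + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed loop: track a slow lateral COM reference (swing left/right, keep balance)
x = np.zeros(2)
for k in range(500):
    ref = np.array([0.03 * np.sin(0.01 * k), 0.0])  # desired COM sway
    u = -K @ (x - ref)
    x = A @ x + B @ u
```

The real task optimizes joint torques through the whole-body dynamics rather than a point-mass model; this only shows the optimal-control structure (model, cost, feedback law) in miniature.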

Learning action representations from experience (in the context of the Xperience EU project)

Goal: create a simple demo of learning action representations on the iCub. See here for a complete list of tasks.

People:

  • Vadim: generate symbolic action/state descriptions from the iCub database (subtask 6.1)
  • Kira: learn planning representations from the action/state descriptions offline (subtask 6.2) and online (subtask 6.3)
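A common way to encode such symbolic action/state descriptions is a STRIPS-like structure: states as sets of ground predicates, actions as preconditions plus add/delete lists. The names below ("grasp", "on", "hand-empty") are illustrative, not the Xperience schema:

```python
from dataclasses import dataclass

# A state is a (frozen) set of ground predicates, e.g. ("on", "cup", "table").

@dataclass
class Action:
    """STRIPS-style action: preconditions, add list, delete list."""
    name: str
    preconditions: frozenset
    add: frozenset
    delete: frozenset

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.delete) | self.add

# Hypothetical example action and state
grasp_cup = Action(
    name="grasp(cup)",
    preconditions=frozenset({("on", "cup", "table"), ("hand-empty",)}),
    add=frozenset({("holding", "cup")}),
    delete=frozenset({("on", "cup", "table"), ("hand-empty",)}),
)

s0 = frozenset({("on", "cup", "table"), ("hand-empty",)})
s1 = grasp_cup.apply(s0) if grasp_cup.applicable(s0) else s0
```

Learning the representation (subtasks 6.2/6.3) would then amount to inferring the preconditions and add/delete lists from observed state transitions in the iCub database.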

Eye Tracking for Human Robot Interaction

People involved: Oskar

Face detection, face tracking, feature extraction, gaze tracking.

YARP/ROS Integration with MoveIt!

People involved: Miguel

Plan the iCub's arm trajectories in MoveIt!, and populate the ROS environment with information from the iCub's stereo vision. Also get Vizzy's (a robot from my lab) control boards simulated in Gazebo, building on the previous iCub/Gazebo work.