
VVV13 Tasks

From Wiki for iCub and Friends

Task 1: graphical tools for ZMP and stability region visualizers

This module will be a graphical visualization tool for the ZMP, balance, etc. It will essentially be a 2D plot visualizing the feet (whose shape can be extracted from the CAD drawings), the support polygon, the stability region, the ZMP, the COP, etc. It will integrate the information provided by the dynamics modules, the skin, etc.
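
A minimal sketch of one geometric building block such a visualizer needs: testing whether the ZMP (or COP) lies inside the convex support polygon. The data types, vertex ordering and example coordinates are assumptions for illustration, not the final interface.

 #include <vector>
 #include <cstdio>

 struct Point { double x, y; };   // a 2D point on the ground plane

 // Returns true if p lies inside (or on the border of) a convex polygon
 // whose vertices are given in counter-clockwise order.
 bool insideConvexPolygon(const std::vector<Point>& poly, const Point& p)
 {
     for (std::size_t i = 0; i < poly.size(); ++i) {
         const Point& a = poly[i];
         const Point& b = poly[(i + 1) % poly.size()];
         // cross product of edge (a->b) with (a->p); negative means p is
         // on the right-hand side of the edge, i.e. outside the polygon
         double cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
         if (cross < 0.0)
             return false;
     }
     return true;
 }

 int main()
 {
     // hypothetical support polygon (one rectangular foot, metres)
     std::vector<Point> foot = {{0.0, 0.0}, {0.10, 0.0}, {0.10, 0.25}, {0.0, 0.25}};
     Point zmp = {0.05, 0.10};
     std::printf("ZMP %s the support polygon\n",
                 insideConvexPolygon(foot, zmp) ? "inside" : "outside");
     return 0;
 }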

Task 2: yarpPortMath module

This module will read streaming vectors (of numbers) from two ports and compute simple mathematical operations on them, such as a bilinear affine transformation, streaming the result on an additional port. Calling the two input vectors x and y, the module will compute:

 z = Ax + By + x^T C y + d

and stream the result z on the output port. The matrices A, B, C and the vector d will be specified in an XML or .ini file.
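
A minimal sketch of the core computation, assuming YARP's yarp::sig and yarp::math types; the port names, the matrix sizes and the (hard-coded) parameter values are placeholders for what the .ini/.xml configuration would provide.

 #include <yarp/os/Network.h>
 #include <yarp/os/BufferedPort.h>
 #include <yarp/sig/Vector.h>
 #include <yarp/sig/Matrix.h>
 #include <yarp/math/Math.h>

 using namespace yarp::os;
 using namespace yarp::sig;
 using namespace yarp::math;

 int main()
 {
     Network yarp;

     BufferedPort<Vector> inX, inY, out;
     inX.open("/portMath/x:i");      // placeholder port names
     inY.open("/portMath/y:i");
     out.open("/portMath/z:o");

     // A, B, C, d would normally come from the configuration file;
     // here they are just identity/zero placeholders of size 3.
     Matrix A = eye(3, 3), B = eye(3, 3), C = zeros(3, 3);
     Vector d(3);
     d.zero();

     while (true) {
         Vector* x = inX.read();     // blocking reads, one sample per port
         Vector* y = inY.read();
         if (!x || !y) break;

         Vector& z = out.prepare();
         z = A * (*x) + B * (*y) + d;          // affine part
         double bilinear = dot(*x, C * (*y));  // scalar x^T C y
         for (size_t i = 0; i < z.size(); i++)
             z[i] += bilinear;                 // added component-wise
         out.write();
     }
     return 0;
 }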

Task 3: linear system module (i.e. filtering)

This module should read streaming vectors (of numbers) from a port and use them as the input of a dynamical system. The dynamical system will be described either in terms of a SISO transfer function (e.g. ARMA) or in terms of a state-space representation of a MIMO system. In the first case the module will stream the output on a single port; in the second case it will stream on two separate ports, one carrying the system output and the other the system state. Some work has already been done by Ugo Pattacini in the iCub ctrlLib.
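
A minimal sketch of the MIMO state-space case, assuming a discrete-time model x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k); Eigen is used here only for compactness and the YARP port plumbing is omitted.

 #include <Eigen/Dense>
 #include <iostream>

 // Discrete-time linear system x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k).
 struct StateSpaceFilter {
     Eigen::MatrixXd A, B, C, D;
     Eigen::VectorXd x;                     // current state

     // Feed one input sample, return the corresponding output sample.
     Eigen::VectorXd step(const Eigen::VectorXd& u) {
         Eigen::VectorXd y = C * x + D * u;
         x = A * x + B * u;
         return y;                          // the state x would go on the second port
     }
 };

 int main()
 {
     // toy example: first-order low-pass filter with pole 0.9 (an assumption)
     StateSpaceFilter f;
     f.A = Eigen::MatrixXd::Constant(1, 1, 0.9);
     f.B = Eigen::MatrixXd::Constant(1, 1, 0.1);
     f.C = Eigen::MatrixXd::Identity(1, 1);
     f.D = Eigen::MatrixXd::Zero(1, 1);
     f.x = Eigen::VectorXd::Zero(1);

     Eigen::VectorXd u = Eigen::VectorXd::Constant(1, 1.0);   // unit step input
     for (int k = 0; k < 5; ++k)
         std::cout << "y(" << k << ") = " << f.step(u).transpose() << std::endl;
     return 0;
 }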

Task 4: linear adaptive control module

This module will implement a strategy for the adaptive control of a linear system. The basic algorithm will be one of the algorithms proposed in the book Applied Nonlinear Control by Jean-Jacques Slotine and Weiping Li. One possible easy task would be to implement the algorithm at page 404 of the book in a very simple case, such as the one addressed here, where the control of a motor is considered.

Task 5: linear system identification C/C++ library

This library should identify a linear system. It should take pairs of u (input) and y (output) signals and identify the underlying linear system. There are libraries available which could be used to simplify the identification problem. One of them is SLICOT, which is written in Fortran and is therefore not ideal. A possibility is to create a C/C++ interface to the Fortran library, as suggested here. The goal of this task is to investigate the available libraries, select one of them and wrap it in a new library.
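
As an illustration of the kind of computation the wrapped library would ultimately expose, here is a minimal least-squares ARX identification sketch (using Eigen rather than SLICOT); the function name, model orders and solver choice are assumptions.

 #include <Eigen/Dense>
 #include <algorithm>
 #include <vector>

 // Identify an ARX model y(t) = -a1*y(t-1) - ... - a_na*y(t-na)
 //                               + b1*u(t-1) + ... + b_nb*u(t-nb)
 // from input/output data by ordinary least squares.
 // Returns the parameter vector theta = [a1..a_na, b1..b_nb].
 Eigen::VectorXd identifyARX(const std::vector<double>& u,
                             const std::vector<double>& y,
                             int na, int nb)
 {
     const int start = std::max(na, nb);
     const int rows  = static_cast<int>(y.size()) - start;
     Eigen::MatrixXd Phi(rows, na + nb);   // regressor matrix
     Eigen::VectorXd Y(rows);              // target vector

     for (int t = start; t < static_cast<int>(y.size()); ++t) {
         int r = t - start;
         for (int i = 0; i < na; ++i) Phi(r, i)      = -y[t - 1 - i];
         for (int i = 0; i < nb; ++i) Phi(r, na + i) =  u[t - 1 - i];
         Y(r) = y[t];
     }
     // theta minimizing ||Phi*theta - Y||^2
     return Phi.colPivHouseholderQr().solve(Y);
 }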

Task 6: non-linear system identification module with kernel based methods

This module should use kernel-based methods, such as the one implemented here, to create a kernel-based representation of a dynamical system. The module should take streaming pairs of inputs u and outputs y and learn a predictor of the output y(t) given the history of inputs u(t), ..., u(t-m) and outputs y(t-1), ..., y(t-n).
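
A minimal sketch of the underlying idea, assuming Gaussian kernel ridge regression on NARX regressors z(t) = [y(t-1), ..., y(t-n), u(t), ..., u(t-m)]; the library linked above would replace this naive implementation, and the kernel width and regularization are placeholders.

 #include <Eigen/Dense>
 #include <vector>
 #include <cmath>

 // Naive Gaussian kernel ridge regression on NARX regressors, target y(t).
 struct KernelPredictor {
     std::vector<Eigen::VectorXd> Z;   // stored regressors
     Eigen::VectorXd alpha;            // dual weights
     double sigma = 1.0, lambda = 1e-3;

     double kernel(const Eigen::VectorXd& a, const Eigen::VectorXd& b) const {
         return std::exp(-(a - b).squaredNorm() / (2.0 * sigma * sigma));
     }

     void train(const std::vector<Eigen::VectorXd>& regressors,
                const Eigen::VectorXd& targets) {
         Z = regressors;
         const int N = static_cast<int>(Z.size());
         Eigen::MatrixXd K(N, N);
         for (int i = 0; i < N; ++i)
             for (int j = 0; j < N; ++j)
                 K(i, j) = kernel(Z[i], Z[j]);
         // alpha = (K + lambda*I)^-1 * targets
         alpha = (K + lambda * Eigen::MatrixXd::Identity(N, N)).ldlt().solve(targets);
     }

     double predict(const Eigen::VectorXd& z) const {
         double y = 0.0;
         for (int j = 0; j < static_cast<int>(Z.size()); ++j)
             y += alpha(j) * kernel(z, Z[j]);
         return y;
     }
 };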

Task 7: improving balance control

Currently the balance controller is a simple proportional (P) controller. The goal of this task is to implement a better balance controller, based on an identified version of the transfer function between the COM position, velocity and acceleration (input, u) and the resulting ZMP (output, y). A simple approach is to identify the input-output transfer function and then tune a controller that improves on the current P controller.

Riccardo Spica;

Task 8: improving balance control with reinforcement learning

The balance controller is a first attempt to reduce the dimensionality of the balancing control problem from a 32-DOF task (head, torso, arms and legs) to a simple 2-DOF problem (the projection of the COM on the ground). Finding the correct COM movement to obtain a ZMP trajectory suitable for balancing might be challenging. Reinforcement learning can be a way to autonomously explore fast COM trajectories leading to stable ZMP trajectories while performing challenging tasks (e.g. kicking a ball). COM trajectories can be parameterized with a sufficiently small number of DOFs. A continuous reinforcement signal can be given on the basis of the distance of the ZMP from the center of the foot. Depending on the expertise of the students, a suitable reinforcement learning library (e.g. the RL Toolbox) will be chosen.
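
A minimal sketch of the continuous reinforcement signal mentioned above, assuming the reward depends only on the distance of the ZMP from the foot center; the Gaussian shaping and its width are assumptions.

 #include <cmath>

 // Reward in (0, 1]: maximal when the ZMP is at the foot center and
 // decaying smoothly with the distance (the width is a tuning parameter).
 double balanceReward(double zmpX, double zmpY,
                      double footCenterX, double footCenterY,
                      double width = 0.03 /* metres, assumed */)
 {
     double dx = zmpX - footCenterX;
     double dy = zmpY - footCenterY;
     return std::exp(-(dx * dx + dy * dy) / (2.0 * width * width));
 }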

Roberto Calandra; Erwan Renaudo;

Task 9: implement iCub simulator on Gazebo

Gazebo is a robot simulator developed by OSRF. This task consists in implementing an iCub simulator in Gazebo, exploiting its modular software architecture.

Task 10: Tactile recognition

The robot is equipped with a system of tactile sensors on the arms and torso. Develop a module that uses the feedback from these sensors to recognize:

  • tactile images (objects of different shapes pressed on the skin)
  • tactile gestures (similar to the previous problem but including time)

Qiang Li; Herke van Hoof; Nathaniel Rose;

Task 11: Contact detection with accelerometers

Use the accelerometers on the covers of the iCub to detect contacts. Accelerometers measure the gravitational acceleration, the acceleration due to motor motion and the acceleration due to contacts: to detect contacts we have to subtract the gravity and motor-motion components from the accelerometer measurements, so that only the part due to contacts is left.
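
A minimal sketch of the subtraction described above; the gravity vector expressed in the sensor frame and the model-predicted acceleration are assumed to come from the dynamics modules, and the threshold is a placeholder.

 #include <array>
 #include <cmath>

 using Vec3 = std::array<double, 3>;

 // Returns true if a contact is likely, given the raw accelerometer reading,
 // the gravity vector already rotated into the sensor frame, and the
 // acceleration predicted by the dynamic model from the motor motion.
 bool contactDetected(const Vec3& measured,
                      const Vec3& gravityInSensorFrame,
                      const Vec3& modelAcceleration,
                      double threshold = 0.5 /* m/s^2, assumed */)
 {
     double norm2 = 0.0;
     for (int i = 0; i < 3; ++i) {
         // what is left after removing gravity and motor-induced acceleration
         double residual = measured[i] - gravityInSensorFrame[i] - modelAcceleration[i];
         norm2 += residual * residual;
     }
     return std::sqrt(norm2) > threshold;
 }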

Andrea Del Prete;

Task 12: Tactile servoing

Drive the arm using the output of the tactile system.

Jimmy Baraglia; Qiang Li; Giuseppe Cotugno;

Task 13: Estimation and tuning of algorithms for numerical differentiation

Online computation of numerical first and second derivatives of joint angles is fundamental for control and estimation. Countless algorithms exist for this purpose, some of which are already implemented in the iCub software (e.g. http://wiki.icub.org/iCub/main/dox/html/group__adaptWinPolyEstimator.html), whereas others may be readily implemented (e.g. Kalman smoother). Now that accelerometers and gyroscopes are mounted on the iCub, we could use these sensor measurements as ground truth, to compare and tune different methods for computing numerical derivatives.
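
A minimal baseline against which the adaptive-window estimator and the accelerometer/gyroscope ground truth could be compared: central finite differences for the first and second derivatives of a sampled joint angle (noise amplification is exactly the issue the more refined methods address).

 #include <vector>

 // Central differences on a uniformly sampled joint trajectory q (period dt).
 // velocity[i]     ~ dq/dt   at sample i
 // acceleration[i] ~ d2q/dt2 at sample i
 // The first and last samples are left at zero for simplicity.
 void centralDifferences(const std::vector<double>& q, double dt,
                         std::vector<double>& velocity,
                         std::vector<double>& acceleration)
 {
     velocity.assign(q.size(), 0.0);
     acceleration.assign(q.size(), 0.0);
     for (std::size_t i = 1; i + 1 < q.size(); ++i) {
         velocity[i]     = (q[i + 1] - q[i - 1]) / (2.0 * dt);
         acceleration[i] = (q[i + 1] - 2.0 * q[i] + q[i - 1]) / (dt * dt);
     }
 }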

Andrea Del Prete;

Task 14: Learning forward kinematics using ML

Develop a module that learns the forward kinematics of the robot. You can use a marker to detect the hand of the robot. The problem can be formulated in different ways:

  • learn the full model;
  • train a model that compensates for the error of the iCub arm and head kinematics;
  • a mix of the above.

The learnt model can be used to control the arm.

Lorenzo Jamone; Ugo Pattacini (supervising, i.e. part-time);

Task 15: Structure From Motion (SFM) and Robust 3D Map Estimation

Design, develop and implement robust camera position estimation for the computation of a robust disparity map. Subtasks:

  • feature extraction & robust matching in real-time
  • epipolar geometry computation (Fundamental/Essential Matrix)
  • camera parameter estimation
  • bundle adjustment
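
A minimal sketch of the feature-matching and essential-matrix subtasks listed above, assuming OpenCV and already-calibrated cameras (the focal length and principal point are placeholders); bundle adjustment is not covered here.

 #include <opencv2/opencv.hpp>
 #include <vector>

 // Estimate the relative pose between two calibrated views.
 void estimatePose(const cv::Mat& img1, const cv::Mat& img2,
                   double focal, cv::Point2d principalPoint,
                   cv::Mat& R, cv::Mat& t)
 {
     // 1. feature extraction (ORB) and brute-force matching with Hamming distance
     cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
     std::vector<cv::KeyPoint> kp1, kp2;
     cv::Mat desc1, desc2;
     orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
     orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

     cv::BFMatcher matcher(cv::NORM_HAMMING, true /* cross-check */);
     std::vector<cv::DMatch> matches;
     matcher.match(desc1, desc2, matches);

     std::vector<cv::Point2f> pts1, pts2;
     for (const cv::DMatch& m : matches) {
         pts1.push_back(kp1[m.queryIdx].pt);
         pts2.push_back(kp2[m.trainIdx].pt);
     }

     // 2. essential matrix with RANSAC to reject outlier matches
     cv::Mat mask;
     cv::Mat E = cv::findEssentialMat(pts1, pts2, focal, principalPoint,
                                      cv::RANSAC, 0.999, 1.0, mask);

     // 3. recover the relative rotation R and (unit-norm) translation t
     cv::recoverPose(E, pts1, pts2, R, t, focal, principalPoint, mask);
 }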

Sean Ryan Fanello; Vadim Tikhanoff; Riccardo Spica;

Task 16: Extension of the iKart SLAM system to a 3D environment

  • Transformation of the 3D data coming from the disparity module (iCub eyes reference frame) to the iKart reference frame using the iKin library (easy/medium)
  • Integration of the 3D data into the iKart 2D map, using off-the-shelf libraries for 3D SLAM (medium/advanced) -> YARP/ROS interaction

Marco Randazzo; Vadim Tikhanoff (part-time);

Task 17: Real-time control of hand/arm movements using a RGBD sensor

Develop a module that mimics, in real time, a confederate's arm/hand shape on the iCub.

Subtasks:

  • Retrieve the full articulation and position of the hand using the "3D Hand Tracking" library (http://cvrlcode.ics.forth.gr/handtracking/)
  • Create an RFModule that mimics the hand/arm articulation on the robot

Guillaume Gibert;

Task 18: Driving an enhanced iCub head with jaw and lips control for speech

A new version of the iCub head will be available at this summer school, including a mobile jaw and 4 degrees of freedom for the lips. We will try to control it using captured speech data, synchronized with the corresponding audio recording.

Frédéric Elisei;

Task 19: Learning motor affordances in grasping

The task will be to automatically form behavior primitives such as grasp, hit, carry-object and drop, and then use one of them to learn the affordances of more complex objects. One possible approach can be found in this paper.

Beata Grzyb;

Task 20: EFAA

This group is related to the EFAA project, where the robot will be interacting with a human partner over a tactile table (Reactable). Due to the number of existing modules within the architecture it may be hard for "outsiders" to join; however, if you want to make the robot take advantage of a tactile display, you can propose any idea.

Stéphane Lallée; Ugo Pattacini; Vicky Vouloutsi; Grégoire Pointeau;

Task 21: Visual/Tactile grasp

One of the main problems in classical grasping is the occlusion of the camera by the hand during and after the pre-grasping phase. Using the tactile sensors on the fingers and on the hand, it could be possible to haptically explore the object during the grasping phase even in case of partial occlusion. This would allow for more dexterous and precise grasping.

Alessio Rocchi; Enrico Mingo;

Task 22: Tactile Haptic refinement

When working in unstructured and unknown environments we rely on sensor data to acquire information and act accordingly. 3D information about the environment is usually provided by stereo cameras, Kinect-like sensors or lasers. The reconstruction provided by these sensors can contain artifacts or be incomplete due to external factors such as bad lighting conditions, occlusions, and so on. Tactile sensors on the hand and fingers can be used to complete the 3D information about the considered objects. A possible approach to this problem could be based on graph optimization.

Enrico Mingo; Alessio Rocchi;

Task 23: Software interfaces for codyco

This task is about the definition of simple interfaces for the CODYCO software. Interfaces are necessary to write reusable software that is compliant with existing and new control, identification and learning libraries, and of course to access different robots.

Serena Ivaldi;

Task 24: Learning to reach an object while staying balanced

Use reinforcement learning to extend the robot's arm reach in a multi-contact reaching task. The goal is to allow the robot to touch an object on a table that is initially out of its reach. The robot has to find a good position at which to place a third contact on the table, so that its other arm can reach the goal object, while staying balanced in this multi-contact posture.

Erwan Renaudo;

Task 25: Implement device driver for iCub's inertial sensor using new driver from vendor

This is a purely software task. Xsens has released a new library for the inertial unit mounted on board the iCub. In this task you can experiment with it and update the existing driver.

Task 26: Kinaesthetic Teaching and Synergies

  • The scope of the task is to perform kinaesthetic demonstrations on the iCub in order to collect data for later processing to extract grasping synergies.
  • The second part of the task is to validate the grasping synergies collected from the kinaesthetic experiments by grasping a cuboid, to see how well they perform and how well they scale to different objects.

Giuseppe Cotugno;
