Please populate this page with the name of the project, the names of the people participating in it, and a short description of what the project aims to do and what the individual people plan to do within the group. A group can consist of one or more people.
It is important that everybody works towards something, no matter how simple or complex.
- 1 Wysiwyd EU Project (What you say is what you did)
- 2 Exploration of tools and objects affordances (in the context of Poeticon++)
- 3 Incremental Learning applications on iCub using the GURLS++ package
- 4 Optimal control for COM motion
- 5 Learning action representations from experience (in the context of the Xperience EU project)
- 6 Eye Tracking for Human Robot Interaction
- 7 YARP/ROS Integration with MoveIt!
- 8 3D reconstruction of an object from multiple views
- 9 Whole body related tasks
- 9.1 Floating base rigid body dynamics estimation
- 9.2 Balancing on a single foot
- 9.3 Graphical tools for ZMP and stability region visualizers
- 9.4 Matlab API to the whole body interface
- 9.5 Floating base forward dynamics
- 9.6 Integrating the ISIR whole-body controller with WBI and iCubGazebo
- 9.7 Arm-contact assisted reactive posture control using shoulder virtual-springs model
- 10 "Emergence" of Behaviors
- 11 Object Manipulation, grasping and exploration tasks
- 12 Independent Motion Detection
- 13 Gesture/action recognition
- 14 Object recognition and categorization
Wysiwyd EU Project (What you say is what you did)
- Stephane, Matej: sensorimotor contingencies using SOMs that match other SOMs (e.g. see subtask 2.5 here)
- Grégoire Pointeau: learning through language and semantic bootstrapping (see subtask 2.2): detecting the object of focus, associating several names with one object (or several objects with one name) through language, and learning to use them according to context.
- Martina : online predictions of motion (see subtask 2.3)
- Maxime : Reactable game for Semantic Bootstrapping (see subtask 2.3)
Exploration of tools and objects affordances (in the context of Poeticon++)
People: Afonso, Lorenzo, Tanis, Vadim
Goal: Perform a set of actions on objects using tools, recording visual descriptors of tools, objects, and effects of the action.
Incremental Learning applications on iCub using the GURLS++ package
The main goals of the project are the following:
- Implement a numerically stable incremental RLS update rule (cholupdate) in the GURLS++ framework. The functionality has already been tested in MATLAB, replicating the work by Gijsberts et al.: ‘Incremental Learning of Robot Dynamics using Random Features’
- Design and develop a simple prototypical YARP module to serve as an interface with the robot
- Run an on-line inverse dynamics learner on the real robot
- The GURLS++ package (see http://lcsl.github.io/GURLS/index.html)
- Previous work on non-parametric learning of the inverse dynamics of the iCub arm by Arjan Gijsberts (see http://www.iit.it/images/images/icub-facility/docs/theses/gijsberts.pdf)
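GURLS++ itself is C++ and the prototype was validated in MATLAB, but the core update rule is compact enough to sketch here in Python (illustrative only; `chol_rank1_update` and `IncrementalRLS` are our own names, not GURLS++ API). The trick is to keep the upper-triangular Cholesky factor of X&#7488;X + λI and update it per sample, which is the numerically stable alternative to updating the inverse directly:

```python
import numpy as np

def chol_rank1_update(R, x):
    """Update the upper-triangular Cholesky factor R of A so that the
    result factors A + x x^T (Givens rotations, like MATLAB's cholupdate)."""
    R, x = R.copy(), x.astype(float).copy()
    n = len(x)
    for k in range(n):
        r = np.hypot(R[k, k], x[k])
        c, s = R[k, k] / r, x[k] / r
        R[k, k] = r
        if k + 1 < n:
            row = R[k, k + 1:].copy()
            R[k, k + 1:] = c * row + s * x[k + 1:]
            x[k + 1:] = -s * row + c * x[k + 1:]
    return R

class IncrementalRLS:
    """Regularized least squares, updated one sample at a time."""
    def __init__(self, dim, n_out, lam=1.0):
        self.R = np.sqrt(lam) * np.eye(dim)   # Cholesky factor of X^T X + lam*I
        self.b = np.zeros((dim, n_out))       # running X^T Y
    def update(self, x, y):
        self.R = chol_rank1_update(self.R, x)
        self.b += np.outer(x, y)
    def predict(self, x):
        # Solve (R^T R) w = b; in production, two triangular solves are cheaper.
        w = np.linalg.solve(self.R.T @ self.R, self.b)
        return x @ w
```

In the inverse-dynamics setting of Gijsberts et al., `x` would be a random-feature mapping of joint positions/velocities/accelerations and `y` the measured torques.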
Optimal control for COM motion
People involved: Yue (thanks to Francesco for the idea, and to Silvio and Daniele for the dynamics explanations and help)
The project is about task 1.10. The objective is to use optimal control to generate optimized reference control inputs (torques) for the iCub robot such that it performs some motion at the COM (e.g. swinging left and right while keeping balance).
To reach this goal, forward dynamics is needed; a simple version based on the Whole Body Interface is being developed.
The optimal control problem is solved with the software package MUSCOD-II from the University of Heidelberg.
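MUSCOD-II handles the full nonlinear problem; as a minimal stand-in, the structure can be illustrated with a discrete-time LQR on a linearized inverted-pendulum model of the COM. All parameters below are illustrative assumptions, not iCub values:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati equation to obtain the LQR gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K

# Linearized inverted-pendulum model of the COM: state [position, velocity]
dt, g, h = 0.01, 9.81, 0.5                      # step, gravity, assumed COM height
A = np.array([[1.0, dt], [g / h * dt, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.diag([100.0, 1.0]), R=np.array([[0.1]]))

# Push recovery: regulate the COM back to the upright reference.
# (A left-right swing would track a time-varying reference x_ref instead.)
x = np.array([0.05, 0.0])                       # COM displaced 5 cm
for _ in range(1000):
    u = -K @ x                                  # optimal state feedback
    x = A @ x + B @ u
```

The full iCub problem is of course nonlinear and torque-level, which is why a multiple-shooting package like MUSCOD-II is used rather than an analytic LQR.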
Learning action representations from experience (in the context of the Xperience EU project)
Goal: create a simple demo of learning action representations on the iCub. See here for a complete list of tasks.
- Vadim: generate symbolic action/state descriptions from the iCub database (subtask 6.1). Data should look something like this:
    (AND (HANDEMPTY LEFT) (INHAND nesquik RIGHT) (ROBOTAT TABLE) (ON multivitaminjuice TABLE) (ON popcorn TABLE))
    (PUTDOWN RIGHT TABLE nesquik)
    (AND (HANDEMPTY LEFT) (HANDEMPTY RIGHT) (ROBOTAT TABLE) (ON multivitaminjuice TABLE) (ON nesquik TABLE) (ON popcorn TABLE))
- Kira: learn planning representations from the action/state descriptions in offline batch mode (subtask 6.2) and incrementally online (subtask 6.3). Results should look something like this:
    (:action PUTDOWN
      :parameters (?X1 ?X2 ?X3)
      :precondition (AND (ROBOTAT ?X2) (INHAND ?X3 ?X1))
      :effect (AND (HANDEMPTY ?X1) (ON ?X3 ?X2)))
    (:action GRASP
      :parameters (?X1 ?X2 ?X3)
      :precondition (AND (HANDEMPTY ?X1) (ROBOTAT ?X2) (ON ?X3 ?X2))
      :effect (INHAND ?X3 ?X1))
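For the batch case (subtask 6.2), the simplest estimator intersects state differences and preconditions across examples. This toy sketch (our own naming, not the project's actual code) recovers the PUTDOWN effects from a single before/after pair like the one above:

```python
def learn_operator(examples):
    """examples: list of (pre, post) pairs, each a set of ground literals
    observed around one execution of an action.
    Returns (preconditions, add_effects, del_effects)."""
    pres = [set(pre) for pre, _ in examples]
    adds = [set(post) - set(pre) for pre, post in examples]
    dels = [set(pre) - set(post) for pre, post in examples]
    # Preconditions: literals true before every observed execution.
    # Effects: literals consistently added/deleted by the action.
    return (set.intersection(*pres),
            set.intersection(*adds),
            set.intersection(*dels))
```

A real learner would additionally lift the ground literals into parameterized PDDL operators (the `?X1 ?X2 ?X3` form shown above); incremental online learning (subtask 6.3) would refine these sets example by example.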
Eye Tracking for Human Robot Interaction
People involved: Oskar Palinko
Topics: face detection, face tracking, feature extraction, and gaze tracking.
YARP/ROS Integration with MoveIt!
People involved: Miguel
Goals: plan arm trajectories for the iCub in MoveIt!, and populate the ROS environment with information from the iCub stereo vision.
Also, get the control boards of Vizzy (a robot from my lab) simulated in Gazebo, building on the previous iCub/Gazebo work.
3D reconstruction of an object from multiple views
Contributors: Alessio Mauro Franchi, Evgenii Koriagin, Ilaria Gori, Ugo Pattacini
The aim of the project is to build a 3D object model registering multiple views.
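One building block of multi-view registration is the rigid alignment of two corresponding point sets. A standard SVD-based (Kabsch) sketch, assuming correspondences are already known (in practice they come from feature matching or an ICP loop):

```python
import numpy as np

def register_views(P, Q):
    """Rigid transform (R, t) aligning point set P onto Q, with P and Q
    given as (N, 3) arrays of corresponding points (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of the views
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Iterating this step while re-estimating nearest-neighbor correspondences gives basic ICP; the project's full pipeline would add view selection and model fusion on top.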
Floating base rigid body dynamics estimation
Contributors: Jorhabib Eljaik, Francesco Nori
We will consider the problem of estimating the dynamics of a floating base system. We will start with the simple problem of estimating the dynamics of a single rigid body with distributed force/torque sensors, accelerometers and gyroscopes. Adopted techniques will include Kalman filtering.
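A minimal linear example of the kind of filtering involved: a 1-D Kalman filter that predicts with an accelerometer and corrects with a position-like measurement. All models and noise values here are illustrative assumptions, not the actual floating-base estimator:

```python
import numpy as np

# State: [position, velocity] of the body along one axis. The accelerometer
# reading acts as the control input; a position-like measurement corrects it.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)       # process noise (accelerometer noise, model error)
R = np.array([[0.25]])     # measurement noise (0.5 std, squared)

def kf_step(x, P, acc, z):
    """One predict/correct cycle of the linear Kalman filter."""
    x = A @ x + (B * acc).ravel()          # predict with the accelerometer
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()      # correct with the measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The real problem adds orientation (gyroscopes) and distributed force/torque measurements, which makes the state larger and the model nonlinear, but the predict/correct structure is the same.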
Balancing on a single foot
Contributors: Daniele Pucci, Silvio Traversaro
Test the balancing demo developed for the first year of the CoDyCo project with the robot standing on a single foot.
Graphical tools for ZMP and stability region visualizers
Contributors: Morteza Azad.
This module will be a graphical visualization tool for ZMP, balance etc. It will essentially be a 2D plot visualizing the feet (shape can be extracted from CAD drawings), the support polygon, the stability region, the ZMP, the COP, etc. It will integrate information provided by the dynamics modules, the skin information, etc.
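Underneath the visualization, the core geometric test is whether the ZMP (or COP) lies inside the support polygon. A sketch for a convex polygon with counter-clockwise vertices (names and foot shape are placeholders, not the module's API):

```python
def zmp_inside(zmp, polygon):
    """True if point zmp = (x, y) lies inside a convex support polygon,
    given as a list of (x, y) vertices in counter-clockwise order."""
    x, y = zmp
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Cross product of edge and point; negative means the point is to
        # the right of this edge, i.e. outside a CCW convex polygon.
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True
```

For double support, the polygon would be the convex hull of both feet's vertices (extracted from the CAD drawings, as noted above).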
Matlab API to the whole body interface
Contributors: Naveen Kuppuswamy, Jorhabib Eljaik
The WBI-Toolbox offers a Simulink interface to the whole body interface. Certain parts of the WBI can offer important functionality in Matlab as well. For example, the wholeBodyModel should have a Matlab interface to allow inverse and forward dynamics computations in Matlab. This would allow numerical integration of the forward dynamics components in Matlab, outside Simulink. It would potentially also allow porting to GNU Octave, giving the WBI-Toolbox a FOSS option.
Proposed interface example :
[mass_matrix] = mexWholeBodyModel('mass_matrix',q,q_dot);
[lowerJointLimits, higherJointLimits] = mexWholeBodyModel('joint_limits');
[JacobianCoM] = mexWholeBodyModel('Jacobian','CoM',q,q_dot);
Floating base forward dynamics
Contributors: Naveen Kuppuswamy, Daniele Pucci, Jorhabib Eljaik
In this task we aim to compute the floating base forward dynamics. The idea is to provide some preliminary implementations that assume a floating base system subject to a number of rigid constraints. The implementation could make use of the formulae presented by Aghili, 2005 and reported in formula (6.18) of Andrea Del Prete's PhD thesis.
The second aim is to achieve numerical integration using ode15s for high-speed torque control prototyping. For instance, one demo scenario is to input torques from the dancing iCub demo (moving the CoM position while balancing) and store the output of the integration as a timeseries which can then be replayed on the robot visualization for benchmarking. This allows benchmarking of novel torque controllers free of unknown errors such as unmodelled dynamics, software bugs in the complex architecture, etc.
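The ode15s integration lives on the MATLAB side; the shape of the loop can be sketched with a fixed-step RK4 on a 1-DOF pendulum standing in for the constrained floating-base dynamics (all parameters are illustrative; in the real setting `forward_dynamics` would wrap the wholeBodyModel mass-matrix and bias-force computations):

```python
import numpy as np

# Toy stand-in for the forward dynamics: a damped 1-DOF pendulum,
# qdd = (tau - m*g*l*sin(q) - d*qd) / (m*l^2).
m, l, g, d = 1.0, 0.5, 9.81, 0.05

def forward_dynamics(t, x, tau):
    q, qd = x
    qdd = (tau - m * g * l * np.sin(q) - d * qd) / (m * l**2)
    return np.array([qd, qdd])

def rk4_step(f, t, x, u, h):
    """One classical Runge-Kutta 4 step of size h."""
    k1 = f(t, x, u)
    k2 = f(t + h / 2, x + h / 2 * k1, u)
    k3 = f(t + h / 2, x + h / 2 * k2, u)
    k4 = f(t + h, x + h * k3, u)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate a torque trajectory and store the result as a timeseries,
# which could then be replayed on the robot visualization.
h, T = 0.001, 2.0
ts = np.arange(0, T, h)
traj = np.zeros((len(ts), 2))
x = np.array([0.1, 0.0])            # start 0.1 rad from the equilibrium
for i, t in enumerate(ts):
    traj[i] = x
    x = rk4_step(forward_dynamics, t, x, 0.0, h)   # zero torque: free swing
```

Stiff contact constraints are what motivate ode15s over a fixed-step explicit scheme like this one; the replay/benchmarking idea is independent of the integrator choice.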
Integrating the ISIR whole-body controller with WBI and iCubGazebo
Contributors: Darwin Lau, Mingxing Liu, Ryan Lober
The whole-body controller developed at ISIR has shown promising results as a whole-body torque-based controller in our simulation software using iCub models. Our goal is to integrate this controller with the Whole-Body-Interface (WBI) of the iCub in order to begin testing the controller with different simulators and more importantly in real world implementations.
Arm-contact assisted reactive posture control using shoulder virtual-springs model
In this scenario the iCub stands and holds a rigid object in the environment with one or both arms (for simplicity, this description assumes a planar system). The task is to balance in the presence of external perturbations. The idea of the proposed balance model is to divide the iCub's kinematic structure into two kinematic chains: one from the foot (first point of contact) to the shoulder, and the other from the hand (second point of contact) to the shoulder. Each of these chains acts as a virtual spring with Cartesian stiffness K1 and K2, respectively. We set the reference shoulder position to the desired position (for example, upright stance). When a perturbation force is applied to the robot body, the shoulder is displaced according to the stiffness parameters. The virtual springs oppose the perturbation and push the shoulder back toward the reference position.
This idea was inspired by the study of human postural control with hand contact as part of the CoDyCo project.
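In the planar, static case the two chains act as parallel springs at the shoulder, so the response to a perturbation reduces to a few lines. The stiffness values below are arbitrary placeholders, not identified iCub parameters:

```python
# Two virtual springs (foot->shoulder and hand->shoulder) acting in
# parallel at the shoulder; stiffness values are illustrative [N/m].
K1, K2 = 400.0, 250.0
x_ref = 0.0          # desired shoulder position (upright stance)

def shoulder_response(F_ext):
    """Static shoulder displacement under a perturbation force F_ext [N],
    and the restoring force contributed by each virtual spring."""
    dx = F_ext / (K1 + K2)           # parallel springs share the load
    f1, f2 = -K1 * dx, -K2 * dx      # restoring force of each chain
    return x_ref + dx, f1, f2
```

At equilibrium the two restoring forces sum to exactly minus the perturbation, which is the static version of the behavior described above.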
"Emergence" of Behaviors
The tasks are:
Jimmy Baraglia: build the robot's internal intention/action model and program self-supervised learning driven by prediction-error minimization when observing others.
Joshua Pepneck: recognize object-directed actions and gaze at the "most interesting" point in the image.
Takuji Yoshida: control the robot to perform simple object-directed actions.
See here for more details.
Object Manipulation, grasping and exploration tasks
Visually guided grasping
Contributors: Mihai Gansari
Use stereo vision to detect objects and their locations. Plan the positioning of the hand to bring it close to an object and grasp/push it, using already implemented software or developing new techniques. The short-term goal is to make the robot grab/push an object and pass it to the other hand and vice versa. The long-term goal is to obtain a playing robot that continuously throws an object from one hand to the other.
Independent Motion Detection
Contributors: Sriram Kumar K
The goal is independent motion detection via weakly supervised learning, reimplementing "Weakly Supervised Strategies for Natural Object Recognition in Robotics". The task consists of two steps: 1) egomotion learning and 2) anomaly detection. The idea is to learn the normal behavior of the optical flow in the presence of pure egomotion, so that independently moving objects can be modeled as anomalies of the optical flow.
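The anomaly-detection step can be sketched as fitting a Gaussian to flow vectors observed under pure egomotion and thresholding the Mahalanobis distance. This is a simplification of the paper's pipeline; the function names and threshold are our own:

```python
import numpy as np

def fit_flow_model(flows):
    """Fit a Gaussian to (N, 2) optical-flow vectors observed under pure
    egomotion; returns the mean and inverse covariance."""
    mu = flows.mean(axis=0)
    cov = np.cov(flows.T) + 1e-6 * np.eye(flows.shape[1])  # regularize
    return mu, np.linalg.inv(cov)

def anomaly_mask(flows, mu, cov_inv, thresh=9.0):
    """Flag flow vectors whose squared Mahalanobis distance from the
    egomotion model exceeds thresh (independent motion candidates)."""
    d = flows - mu
    m2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return m2 > thresh
```

In the full system the egomotion model would be conditioned on the commanded head/eye velocities rather than a single global Gaussian, but the anomaly test has the same shape.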
Gesture/action recognition
Contributors: Jan Schneider, Zahra Gharaee, Christos Melidis, Luke Boorman
Use temporal population coding and self-organizing maps to classify and differentiate gestures retrieved from Kinect data and then move the iCub to imitate the seen action.
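A minimal 1-D SOM in NumPy shows the classification mechanism (assigning an input to its best-matching unit); the temporal population coding layered on top is not sketched here, and all hyperparameters are illustrative:

```python
import numpy as np

def train_som(data, n_nodes=10, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D self-organizing map on (N, d) feature vectors
    (e.g. per-frame joint coordinates from the Kinect)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), size=(n_nodes, data.shape[1]))
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                    # decaying learning rate
        sigma = max(sigma0 * (1 - e / epochs), 0.5)    # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            dist = np.abs(np.arange(n_nodes) - bmu)    # distance on the map
            hfun = np.exp(-dist**2 / (2 * sigma**2))   # neighborhood function
            w += lr * hfun[:, None] * (x - w)
    return w

def classify(x, w):
    """Index of the best-matching unit for input x."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))
```

For gesture recognition, distinct gestures should end up owning distinct regions of the map, and imitation would then map the winning region back to an iCub motor command.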
Object recognition and categorization
Contributors: Giulia Pasquale
The goal of this (long-term) project is to build a robust recognition system to be run on the iCub during demos in Human-Robot Interaction settings. To this end, the following incremental tasks have been planned:
- Extension of the iCubWorld Dataset (http://www.iit.it/it/projects/data-sets.html) with periodic acquisitions of images.
- Benchmarking of state-of-the-art methods for object recognition on the acquired dataset, to determine the limits of such methods in a robotic scenario.
- Identification/formulation of a suitable recognition system, focusing in particular on the learning of the visual representation, a fundamental prerequisite for the downstream classification step.
- Adaptation of the algorithm to an online setting and implementation on the YARP/iCub platform.
The short-term goals for this VVV14 school are (i) to set up a fast acquisition procedure for the dataset and (ii) to begin the benchmarking phase.
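For the benchmarking phase, even a trivial baseline helps calibrate expectations. A nearest-class-mean classifier sketch (purely illustrative; not one of the state-of-the-art methods to be benchmarked, and the function name is ours):

```python
import numpy as np

def benchmark_nearest_centroid(X_train, y_train, X_test, y_test):
    """Accuracy of a nearest-class-mean classifier on feature vectors:
    each test sample is assigned the class of the closest class centroid."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None], axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == y_test).mean()
```

Running a baseline like this on each new iCubWorld acquisition gives a floor against which the stronger recognition pipelines can be compared.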