VVV14 tasks

Task 1: Torque controlled whole body motion

Contributors (alphabetical order): Jorhabib Eljaik, Naveen Kuppuswamy, Francesco Nori, Daniele Pucci, Silvio Traversaro

Within the CoDyCo European project (see www.codyco.eu for details), a set of software tools has been developed to implement whole-body torque control techniques. These techniques make iCub one of the few robots capable of balancing by controlling forces and torques at all contact points (see [1] and the related videos). Building on this work, we aim to develop a dynamic walking controller that will give iCub the ability to walk in torque control mode. In addition, the software tools will also be used for goal-directed whole-body behaviors that employ the torque control mode.

Subtask 1.1: balancing on a single foot

Contributors: Daniele Pucci, Jorhabib Eljaik

The first-year CoDyCo demo gave iCub the ability to balance on two feet. A first step towards walking is a controller that allows iCub to balance on a single foot. This controller will be implemented by extending the work and the software tools developed for that demo.

Subtask 1.2: graphical tools for ZMP and stability region visualizers

Contributors: none.

This module will be a graphical visualization tool for the ZMP, balance, etc. It will essentially be a 2D plot showing the feet (whose shape can be extracted from the CAD drawings), the support polygon, the stability region, the ZMP, the COP, etc. It will integrate the information provided by the dynamics modules, the skin, and so on.

Subtask 1.3: identification of linear systems

Contributors: none so far.

Many unmodeled dynamics can be characterized, to a first approximation, as linear systems. For instance, when applying the computed torque control technique, the mismatch between the desired acceleration and the actual one can be modeled as a linear system. To improve the system performance, it would therefore be useful to apply linear identification techniques to such systems. For this subtask we need a software toolbox that identifies a linear system: it should take pairs of inputs u and outputs y and estimate the underlying linear model. Available libraries could simplify the identification problem; one of them is SLICOT, which however targets Matlab and Fortran and is therefore not ideal. A possibility is to create a C/C++ interface to Matlab. The goal of this subtask is to show that the controller performance (in terms of step response, tracking error, and settling time) improves.
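
As a concrete illustration, the sketch below fits a first-order ARX model y[k] = a*y[k-1] + b*u[k-1] to logged input/output pairs by ordinary least squares. This is only a minimal example under the assumption that Eigen is available; the data are made up, and higher-order or MIMO models would simply add columns to the regressor matrix.

// Minimal sketch: identify a first-order ARX model y[k] = a*y[k-1] + b*u[k-1]
// from logged input/output pairs, using ordinary least squares with Eigen.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

int main()
{
    // Example data (made up); on the robot these would be logged (u, y) pairs.
    std::vector<double> u = {1.0, 1.0, 1.0, 0.0, 0.0, 0.0};
    std::vector<double> y = {0.0, 0.5, 0.75, 0.875, 0.4375, 0.21875};

    const int N = static_cast<int>(y.size()) - 1;
    Eigen::MatrixXd Phi(N, 2);   // regressor matrix, rows [y[k-1], u[k-1]]
    Eigen::VectorXd Y(N);        // target vector, entries y[k]
    for (int k = 1; k <= N; ++k) {
        Phi(k - 1, 0) = y[k - 1];
        Phi(k - 1, 1) = u[k - 1];
        Y(k - 1) = y[k];
    }

    // Least-squares estimate of the parameters [a, b].
    Eigen::VectorXd theta = Phi.colPivHouseholderQr().solve(Y);
    std::cout << "a = " << theta(0) << ", b = " << theta(1) << std::endl;
    return 0;
}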

Subtask 1.4: footstep planner

Contributors: none so far.

A footstep planner is required to implement the walking behavior and to avoid possible obstacles on the ground. Some recent open source code has been released within the ROS framework (the ROS footstep planner), and it would be good to understand whether this code can be integrated within the YARP framework.

Subtask 1.5: floating base posture estimation

Contributors: none so far.

When controlling a floating base system, knowledge of the base pose (i.e. its position and orientation) is fundamental for all controllers. Typically, there is no direct way to measure the floating base pose, and therefore a suitable estimation procedure is necessary. In this task, we aim at developing software that implements a Kalman-filter-like posture estimator. This task requires two main subcomponents: a tool for computing the floating base forward dynamics and a tool for computing derivatives of the forward and inverse models.
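
For reference, a Kalman-filter-like estimator builds on the standard extended Kalman filter recursion below (generic textbook form; the specific state and measurement models for the floating base are exactly what this task has to define):

\begin{align}
\hat{x}_{k|k-1} &= f(\hat{x}_{k-1|k-1}, u_{k-1}), &
P_{k|k-1} &= F_k P_{k-1|k-1} F_k^{\top} + Q_k, \\
K_k &= P_{k|k-1} H_k^{\top} \left( H_k P_{k|k-1} H_k^{\top} + R_k \right)^{-1}, \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k \big( z_k - h(\hat{x}_{k|k-1}) \big), &
P_{k|k} &= (I - K_k H_k) P_{k|k-1},
\end{align}

where F_k and H_k are the Jacobians of the process model f and of the measurement model h. Computing these Jacobians is precisely what subtasks 1.5.2 and 1.5.3 address.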

Subtask 1.5.1: floating base forward dynamics

Contributors: Naveen Kuppuswamy, Daniele Pucci, Jorhabib Eljaik

In this task we aim at computing the floating base forward dynamics. The idea is to provide a preliminary implementation that assumes a floating base system subject to a number of rigid contact constraints. The implementation could make use of the formulae presented by Aghili (2005) and reported in equation (6.18) of Andrea Del Prete's PhD thesis.
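
In general terms (the exact expression used by Aghili (2005) and in Del Prete's thesis may differ in notation), the rigidly constrained floating base dynamics take the form

\begin{align}
M(q)\,\dot{\nu} + h(q,\nu) &= S^{\top}\tau + J_c(q)^{\top} f, \\
J_c(q)\,\dot{\nu} + \dot{J}_c(q,\nu)\,\nu &= 0,
\end{align}

where M is the mass matrix, h collects the Coriolis, centrifugal and gravity terms, S selects the actuated joints, J_c is the contact Jacobian and f the contact wrenches. Solving this linear system for \dot{\nu} and f yields the constrained forward dynamics.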

Subtask 1.5.2: forward and inverse dynamics differentiation (finite differencing)

Contributors: none so far.

In many optimization and estimation contexts, the derivatives of the whole-body floating base dynamics play a crucial role. As a first implementation, it would be good to have finite-differencing routines for important functions such as Jacobians and the forward and inverse dynamics. This implementation is far from optimal, but it is definitely needed for applications like extended Kalman filtering and optimal control.
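
A minimal sketch of the idea, assuming Eigen is available and using a generic std::function as a stand-in for the actual dynamics routines:

// Minimal sketch: central finite differences of a generic vector-valued
// function f: R^n -> R^m (e.g. a forward dynamics wrapper).
#include <Eigen/Dense>
#include <cmath>
#include <functional>
#include <iostream>

Eigen::MatrixXd numericalJacobian(
    const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& f,
    const Eigen::VectorXd& x,
    double h = 1e-6)
{
    const Eigen::VectorXd f0 = f(x);
    Eigen::MatrixXd J(f0.size(), x.size());
    for (int i = 0; i < x.size(); ++i) {
        Eigen::VectorXd xp = x, xm = x;
        xp(i) += h;
        xm(i) -= h;
        J.col(i) = (f(xp) - f(xm)) / (2.0 * h);   // central difference, O(h^2) error
    }
    return J;
}

int main()
{
    // Toy check on f(x) = [x0*x1, sin(x0)], whose analytic Jacobian is known.
    auto f = [](const Eigen::VectorXd& x) {
        Eigen::VectorXd y(2);
        y << x(0) * x(1), std::sin(x(0));
        return y;
    };
    Eigen::VectorXd x(2);
    x << 0.3, -1.2;
    std::cout << numericalJacobian(f, x) << std::endl;
    return 0;
}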

Subtask 1.5.3: forward and inverse dynamics differentiation (automatic differentiation)

Contributors: none so far.

Several optimization and estimation algorithms require the derivatives of the whole-body floating base dynamics. One of the tools for computing derivatives is Automatic Differentiation (AD), which is based on the iterative application of the chain rule to obtain the desired derivatives. The task consists of finding an AD library (open source, for C/C++) and comparing it against numerical and symbolic differentiation methods. This comparison aims at evaluating the accuracy of each approach. As a starting point, the reader is referred to http://www.autodiff.org/?module=Tools&language=C%2FC%2B%2B for a list of C/C++ libraries.
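
To make the principle concrete, the following is a minimal forward-mode AD sketch based on dual numbers (for illustration only; in the task an existing library from the list above would be used):

// Minimal sketch of forward-mode automatic differentiation with dual numbers:
// each value carries its derivative, and the chain rule is applied operation
// by operation.
#include <cmath>
#include <iostream>

struct Dual {
    double v;   // value
    double d;   // derivative with respect to the chosen independent variable
};

Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

int main()
{
    Dual x{0.5, 1.0};              // seed: dx/dx = 1
    Dual y = sin(x) + x * x;       // y = sin(x) + x^2
    std::cout << "y     = " << y.v << "\n"
              << "dy/dx = " << y.d << std::endl;   // equals cos(0.5) + 2*0.5
    return 0;
}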

Subtask 1.6: matlab API to the whole body interface

Contributors: Naveen Kuppuswamy, Jorhabib Eljaik

The WBI-Toolbox offers a Simulink interface to the whole body interface (WBI). Certain parts of the WBI could also offer important functionality directly in Matlab. As an example, the wholeBodyModel should have a Matlab interface that allows inverse and forward dynamics computations from Matlab.
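
One natural way to expose such functionality to Matlab is a MEX gateway. The sketch below is purely illustrative: the computation is a placeholder, not the actual wholeBodyModel API.

// Minimal MEX gateway sketch: receives a joint-position vector from Matlab and
// returns a vector of the same size. A real implementation would call the
// whole body interface (e.g. inverse dynamics) where the placeholder is.
#include "mex.h"
#include <vector>

void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
{
    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected one double vector of joint positions.");

    const mwSize n = mxGetNumberOfElements(prhs[0]);
    const double* q = mxGetPr(prhs[0]);

    // Placeholder computation: simply copy the input.
    std::vector<double> out(q, q + n);

    plhs[0] = mxCreateDoubleMatrix(n, 1, mxREAL);
    double* y = mxGetPr(plhs[0]);
    for (mwSize i = 0; i < n; ++i)
        y[i] = out[i];
}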

Subtask 1.7: low level torque control

Contributors: Daniele Pucci

This task concerns torque control at the firmware level. Currently, torque control relies on the identification of the motor transfer function, performed with the motor friction identification module. The idea is to transfer the identified motor transfer function parameters to the firmware controller. This activity has already been started under the supervision of Marco Randazzo.

Subtask 1.8: software interface for the new foot (with double FT sensor)

Contributors: none so far.

The iCub foot was recently modified to house two force/torque (FT) sensors. In order to make this mechanical upgrade back-compatible, it is necessary to write an ad-hoc CAN driver, similar to canAnalogSensor, that reads the two sensors simultaneously.
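
While the driver itself belongs to the CAN/device layer, a minimal user-level sketch of the intended behavior (two 6-axis readings exposed as a single 12-element vector, assuming the two sensors are published on two YARP analog ports; all port names are hypothetical) could look like this:

// Minimal sketch: merge the readings of the two foot FT sensors into a single
// 12-element vector so that existing modules expecting one port keep working.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/sig/Vector.h>

int main()
{
    yarp::os::Network yarp;

    yarp::os::BufferedPort<yarp::sig::Vector> inFront, inRear, out;
    inFront.open("/foot/ft_front:i");
    inRear.open("/foot/ft_rear:i");
    out.open("/foot/ft_merged:o");

    while (true) {
        yarp::sig::Vector* front = inFront.read();   // blocking read, 6 values
        yarp::sig::Vector* rear  = inRear.read();    // blocking read, 6 values
        if (!front || !rear)
            break;

        yarp::sig::Vector& merged = out.prepare();
        merged.resize(12);
        for (size_t i = 0; i < 6; ++i) {
            merged[i]     = (*front)[i];
            merged[i + 6] = (*rear)[i];
        }
        out.write();
    }
    return 0;
}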

Subtask 1.9: bi-manual unknown object impedance estimation

Contributors: Naveen Kuppuswamy, Silvio Traversaro

Bi-manual grasping with Cartesian impedance control, towards estimating the impedance of unknown objects. Bi-manual grasping must first be implemented with Cartesian impedance control and used to grab various objects such as balloons, beach balls, etc. By using regression techniques and some simple movement behaviours, the impedances of these objects will then be estimated.
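
As a toy illustration of the regression step (made-up data and a purely scalar elastic model; the real estimation would be multi-dimensional and include damping), the stiffness of a squeezed object can be obtained by least squares from displacement/force samples:

// Minimal sketch: scalar least-squares fit F = k*dx from recorded squeeze
// displacements dx and measured contact forces F.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Example data: hand-to-hand displacement (m) and squeeze force (N), made up.
    Eigen::VectorXd dx(5), F(5);
    dx << 0.005, 0.010, 0.015, 0.020, 0.025;
    F  << 1.1,   1.9,   3.2,   4.0,   5.1;

    // Closed-form least squares for a single parameter: k = (dx'F) / (dx'dx).
    double k = dx.dot(F) / dx.dot(dx);
    std::cout << "estimated stiffness k = " << k << " N/m" << std::endl;
    return 0;
}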

Subtask 1.10: Optimal control for COM motion

Contributors: Yue Hu

The target motion is to keep the robot's feet fixed on the ground while varying the COM position. Open-loop control torques will be generated with the software package MUSCOD-II from the University of Heidelberg, considering the whole-body dynamics. The idea is then to implement a simple feedback law to apply these controls on the real robot. The WholeBodyInterface implemented for iCub is expected to be used in this context.

Task 2: WYSIWYD EU Project

Click on the link in the title for more details.

Subtask 2.1: First Person Sensory Motor Database

We will use the robots and the datadumper to produce a database of sensorimotor interaction from the robot's perspective. This database will include, but is not limited to: joints, cameras, Kinect data, and tactile data. Several interaction types will be used to populate the database (motor babbling, self touch, tracking people, watching objects, people imitating the robot, etc.). Ideally, we will come up with a plan for an online platform that the iCub community could use to exchange such datasets.

Subtask 2.2: Cross-situational word learning

Contributors: Grégoire Pointeau.

Context: Human and robot are face to face in a shared workspace with several manipulable objects.

Robot: Low level attention system detects toy giraffe on table. Interaction engine (WP5) generates a pointing and gaze (WP6) to the toy giraffe, and a gaze to the human. Primitive representation of giraffe activated in conceptual system.

Human: “Oh look at the giraffe, you see the giraffe?”

Robot: ASR recognizes the words. Cross-situational learning (*) begins to link the recognized words with the representation of the giraffe in the conceptual system (WP4). Over successive examples, the binding between the word “giraffe” and the representation of the giraffe in the conceptual system comes to dominate.

(*) Cross-situational learning: associate all the words in the sentence with the current object of focus. Over multiple situations, the mapping between the object and its referent word will statistically become stronger.
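
A minimal sketch of this counting scheme (made-up utterances; in practice frequent function words such as "the" also accumulate counts and require normalization or mutual-information-style scoring):

// Minimal sketch of cross-situational learning as co-occurrence counting:
// every word heard while an object is in focus increments that (object, word)
// count; across situations the referent word accumulates the highest count.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main()
{
    // (object in focus, words heard in that situation)
    std::vector<std::pair<std::string, std::vector<std::string>>> situations = {
        {"giraffe", {"oh", "look", "at", "the", "giraffe"}},
        {"giraffe", {"you", "see", "the", "giraffe"}},
        {"ball",    {"look", "at", "the", "ball"}}
    };

    std::map<std::string, std::map<std::string, int>> counts;
    for (const auto& s : situations)
        for (const auto& w : s.second)
            ++counts[s.first][w];

    // The word co-occurring most often with "giraffe" is its likely name.
    std::string best;
    int bestCount = 0;
    for (const auto& wc : counts["giraffe"]) {
        if (wc.second > bestCount) {
            best = wc.first;
            bestCount = wc.second;
        }
    }
    std::cout << "giraffe -> \"" << best << "\" (" << bestCount << " co-occurrences)" << std::endl;
    return 0;
}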

Subtask 2.3: Semantic Bootstrapping

Contributors : Anne-Laure Mealier, Maxime Petit

Context: a game with the robot and three agents around the table. Each player has a ladder in front of him and a token. On each turn, a player can push his own token or ask another player to pull it; turns are assigned randomly. The first player to get his token to the center wins. The goal for the robot is to detect each action of the game (a player moving, a player speaking), to understand the goal of each agent, and to understand the interactions between the players.

Subtask 2.4: Movements learning

A module for learning movements (e.g. pointing, pushing, etc.) will run in parallel with the existing ones. The new module will provide a model that describes the learnt movements. Predictions for the upcoming steps of the motion can be computed and compared with the actual positions, and the resulting error can be used as feedback for the learning structure.

Contributors: Christos Melidis

Subtask 2.5: Tactile-proprioceptive(-visual) body schema using multi-modal convergence maps (MMCM)

The idea would be to extend the architecture of (Lallee & Dominey 2013) and code (CVZ in the wysiwyd repo) to the tactile modality.

Data generation: iCub self-touching behaviors, either using the automated double-touch by Alessandro Roncone or by idling the joints and generating the data with the help of the experimenter. (The data would flow into the SM database.)

Behavior: After learning, we can touch the iCub on a body part or stimulate a skin part artificially (directly in the map) and see whether the MMCM can generate the movement to touch the respective part.

Extension: spoken language interaction. For example, as people touch iCub on different body parts, they would name them. This can also be done when iCub touches itself autonomously. We can work in parallel with the speech recognizer and grammar systems, and also directly with sound.

References: Lallee, S., and Dominey, P.F. "Multi-modal convergence maps: from body schema and self-representation to mental imagery." Adaptive Behavior (2013).

Contact/Coordination: If you are interested, please talk to Matej Hoffmann (matej DOT hoffmann AT iit DOT it) or Stephane Lallee (stephane DOT lallee AT gmail DOT com).

Contributors: Christos Melidis

Subtask 2.6: Enforce back-compatibility of speechRecognizer

The module speechRecognizer is in charge of the speech recognition task within the project, but it is also widely used in other applications (e.g. IOL). Its new C++ implementation needs to be made back-compatible with the original C# version.

Subtask 2.7: Plug ARE in the WYSIWYD system

Use ARE to perform pointing and grasping within the WYSIWYD software structure through the so-called iCub client interfaces.

Contributors: Anne-Laure MEALIER

Task 3: Object Manipulation, grasping and exploration

This task groups several projects related to object grasping using vision and tactile feedback.

Subtask 3.1: Visually guided grasping

Participant(s): Mihai Gansari

Use stereo vision to detect objects and their shape. Plan the positioning of the hand to bring the fingers close to the object. We can use software already implemented or develop new techniques.

Subtask 3.2: Gentle grasp

Use tactile feedback to regulate the force exerted by the fingers on the object. Positioning of the hand can be either preprogrammed, if the object is in a known position, or driven by vision. Regulation can be done using traditional techniques (PID) or using learning (reinforcement learning, PI^2) to maximize grasp success (lifting).
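
A minimal sketch of the traditional option, regulating a scalar fingertip force with a PID loop (the sensor reading and the plant are simulated placeholders, not the real iCub tactile or motor interfaces):

// Minimal sketch: PID regulation of the normal force at a fingertip toward a
// small target value for a gentle grasp. The "plant" is a crude placeholder.
#include <iostream>

struct PID {
    double kp, ki, kd;
    double integral, prevErr;
    double step(double err, double dt) {
        integral += err * dt;
        const double deriv = (err - prevErr) / dt;
        prevErr = err;
        return kp * err + ki * integral + kd * deriv;
    }
};

int main()
{
    const double dt = 0.01;           // control period [s]
    const double targetForce = 0.5;   // desired fingertip force (arbitrary units)
    PID pid{2.0, 1.0, 0.0, 0.0, 0.0};

    double force = 0.0;               // placeholder: would come from the tactile sensor
    for (int k = 0; k < 200; ++k) {
        const double cmd = pid.step(targetForce - force, dt);
        // Placeholder plant: the finger closing command increases the contact
        // force once the finger is in touch with the object.
        force += 0.5 * cmd * dt;
    }
    std::cout << "final force = " << force << std::endl;
    return 0;
}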

Subtask 3.3: Object exploration

Extract features from the fingertips to explore objects using local features like corners and edges.

References:

  • Martinez-Hernandez, U, Dodd T, Natale L, Metta G, Prescott TJ, Lepora N. 2013. Active contour following to explore object shape with robot touch. IEEE World Haptics Conference 2013.
  • Using Bayesian Filtering to Localize Flexible Materials During Manipulation, Robert Platt, Jr., Frank Permenter, and Joseph Pfeiffer, IEEE TRANSACTIONS ON ROBOTICS, VOL. 27, NO. 3, JUNE 2011

Subtask 3.4: Object localization

Object localization using tactile or force feedback. Use haptic interaction to localize objects.

References:

  • Anna Petrovskaya and Oussama Khatib, Global Localization of Objects via Touch IEEE TRANSACTIONS ON ROBOTICS, VOL. 27, NO. 3, JUNE 2011.
  • K. Gadeyne and H. Bruyninckx, “Markov techniques for object localization with force-controlled robots,” in Proc. Int. Conf. Advanced Robot, 2001, pp. 91–96.

Subtask 3.5: Merging point clouds of multiple views

3D object completion using registration procedures such as the Iterative Closest Point (ICP) algorithm. Right now, we have an algorithm that performs power grasps using only the visible (frontal) part of the object; for further information see

I.Gori, U. Pattacini, V. Tikhanoff, G. Metta, Ranking the Good Points: A Comprehensive Method for Humanoid Robots to Grasp Unknown Objects, ICAR, 2013.

Subtask 3.5.1: Exploration for multiple views of objects

Contributors: Alessio Mauro Franchi

For this subtask we would like to create a module for non-continuous exploration of objects, in order to get multiple views of them. Exploration will involve both gaze and torso movements and will use waypoints; interaction with other modules is required, both to acknowledge each new position received and to receive commands.

Subtask 3.5.2: Merging point clouds

Contributors: Evgenii Koriagin

Receive point clouds of the object as seen by the robot from different viewpoints and merge this information into one coherent spatial representation using ICP.
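
A minimal sketch of the registration and merging step using PCL (the clouds here are filled with a few made-up points; in the task they would come from the views acquired in subtask 3.5.1):

// Minimal sketch: align a "source" view to a "target" view with ICP and merge
// them into one cloud.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>
#include <iostream>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);

    // Placeholder data: the source view is the target view shifted by 1 cm in x.
    for (float x = 0.0f; x < 0.05f; x += 0.01f) {
        target->push_back(pcl::PointXYZ(x, 0.0f, 0.0f));
        target->push_back(pcl::PointXYZ(x, 0.02f, 0.0f));
        source->push_back(pcl::PointXYZ(x + 0.01f, 0.0f, 0.0f));
        source->push_back(pcl::PointXYZ(x + 0.01f, 0.02f, 0.0f));
    }

    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaximumIterations(50);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                       // source expressed in the target frame

    if (icp.hasConverged()) {
        std::cout << "ICP fitness: " << icp.getFitnessScore() << std::endl;
        pcl::PointCloud<pcl::PointXYZ> merged = *target;
        merged += aligned;                    // single coherent representation
        std::cout << "merged cloud size: " << merged.size() << std::endl;
    }
    return 0;
}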

Subtask 3.6: Exploration of tool affordances

Participant(s): Lorenzo Jamone, Afonso Gonçalves, Tanis Mar

Build an application that allows the iCub to autonomously explore inter-object affordances based on vision.

Task 4: Object recognition, detection and localization

This task groups activities related to object learning and recognition. Different techniques can be explored and compared against a robot related database.

Subtask 4.1: Learning object recognition and localization

Participant(s): Sriram Kumar

Subtask 4.2: Determine the full pose (position and orientation) of objects using local features.

Subtask 4.3: Improve approximate object pose estimates using the ICP algorithm.

References:

A. Collet Romea, D. Berenson, S. Srinivasa and D. Ferguson, "Object Recognition and Full Pose Registration from a Single Image for Robotic Manipulation". Proc. ICRA 2009.

M. Lourakis and X. Zabulis, "Model-based Pose Estimation for Rigid Objects". Proc. ICVS 2013.

U. Castellani and A. Bartoli, "3D Shape Registration". In Pears, N., Liu, Y., Bunting,P., eds. 3D Imaging, Analysis and Applications. Springer London 2012.

Task 5: YARP/ROS Integration

Subtask 5.1: iCub and Octomap

Convert stereo vision data from the iCub to PCL, in order to fill the OctoMap environment used by Gazebo in ROS. Allow ROS MoveIt! to plan trajectories for the iCub arm while avoiding collisions with objects.
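
A minimal sketch of the map-filling step with the OctoMap library (the 3D points are placeholders; in the task they would come from the stereo/PCL pipeline):

// Minimal sketch: insert 3D points into an OctoMap occupancy tree and save it.
#include <octomap/octomap.h>

int main()
{
    octomap::OcTree tree(0.01);   // 1 cm resolution

    // Placeholder points (meters); in the task they would come from stereo vision.
    const double pts[][3] = {{0.3, 0.00, 0.10}, {0.3, 0.05, 0.10}, {0.3, 0.10, 0.12}};
    for (const auto& p : pts)
        tree.updateNode(octomap::point3d(p[0], p[1], p[2]), true);   // mark occupied

    tree.updateInnerOccupancy();
    tree.writeBinary("icub_scene.bt");   // the saved tree can then be used on the ROS side
    return 0;
}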

Subtask 5.2: Simulation of YARP robots on Gazebo

Simulation of a YARP robot on the simulation tool Gazebo, taking advantage of the previous work developed with the iCub on this platform.

Subtask 5.3: ROS Services/Actions Compatibility

Checking the state and possible improvements of YARP/ROS integration, concerning services with complex data structures and ROS actions.

Task 6: Xperience EU Project

Subtask 6.1: Action data collection

Participant(s): Vadim Tikhanoff

This subtask consists of writing a module that queries the memory of the robot (before and after an action is performed) and parses it into a label-like format readable by the EU partners at the University of Edinburgh.

The first data collection will consist of performing ~80 simple take and drop actions on the iCub robot. This replicates a previous integrated experiment by Edinburgh and KIT.

Subtask 6.2: Run learning algorithm on collected data

Participant(s): Vadim Tikhanoff, Kira Mourão

Use the previously collected data to learn representations of the take and drop actions, suitable for use by a standard automated planner. Below is an example of the expected representations:


(:action PUTDOWN
  :parameters (?X1 ?X2 ?X3)
  :precondition (AND (ROBOTAT ?X2) (INHAND ?X3 ?X1))
  :effect (AND (HANDEMPTY ?X1) (ON ?X3 ?X2)))

(:action GRASP
  :parameters (?X1 ?X2 ?X3)
  :precondition (AND (HANDEMPTY ?X1) (ROBOTAT ?X2) (ON ?X3 ?X2))
  :effect (INHAND ?X3 ?X1))


The results will confirm that the data collected from the iCub is good enough and that further data collection can proceed.

Subtask 6.3: Set up a yarp module to interface with the action learning code

Participant(s): Kira Mourão

Create a yarp module which processes action/observation data incrementally online. The module will generate predictions for the current action, and periodically update the current learnt action representation to account for the most recent observations.

Task 7: "Emergence" of Behaviors

Contributors (alphabetical order): Jimmy Baraglia, Joshua_Pepneck, Takuji Yoshida

For this task, we want to investigate the emergence of altruistic behaviors motivated by a very general and low-level drive: the minimization of prediction error. Our hypothesis is that when watching others performing an action, one predicts its outcome and the underlying intention. When the prediction is not fulfilled, the prediction error increases and the agent tries to select an action that minimizes it.


Subtask 7.1: build a robot self action model (math)

Contributors: Jimmy Baraglia

(Figure: general representation of a POMDP)

We want to represent the robot's internal model using a Bayesian network, namely a POMDP (Partially Observable Markov Decision Process). The state nodes would represent the agent's own intentions, the action nodes its own actions, and the observation nodes the changes in the environment. When observing others performing actions, the robot will assume that they share the same model and will try to infer their intention from previous actions and observations.
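
For reference, in the standard textbook formulation (not a design decision specific to this task) a POMDP is a tuple (S, A, O, T, Z, R) with transition model T(s' | s, a), observation model Z(o | s', a) and reward R(s, a); after taking action a and observing o, the belief b over hidden states (here, intentions) is updated as

\begin{equation}
b'(s') \;\propto\; Z(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s).
\end{equation}

Applying the same update to the actions and environment changes observed while watching another agent yields a belief over that agent's intention.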

Subtask 7.2: recognize simple object directed actions (vision)

Contributors: Joshua_Pepneck

We want to be able to recognize others' actions such as "reaching for an object", "pushing an object", and so on. The output of this subtask will be used by the POMDP (see subtask 7.1).

   1- Localize an object and gaze to it.
   2- Localize an action and gaze on the region of interest (including the object of the action).
   3- Recognize and categorize the actions.

Subtask 7.3: program a self-supervised learning for Prediction-error minimization (NN)

Contributors: none so far

When the prediction error computed in subtask 7.1 is too high, the robot will try to minimize it by executing an action. However, the actions able to minimize the prediction error when watching others perform an action can only be found through a trial-and-error process. Here, we want to design a module able to learn from these trials and errors the best way to minimize the prediction error given the context (the past actions and observations described in subtask 7.1).

Subtask 7.4: control the robot to perform simple actions (iCub)

Contributors: Takuji Yoshida

This task is to make the robot execute the actions selected in subtask 7.3 in order to minimize the prediction error.

Task 8: Incremental Learning applications on iCub using the GURLS++ package

Contributor(s): Raffaello Camoriano

See the related project

Task 9: Eye Tracking for Human Robot Interaction

Contributors: Oskar Palinko

  • Localize a face in the iCub's field of view.
  • Track the detected face.
  • Find eye features in the image.

More details will be added soon.

Task 10: Integrating the Whole-Body ISIR Controller with the WBI and iCubGazebo

Contributors (alphabetical order): Darwin Lau, Mingxing Liu, Ryan Lober