VVV14 tasks

Task 1: Torque controlled whole body motion

Contributors (alphabetical order): Jorhabib Eljaik, Naveen Kuppuswamy, Francesco Nori, Daniele Pucci, Silvio Traversaro

Within the CoDyCo European project (see www.codyco.eu for details), a set of software tools has been developed to implement whole-body torque control techniques. These techniques make iCub one of the few robots capable of balancing by controlling forces and torques at all contact points (see [1] and the related videos). In this direction, we aim to develop a dynamic walking controller that will give iCub the ability to walk in torque control mode. In addition, the software tools will also be used for goal-directed whole-body behaviors employing the torque control mode.

Subtask 1.1: balancing on a single foot

Contributors: Daniele Pucci, Jorhabib Eljaik

The first-year CoDyCo demo gave iCub the ability to balance on two feet. A first step towards walking is the implementation of a controller that allows iCub to balance on a single foot. This controller will be implemented by extending the work and the software tools developed for the first-year CoDyCo demo.

Subtask 1.2: graphical tools for ZMP and stability region visualizers

Contributors: none so far.

This module will be a graphical visualization tool for the ZMP, balance indicators, etc. It will essentially be a 2D plot visualizing the feet (whose shape can be extracted from the CAD drawings), the support polygon, the stability region, the ZMP, the COP, etc. It will integrate information provided by the dynamics modules, the skin, and other sources.
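
As a concrete example of the quantities to be displayed, the sketch below computes the ZMP of one foot from a single six-axis force/torque measurement, assuming the measurement is expressed in a frame on the foot sole with the z-axis normal to the ground (all names are illustrative):

    // Minimal sketch (illustrative names): ZMP of a single foot from a
    // 6-axis force/torque measurement (fx,fy,fz,tx,ty,tz) expressed in a
    // frame on the foot sole, z-axis normal to the ground.
    #include <cmath>
    #include <cstdio>

    struct Wrench { double fx, fy, fz, tx, ty, tz; };

    // Returns false when the normal force is too small for the ZMP to be defined.
    bool computeZmp(const Wrench& w, double& px, double& py)
    {
        if (std::fabs(w.fz) < 1e-6) return false;   // foot not loaded
        px = -w.ty / w.fz;                          // standard ZMP formulae
        py =  w.tx / w.fz;
        return true;
    }

    int main()
    {
        Wrench w = {0.0, 0.0, 150.0, 3.0, -4.5, 0.0};
        double px, py;
        if (computeZmp(w, px, py))
            std::printf("ZMP: (%.3f, %.3f) m\n", px, py);
        return 0;
    }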

Subtask 1.3: identification of linear systems

Contributors: none so far.

Many unmodeled dynamics can be characterized as linear systems. For instance, when applying the computed torque control technique, the mismatch between the desired acceleration and the real one can, to a first approximation, be characterized as a linear system. To improve performance, it is therefore a good idea to apply linear identification techniques to this system. For this subtask, we need a software toolbox that takes pairs of u (input) and y (output) and identifies the underlying linear system. Several libraries are available that could simplify the identification problem. One of them is SLICOT, which targets Matlab and Fortran and is therefore not ideal; a possibility is to create a C/C++ interface to Matlab. The goal of this subtask is to show that the controller performance (in terms of step response, tracking error, and settling time) is improved.
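
As a minimal illustration of the identification step, the sketch below fits a first-order ARX model y[k] = a*y[k-1] + b*u[k-1] to input/output pairs by least squares, assuming the Eigen library is available (in a real toolbox, SLICOT or a Matlab bridge would replace this):

    // Minimal sketch, assuming Eigen: least-squares fit of a first-order
    // ARX model y[k] = a*y[k-1] + b*u[k-1] from input/output pairs.
    #include <Eigen/Dense>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Toy data generated with a = 0.9, b = 0.5.
        std::vector<double> u = {1, 1, 1, 1, 1, 1};
        std::vector<double> y = {0, 0.5, 0.95, 1.355, 1.7195, 2.04755};

        const int n = static_cast<int>(y.size()) - 1;
        Eigen::MatrixXd Phi(n, 2);   // regressor matrix [y[k-1] u[k-1]]
        Eigen::VectorXd Y(n);        // targets y[k]
        for (int k = 1; k <= n; ++k) {
            Phi(k - 1, 0) = y[k - 1];
            Phi(k - 1, 1) = u[k - 1];
            Y(k - 1) = y[k];
        }

        // Solve min ||Phi*theta - Y||^2 for theta = [a b]^T.
        Eigen::Vector2d theta = Phi.colPivHouseholderQr().solve(Y);
        std::cout << "a = " << theta(0) << ", b = " << theta(1) << std::endl;
        return 0;
    }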

Subtask 1.4: footstep planner

Contributors: none so far.

A footstep planner is required to implement the walking behavior and to avoid possible obstacles on the ground. Some recent open-source code has been released within the ROS framework (the ROS footstep planner), and it would be good to understand whether this code can be integrated within the YARP framework.

Subtask 1.5: floating base posture estimation

Contributors: none so far.

When controlling a floating-base system, knowledge of the base pose (i.e. position and orientation) is fundamental for all controllers. Typically, there is no direct way to measure the floating-base pose, so a suitable estimation procedure is necessary. In this task, we aim at developing software that implements a Kalman-filter-like posture estimation. This task requires two main subcomponents: a tool for computing floating-base forward dynamics and a tool for computing derivatives of forward and inverse models.
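
For reference, the sketch below shows the generic predict/update structure such an estimator would build on, assuming Eigen; the actual base-pose estimator would be an extended Kalman filter whose models come from the floating-base dynamics and sensor equations:

    // Minimal sketch, assuming Eigen: the generic linear Kalman predict/update
    // cycle. The base-pose estimator would be an extended version of this,
    // with the models given by the floating-base dynamics and its sensors.
    #include <Eigen/Dense>

    struct KalmanFilter {
        Eigen::VectorXd x;       // state estimate
        Eigen::MatrixXd P;       // state covariance
        Eigen::MatrixXd F, Q;    // state transition, process noise
        Eigen::MatrixXd H, R;    // measurement model, measurement noise

        void predict() {
            x = F * x;
            P = F * P * F.transpose() + Q;
        }

        void update(const Eigen::VectorXd& z) {
            Eigen::MatrixXd S = H * P * H.transpose() + R;
            Eigen::MatrixXd K = P * H.transpose() * S.inverse();   // Kalman gain
            x += K * (z - H * x);                                  // innovation
            P = (Eigen::MatrixXd::Identity(x.size(), x.size()) - K * H) * P;
        }
    };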

Subtask 1.5.1: floating base forward dynamics

Contributors: Naveen Kuppuswamy, Daniele Pucci, Jorhabib Eljaik

In this task we aim at computing the floating-base forward dynamics. The idea is to provide a preliminary implementation that assumes a floating-base system subject to a number of rigid contact constraints. The implementation could make use of the formulae presented by Aghili (2005) and reported in formula (6.18) of Andrea Del Prete's PhD thesis.
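
For reference, a sketch of the constrained dynamics such an implementation would solve, written in standard notation (not copied from the cited sources), is:

    M(q)\dot{\nu} + h(q,\nu) = S^\top \tau + J_c(q)^\top f
    J_c(q)\dot{\nu} + \dot{J}_c(q,\nu)\,\nu = 0

where q is the configuration including the base pose, \nu the generalized velocity, M the mass matrix, h the bias (Coriolis and gravity) forces, S the actuation selector matrix, \tau the joint torques, J_c the Jacobian of the rigid contact constraints, and f the contact forces. Solving this linear system in (\dot{\nu}, f) yields the constrained forward dynamics.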

Subtask 1.5.2: forward and inverse dynamics differentiation (finite differencing)

Contributors: none so far.

In many optimization and estimation contexts, the derivatives of the whole-body floating-base dynamics play a crucial role. As a first implementation, it would be good to start with finite-difference approximations of the key functions, such as Jacobians and the forward and inverse dynamics. This implementation is far from optimal, but it is definitely needed for applications like extended Kalman filtering and optimal control.
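
A minimal sketch of the finite-differencing building block, applicable to any of the functions above (the step size h is a tunable assumption):

    // Minimal sketch: central finite differencing of a generic vector-valued
    // function f: R^n -> R^m, as would be applied to forward/inverse dynamics.
    #include <functional>
    #include <vector>

    // Returns J as an m x n matrix stored row-major in a flat vector,
    // J[i*n + j] = df_i/dx_j, using central differences with step h.
    std::vector<double> numericalJacobian(
        const std::function<std::vector<double>(const std::vector<double>&)>& f,
        std::vector<double> x, double h = 1e-6)
    {
        const std::vector<double> f0 = f(x);
        const size_t n = x.size(), m = f0.size();
        std::vector<double> J(m * n);
        for (size_t j = 0; j < n; ++j) {
            const double xj = x[j];
            x[j] = xj + h; const std::vector<double> fp = f(x);
            x[j] = xj - h; const std::vector<double> fm = f(x);
            x[j] = xj;
            for (size_t i = 0; i < m; ++i)
                J[i * n + j] = (fp[i] - fm[i]) / (2.0 * h);
        }
        return J;
    }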

Subtask 1.5.3: forward and inverse dynamics differentiation (automatic differentiation)

Contributors: none so far.

Several optimization and estimation algorithms require the derivatives of the whole-body floating-base dynamics. One of the tools for computing derivatives is automatic differentiation (AD), which is based on the iterative application of the chain rule to obtain the desired derivatives. The task consists of finding an AD library (open source, for C/C++) and comparing it against numerical and symbolic differentiation methods, with the aim of evaluating performance in terms of accuracy. As a starting point, the reader is referred to http://www.autodiff.org/?module=Tools&language=C%2FC%2B%2B for a list of C/C++ libraries.
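
As an illustration, here is a minimal sketch using CppAD, one of the libraries listed there, to record a toy function and evaluate its Jacobian:

    // Minimal sketch: automatic differentiation of a toy function with CppAD.
    #include <cppad/cppad.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        using CppAD::AD;

        // Record f(x) = (x0*x1, sin(x0)) on the AD tape.
        std::vector<AD<double>> x(2), y(2);
        x[0] = 1.0; x[1] = 2.0;
        CppAD::Independent(x);
        y[0] = x[0] * x[1];
        y[1] = CppAD::sin(x[0]);
        CppAD::ADFun<double> f(x, y);

        // Evaluate the Jacobian at a point (returned row-major, here 2x2).
        std::vector<double> xv = {1.0, 2.0};
        std::vector<double> J = f.Jacobian(xv);
        std::cout << J[0] << " " << J[1] << "\n"
                  << J[2] << " " << J[3] << std::endl;
        return 0;
    }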

Subtask 1.6: Matlab API to the whole body interface

Contributors: Naveen Kuppuswamy, Jorhabib Eljaik

The WBI-Toolbox offers a Simulink interface to the whole body interface. Certain parts of the WBI could also offer important functionality directly in Matlab. For example, the wholeBodyModel should have a Matlab interface that allows inverse and forward dynamics computations in Matlab.
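
One natural route is a MEX gateway around the C++ interface. Below is a minimal sketch of what such a binding could look like; the wholeBodyModel-related call is a placeholder, not the actual API:

    // Minimal MEX sketch (illustrative): a gateway that would wrap a
    // wholeBodyModel call, e.g. computing the mass matrix from a joint vector.
    #include "mex.h"

    void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
    {
        if (nrhs != 1 || !mxIsDouble(prhs[0]))
            mexErrMsgTxt("usage: M = wbm_massMatrix(q)");

        const mwSize n = mxGetNumberOfElements(prhs[0]);
        const double* q = mxGetPr(prhs[0]);

        plhs[0] = mxCreateDoubleMatrix(n, n, mxREAL);
        double* M = mxGetPr(plhs[0]);

        // Here one would call into the whole body interface, e.g.:
        // model->computeMassMatrix(q, M);   // placeholder, not the real API
        (void)q; (void)M;
    }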

Subtask 1.7: low-level torque control

Contributors: Daniele Pucci

This task concerns torque control at the firmware level. Currently, torque control relies on the identification of the motor transfer function, performed with the motor friction identification module. The idea is to transfer the identified transfer-function parameters to the firmware controller. This activity has already been started under the supervision of Marco Randazzo.
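
For concreteness, a common form for such an identified model, relating the commanded motor voltage (or PWM) V to the desired torque and the joint velocity, is (a generic friction model assumed here for illustration, not necessarily the exact one used in the firmware):

    V = k_\tau\,\tau_d + k_v\,\dot{q} + k_c\,\operatorname{sign}(\dot{q})

where k_\tau, k_v, and k_c (torque gain, viscous and Coulomb friction coefficients) are the identified parameters to be transferred to the firmware controller.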

Subtask 1.8: software interface for the new foot (with double FT sensor)

Contributors: none so far.

The iCub foot was recently modified to house two force/torque (FT) sensors. To make this mechanical upgrade backward-compatible, it will be necessary to write an ad-hoc CAN driver, similar to canAnalogSensor, that reads the two sensors simultaneously.
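
A minimal sketch of the demultiplexing logic such a driver would need is shown below; the CAN IDs, frame layout, and scale factor are made up for illustration, and the real canAnalogSensor protocol differs in its details:

    // Minimal sketch (made-up CAN IDs and frame layout): demultiplexing the
    // frames of two FT sensors sharing one CAN bus into two 6-axis readings.
    #include <array>
    #include <cstdint>

    struct CanFrame { uint16_t id; std::array<int16_t, 4> data; };

    struct DoubleFtReader {
        std::array<double, 6> ft1{}, ft2{};

        // Each sensor is assumed to send its 6 channels over two frames:
        // base id -> channels 0..2, base id + 1 -> channels 3..5.
        static constexpr uint16_t kBaseId1 = 0x300;  // placeholder IDs
        static constexpr uint16_t kBaseId2 = 0x310;

        void onFrame(const CanFrame& f) {
            auto store = [&](std::array<double, 6>& ft, int offset) {
                for (int i = 0; i < 3; ++i)
                    ft[offset + i] = f.data[i] * 1e-3;  // raw -> SI (placeholder scale)
            };
            if (f.id == kBaseId1)          store(ft1, 0);
            else if (f.id == kBaseId1 + 1) store(ft1, 3);
            else if (f.id == kBaseId2)     store(ft2, 0);
            else if (f.id == kBaseId2 + 1) store(ft2, 3);
        }
    };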

Subtask 1.9: bi-manual unknown object impedance estimation

Contributors: Naveen Kuppuswamy, Silvio Traversaro

This subtask concerns bi-manual grasping with Cartesian impedance control, aimed at estimating the impedance of unknown objects. Bi-manual grasping must first be implemented with Cartesian impedance control and used to grasp various objects such as balloons and beach balls. The impedances of these objects will then be estimated using regression techniques and some simple movement behaviors.
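
A minimal sketch of the regression step, assuming Eigen and a linear spring-damper model f = K*x + D*xd along a single Cartesian direction (a strong simplifying assumption), where x is the squeeze displacement and xd its velocity:

    // Minimal sketch, assuming Eigen: estimate stiffness K and damping D of
    // a grasped object from force f, displacement x, and velocity xd samples,
    // under a linear model f = K*x + D*xd (one Cartesian direction).
    #include <Eigen/Dense>
    #include <iostream>

    int main()
    {
        // Toy samples generated with K = 200 N/m, D = 5 Ns/m.
        Eigen::VectorXd x(4), xd(4), f(4);
        x  << 0.01, 0.02, 0.015, 0.005;
        xd << 0.1, -0.05, 0.0, 0.2;
        f  << 2.5, 3.75, 3.0, 2.0;

        Eigen::MatrixXd Phi(4, 2);   // regressor matrix [x xd]
        Phi.col(0) = x;
        Phi.col(1) = xd;

        Eigen::Vector2d theta = Phi.colPivHouseholderQr().solve(f);
        std::cout << "K = " << theta(0) << " N/m, D = " << theta(1)
                  << " Ns/m" << std::endl;
        return 0;
    }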

Subtask 1.10: Optimal control for COM motion

Contributors: Yue Hu

The target motion is to keep the robot's feet fixed on the ground while making the COM position vary. Open-loop control torques will be generated with the software package MUSCOD-II from the University of Heidelberg, considering the whole-body dynamics. The idea is then to implement a simple feedback law so that these controls can be used on the real robot. The WholeBodyInterface implemented for iCub is expected to be used in this context.
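
In standard form, the underlying optimal control problem could be sketched as (a generic formulation with an assumed quadratic cost, not the exact MUSCOD-II setup):

    \min_{\tau(\cdot)} \int_0^T \big( \| p_{com}(q(t)) - p_{com}^{ref}(t) \|^2 + \lambda\,\| \tau(t) \|^2 \big)\, dt

subject to the whole-body dynamics and to the feet remaining fixed on the ground (constant foot poses, contact wrenches inside the friction cones).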

Task 2: WYSIWYD EU Project

Subtask 2.1: First Person Sensory Motor Database

We will use the robots and the dataDumper to produce a database of sensory-motor interactions from the robot's perspective. This database will include, but is not limited to: joint data, camera images, Kinect data, and tactile data. Several interaction types will be used to populate the database (motor babbling, self-touch, tracking people, watching objects, people imitating the robot, etc.). Ideally, we will come up with a plan for an online platform that the iCub community could use to exchange such datasets.

Subtask 2.2: Cross-situational word learning

Context: Human and robot are face to face in a shared workspace with several manipulable objects.

Robot: The low-level attention system detects the toy giraffe on the table. The interaction engine (WP5) generates a pointing and a gaze (WP6) to the toy giraffe, and a gaze to the human. A primitive representation of the giraffe is activated in the conceptual system.

Human: “Oh look at the giraffe, you see the giraffe?”

Robot: ASR recognizes the words. Cross-situational learning (*) begins to link the recognized words with the representation of the giraffe in the conceptual system (WP4). Over successive examples, the binding between the word “giraffe” and the representation of the giraffe in the conceptual system comes to dominate.

(*) Cross-situational learning: associate all words in the sentence with the current object of focus. Over multiple situations, the mapping between the object and its referent word will, statistically, become the strongest.
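
A minimal sketch of this mechanism as plain co-occurrence counting; words that also occur with other objects (such as “the”) are discounted, so the referent word ends up with the strongest association:

    // Minimal sketch: cross-situational learning as co-occurrence counting.
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main()
    {
        std::map<std::pair<std::string, std::string>, int> pairCount;
        std::map<std::string, int> wordCount;

        auto observe = [&](const std::string& object,
                           const std::vector<std::string>& words) {
            for (const auto& w : words) { ++pairCount[{object, w}]; ++wordCount[w]; }
        };

        // Four situations, two objects in focus.
        observe("giraffe", {"look", "at", "the", "giraffe"});
        observe("giraffe", {"you", "see", "the", "giraffe"});
        observe("ball",    {"look", "at", "the", "ball"});
        observe("ball",    {"the", "ball", "is", "red"});

        // Association of each heard word with "giraffe": words that also
        // occur with other objects (e.g. "the") are discounted.
        for (const auto& [key, c] : pairCount) {
            if (key.first != "giraffe") continue;
            std::cout << key.second << ": "
                      << static_cast<double>(c) / wordCount[key.second] << "\n";
        }
        return 0;
    }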

Subtask 2.3: Semantic Bootstrapping

Context: a game with the robot and three agents around a table. Each player has a ladder in front of him and a token. On each turn, a player can push his own token or ask another player to pull it; the player taking each turn is picked randomly. The first player to get his token to the center wins. The goal for the robot is to detect each action in the game (a player moving, a player speaking), to understand the goal of each agent, and to understand the interactions between the players.

Subtask 2.4: Movement learning

A module for learning movements (e.g. pointing, pushing, etc.) will run in parallel to the existing ones. The new module will provide a model that describes the learnt movements. Predictions for the upcoming steps of the motion can be computed and compared with the actual positions, defining an error that can be used as feedback for the learning structure.

Contributors: Christos Melidis

Subtask 2.5: Tactile-proprioceptive(-visual) body schema using multi-modal convergence maps (MMCM)

The idea would be to extend the architecture of Lallee & Dominey (2013) and its code (CVZ in the wysiwyd repo) to the tactile modality.

Data generation: iCub self-touching behaviors, either using the automated double-touch by Alessandro Roncone or idling the joints and generating the data with the help of the experimenter. (The data would flow into the sensory-motor database.)

Behavior: After learning, we can touch the iCub on a body part or stimulate a skin part artificially (directly in the map) and see whether the MMCM can generate the movement to touch the respective part.

Extension: spoken language interaction. For example, as people touch iCub on different body parts, they would name them; this can also be done when iCub touches itself autonomously. We can work both with speech-recognizer and grammar systems and directly with raw sound.

References: Lallee, S., and Dominey, P.F. "Multi-modal convergence maps: from body schema and self-representation to mental imagery." Adaptive Behavior (2013).

Contact/Coordination: If you are interested, please talk to Matej Hoffmann (matej DOT hoffmann AT iit DOT it) or Stephane Lallee (stephane DOT lallee AT gmail DOT com).

Contributors: Christos Melidis

Subtask 2.6: Enforce backward compatibility of speechRecognizer

The module speechRecognizer is in charge of the speech recognition tasks within the project, but it is also widely used in other applications (e.g. IOL). Its new C++ implementation needs to be made backward-compatible with the original C# version.

Task 3: Object Manipulation, grasping and exploration

This task groups several projects related to object grasping using vision and tactile feedback.

Subtask 3.1: Visually guided grasping

Use stereo vision to detect objects and their shape. Plan the positioning of the hand to bring the fingers close to the object. We can use software already implemented or develop new techniques.

Subtask 3.2: Gentle grasp

Use tactile feedback to regulate the force exerted by the fingers on the object. The positioning of the hand can be either preprogrammed, if the object is in a known position, or driven by vision. Regulation can be done using traditional techniques (PID) or learning (reinforcement learning, PI^2) to maximize grasp success (lifting).
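
A minimal sketch of the PID variant, regulating the measured fingertip normal force towards a small desired contact force; the sensor and motor interfaces are placeholders, not real APIs:

    // Minimal sketch: PID regulation of the normal force measured at a
    // fingertip towards a small desired contact force ("gentle" grasp).
    #include <algorithm>

    struct Pid {
        double kp, ki, kd;
        double integral = 0.0, prevError = 0.0;

        double step(double error, double dt) {
            integral += error * dt;
            const double derivative = (error - prevError) / dt;
            prevError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };

    // One control cycle: returns a saturated motor command; the caller would
    // obtain measuredForce from the tactile sensors (placeholder interface).
    double regulateGrip(Pid& pid, double measuredForce, double desiredForce,
                        double dt)
    {
        const double u = pid.step(desiredForce - measuredForce, dt);
        return std::clamp(u, -1.0, 1.0);   // saturate the motor command
    }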

Subtask 3.3: Object exploration

Extract features from the fingertips to explore objects through local cues such as corners and edges.

References:

  • Martinez-Hernandez, U., Dodd, T., Natale, L., Metta, G., Prescott, T.J. and Lepora, N., "Active contour following to explore object shape with robot touch", IEEE World Haptics Conference, 2013.
  • Platt, R. Jr., Permenter, F. and Pfeiffer, J., "Using Bayesian filtering to localize flexible materials during manipulation", IEEE Transactions on Robotics, vol. 27, no. 3, June 2011.

Subtask 3.4: Object localization

Object localization using tactile or force feedback: use haptic interaction to localize objects.
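
A minimal sketch of the Bayesian update underlying the approaches in the references below: a 1D histogram filter that localizes an object along a line from noisy binary touch measurements (grid size and sensor probabilities are illustrative):

    // Minimal sketch: a 1D histogram Bayes filter localizing an object on a
    // line from noisy binary touch measurements ("contact here or not").
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int nCells = 10;
        std::vector<double> belief(nCells, 1.0 / nCells);  // uniform prior

        // Sensor model: probability of feeling contact given the object is
        // (or is not) in the probed cell.
        const double pHit = 0.9, pFalse = 0.1;

        auto update = [&](int probedCell, bool contact) {
            double norm = 0.0;
            for (int i = 0; i < nCells; ++i) {
                const double like = (i == probedCell)
                                        ? (contact ? pHit : 1.0 - pHit)
                                        : (contact ? pFalse : 1.0 - pFalse);
                belief[i] *= like;
                norm += belief[i];
            }
            for (double& b : belief) b /= norm;   // normalize the posterior
        };

        update(3, false);   // probed cell 3, no contact
        update(7, true);    // probed cell 7, contact

        for (int i = 0; i < nCells; ++i)
            std::printf("cell %d: %.3f\n", i, belief[i]);
        return 0;
    }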

References:

  • Petrovskaya, A. and Khatib, O., "Global localization of objects via touch", IEEE Transactions on Robotics, vol. 27, no. 3, June 2011.
  • Gadeyne, K. and Bruyninckx, H., "Markov techniques for object localization with force-controlled robots", Proc. Int. Conf. on Advanced Robotics, 2001, pp. 91–96.

Subtask 3.5: Merging point clouds of multiple views

3D object completion using registration procedures such as the Iterative Closest Point (ICP) algorithm. Right now, we have an algorithm that performs power grasps using only the visible (frontal) part of the object; for further information see

I. Gori, U. Pattacini, V. Tikhanoff and G. Metta, "Ranking the Good Points: A Comprehensive Method for Humanoid Robots to Grasp Unknown Objects", ICAR, 2013.

For this subtask we would like to register several different views of the object (we can use the "explore" routine in the actionsRenderingEngine module to retrieve the views) in order to obtain a more complete reconstruction.
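
A minimal sketch of one ICP iteration, assuming Eigen: brute-force closest-point correspondences followed by the SVD-based rigid alignment step (a real implementation would use a k-d tree and iterate to convergence):

    // Minimal sketch, assuming Eigen: one ICP iteration aligning a source
    // point cloud to a target one.
    #include <Eigen/Dense>
    #include <Eigen/SVD>
    #include <limits>
    #include <vector>

    using Cloud = std::vector<Eigen::Vector3d>;

    // Rigid transform (R, t) minimizing sum ||R*src[i] + t - corr[i]||^2.
    void kabsch(const Cloud& src, const Cloud& corr,
                Eigen::Matrix3d& R, Eigen::Vector3d& t)
    {
        Eigen::Vector3d cs = Eigen::Vector3d::Zero(), ct = Eigen::Vector3d::Zero();
        for (size_t i = 0; i < src.size(); ++i) { cs += src[i]; ct += corr[i]; }
        cs /= src.size(); ct /= corr.size();

        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
        for (size_t i = 0; i < src.size(); ++i)
            H += (src[i] - cs) * (corr[i] - ct).transpose();

        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        R = svd.matrixV() * svd.matrixU().transpose();
        if (R.determinant() < 0) {                 // fix a possible reflection
            Eigen::Matrix3d V = svd.matrixV();
            V.col(2) *= -1.0;
            R = V * svd.matrixU().transpose();
        }
        t = ct - R * cs;
    }

    void icpStep(Cloud& src, const Cloud& tgt)
    {
        // Brute-force closest-point correspondences (a k-d tree in practice).
        Cloud corr(src.size());
        for (size_t i = 0; i < src.size(); ++i) {
            double best = std::numeric_limits<double>::max();
            for (const auto& p : tgt) {
                const double d = (p - src[i]).squaredNorm();
                if (d < best) { best = d; corr[i] = p; }
            }
        }
        Eigen::Matrix3d R; Eigen::Vector3d t;
        kabsch(src, corr, R, t);
        for (auto& p : src) p = R * p + t;         // apply the alignment
    }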

Task 4: Object recognition, detection and localization

This task groups activities related to object learning and recognition. Different techniques can be explored and compared against a robot-related database.

Task 4.1: Learning object recognition and localization.

Task 4.2: Determine the full pose (position and orientation) of objects using local features.

Task 4.3: Improve approximate object pose estimates using the ICP algorithm.

References:

A. Collet Romea, D. Berenson, S. Srinivasa and D. Ferguson, "Object Recognition and Full Pose Registration from a Single Image for Robotic Manipulation", Proc. ICRA, 2009.

M. Lourakis and X. Zabulis, "Model-based Pose Estimation for Rigid Objects". Proc. ICVS 2013.

U. Castellani and A. Bartoli, "3D Shape Registration", in Pears, N., Liu, Y. and Bunting, P. (eds.), 3D Imaging, Analysis and Applications, Springer London, 2012.