VVV15 tasks

Task 1: Interactive Object Learning ++

Contributors (alphabetical order): (Alessandro, Lorenzo, Ugo, Vadim)

The interactive object learning demonstration could be improved in many ways:

Subtask 1.1: Add some human-robot interaction components

Contributors:

  • Look around to gaze at people (Reza's work)
  • Learn and recognize faces with a more natural robot interaction (Vadim's work)

Subtask 1.2: Smartphone in the loop

Contributors (alphabetical order): (Alessandro, Francesco R., Vadim). This subtask also links to Subtask 3.2.

  • Analyze speech and send commands to the robot
  • Object recognition

Task 2: Evaluate real-time extensions in complex motor control tasks

Contributors: Ali, Lorenzo, Silvio?, Francesco(?), Luca?, Enrico?

Robots: iCub & Coman

Subtask 2.1 Perform fine tuning of control loops in complex tasks using real-time extensions in YARP

Subtask 2.2 The discussion in Subtask 2.1 also involves the design and development of a real-time execution layer for the robotinterface

Task 3: Improve the platform as we know it

Subtask 3.1 Improve Gazebo Simulator

To better serve the iCub community, the current simulation of iCub on Gazebo should reach feature parity with iCub_SIM. For further details on which features the Gazebo simulation of iCub is lacking with respect to iCub_SIM, please check https://github.com/robotology/gazebo-yarp-plugins/issues/132 .

Add a full simulation of the eyes' system

The iCub Gazebo simulation is currently lacking a way of simulating the moving eyes of iCub.

Further details on this task can be found in https://github.com/robotology/gazebo-yarp-plugins/issues/149 .

Add a full simulation of the hands' system

The iCub Gazebo simulation is currently lacking a way of simulating the hands of the iCub.

Further details on this task can be found in https://github.com/robotology/gazebo-yarp-plugins/issues/197 .

Add world interface

iCub_SIM has a handy YARP RPC interface for creating/deleting objects.

It would be nice to have a similar interface in Gazebo; for more info, please check https://github.com/robotology/gazebo-yarp-plugins/issues/159 .
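
For reference, the sketch below shows how such an RPC interface is typically driven from a C++ YARP client. It assumes the iCub_SIM world port is available under its usual name /icubSim/world; the exact command vocabulary (here a "world mk sph ..." request) is only a guess and should be checked against the iCub_SIM documentation, and a Gazebo world interface would expose an analogous port.

  #include <yarp/os/Network.h>
  #include <yarp/os/RpcClient.h>
  #include <yarp/os/Bottle.h>
  #include <cstdio>

  int main() {
      yarp::os::Network yarp;                            // initialize the YARP network

      yarp::os::RpcClient world;
      world.open("/worldClient");                        // local client port
      yarp.connect("/worldClient", "/icubSim/world");    // iCub_SIM world RPC port

      // Ask the simulator to create a sphere (radius, position, colour).
      // The command syntax is an assumption: check the iCub_SIM world
      // interface documentation for the exact vocabulary.
      yarp::os::Bottle cmd, reply;
      cmd.fromString("world mk sph 0.04 0.0 0.9 0.5 1 0 0");
      world.write(cmd, reply);
      std::printf("reply: %s\n", reply.toString().c_str());

      world.close();
      return 0;
  }

A Gazebo counterpart would only need to expose the same kind of RPC port from within a world plugin, leaving client code like this untouched.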

Add Windows support

The latest version of Gazebo comes with (experimental) Windows support ( http://gazebosim.org/blog/gazebo_on_windows ).

We have not yet tried the gazebo-yarp-plugins on Windows, as this would probably require some work on how Gazebo supports the loading of external plugins. For more info, please check https://github.com/robotology/gazebo-yarp-plugins/issues/74#issuecomment-102297508 .

Subtask 3.2 Smartphone in the loop

Contributors (alphabetical order): (Alessandro, Francesco R., Randazzo, Vadim)

YARPdroid will pave the way...

A minimal working example of a YARP application on Android has already been implemented. Check it out here: https://youtu.be/1N0xf2_C6I0

There is room both for improvements to the app and for new exciting features! So feel free to suggest any potential application you might find useful.

...and iOSYARP will try to keep up!

Task 4: Event-driven iCub development

Contributors (alphabetical order): (Arren, Chiara, Marko, Samantha, Valentina)

Dynamic Vision Sensors are installed on the Purple iCub. These sensors are biologically inspired cameras that fire events only when the sensor detects a change in intensity, typically when an object/person moves in the field of view or when the camera itself is moved. The processing of such visual information is drastically different from traditional vision processing; however, we can leverage very high temporal resolution, very high dynamic range, and the removal of redundant data (regions in which nothing has changed).
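
To make the difference with frame-based processing concrete, here is a minimal sketch of the address-event representation such sensors produce; the field names and the microsecond timestamp are illustrative assumptions, not the actual types of the iCub event-driven libraries.

  #include <cstdint>
  #include <vector>

  // Minimal address-event representation (AER): each event carries the pixel
  // address, the polarity of the intensity change, and a timestamp.
  struct AddressEvent {
      uint16_t x;         // pixel column
      uint16_t y;         // pixel row
      bool polarity;      // true = intensity increase, false = decrease
      uint32_t stamp;     // timestamp (assumed microseconds)
  };

  // Keep only the events that fall inside a rectangular region of interest.
  // This is the kind of per-event (frameless) processing referred to above.
  std::vector<AddressEvent> cropToROI(const std::vector<AddressEvent>& events,
                                      uint16_t x0, uint16_t y0,
                                      uint16_t x1, uint16_t y1) {
      std::vector<AddressEvent> out;
      for (const AddressEvent& e : events) {
          if (e.x >= x0 && e.x <= x1 && e.y >= y0 && e.y <= y1)
              out.push_back(e);
      }
      return out;
  }

Everything downstream (flow, tracking, attention) operates on such sparse, time-stamped streams instead of dense image frames.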

Subtask 4.1 Ego-motion Compensation

Implementing a mechanism to tag (and remove) optical flow events caused by the self-movement of the robot (a minimal sketch of the tagging step follows the list below).

- detection and characterisation of 'corner' events in which optical flow is unaffected by aperture problems.
- correction of aperture-affected normal flow to true flow given detected corners.
- learning flow distribution given individual motor movements.
- tagging flow as caused by ego-motion when matching expected ego-motion distributions.
- Integration with iCub Demonstration (Subtask 4.4)
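
A minimal sketch of the tagging step from the list above: compare the flow measured at a corner event with the ego-motion flow predicted from the robot's own motor movements, and tag the event when the two match within a tolerance. The predictEgoFlow() helper is hypothetical and stands in for the learned flow distributions; the thresholds are illustrative.

  #include <cmath>

  // 2D flow vector associated with a (corner) event.
  struct Flow { double vx, vy; };

  // Hypothetical stand-in for the learned model: predicted ego-motion flow at
  // pixel (x, y) given the current joint/encoder velocities.
  Flow predictEgoFlow(int x, int y) {
      // Trivial placeholder; the real prediction comes from the learned
      // flow distributions conditioned on the individual motor movements.
      return {0.0, 0.0};
  }

  // Tag a measured flow as ego-motion if it matches the prediction within
  // tolerances on magnitude and direction.
  bool isEgoMotion(int x, int y, const Flow& measured,
                   double magTol = 0.5, double angTol = 0.3) {
      Flow pred = predictEgoFlow(x, y);
      double magM = std::hypot(measured.vx, measured.vy);
      double magP = std::hypot(pred.vx, pred.vy);
      double ang  = std::atan2(measured.vy, measured.vx)
                  - std::atan2(pred.vy, pred.vx);
      ang = std::atan2(std::sin(ang), std::cos(ang));   // wrap to [-pi, pi]
      return std::fabs(magM - magP) < magTol && std::fabs(ang) < angTol;
  }

Events tagged in this way can then be suppressed before tracking (Subtask 4.4).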

Subtask 4.2 Integration of iCub with SpiNNaker

Integrating the DVS event-based sensor with parallel event-based processing using the SpiNNaker hardware, for visual attention and control of the iCub (a skeleton of the YARP-to-Ethernet interface module is sketched after the lists below).

- develop an iCub module to interface YARP with the SpiNNaker board (sending and receiving) with Ethernet interface
- test data throughput using a straight-through network with both replayed and live data
- test attention network with live data, visualising the most 'interesting' area of the image

Simultaneously:

- develop a (super fast) serial connection to the SpiNNaker hardware through the Zynq board
- test serial connection with replayed and live data
- characterise speed and data throughput with serial connection

Resulting In:

- Integration of SpiNNaker-based visual attention with iCub Demonstration (Subtask 4.4)
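
As a skeleton of the YARP side of the interface module, the sketch below reads event bottles from a YARP port and forwards a payload to the board over UDP. The port name, the board address/port and the textual payload are placeholders; the real module would pack events into whatever binary packet format the SpiNNaker firmware expects, and would also handle the receiving direction.

  #include <yarp/os/Network.h>
  #include <yarp/os/BufferedPort.h>
  #include <yarp/os/Bottle.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <string>

  int main() {
      yarp::os::Network yarp;

      // YARP side: receive event data (port name is a placeholder).
      yarp::os::BufferedPort<yarp::os::Bottle> in;
      in.open("/spinInterface/events:i");

      // Ethernet side: plain UDP socket towards the board
      // (address and port are placeholders).
      int sock = socket(AF_INET, SOCK_DGRAM, 0);
      sockaddr_in board{};
      board.sin_family = AF_INET;
      board.sin_port = htons(17893);
      inet_pton(AF_INET, "192.168.1.1", &board.sin_addr);

      while (true) {
          yarp::os::Bottle* b = in.read();          // blocking read of one bottle
          if (b == nullptr) break;
          // Placeholder serialization: the real packet layout is defined by
          // the SpiNNaker firmware, not by this sketch.
          std::string payload = b->toString().c_str();
          sendto(sock, payload.data(), payload.size(), 0,
                 reinterpret_cast<const sockaddr*>(&board), sizeof(board));
      }

      close(sock);
      return 0;
  }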

Subtask 4.3 Face Detection

Event-driven face detection, implementing an event-based version of the Viola-Jones face detection algorithm (a minimal feature-extraction sketch follows the list below).

- record event-driven face datasets
- extract facial features inspired by the Viola-Jones algorithm
- perform supervised learning of face / non-face categories
- offline detection of faces, characterising detection accuracy
- online detection of faces
- integration with iCub Demonstration (Subtask 4.4)
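
As a sketch of the feature-extraction step, a Viola-Jones-style two-rectangle feature can be evaluated on an event-count surface (events accumulated per pixel over a short time window) instead of on grey-level images; the surface layout and the feature geometry below are illustrative assumptions.

  #include <vector>
  #include <cstdint>

  // Event-count surface: number of events received per pixel within a short
  // time window (the event-based stand-in for a grey-level image).
  struct CountSurface {
      int width, height;
      std::vector<uint32_t> counts;                  // row-major, width*height
      uint32_t at(int x, int y) const { return counts[y * width + x]; }
  };

  // Sum of event counts inside the rectangle [x, x+w) x [y, y+h).
  static long rectSum(const CountSurface& s, int x, int y, int w, int h) {
      long sum = 0;
      for (int yy = y; yy < y + h; ++yy)
          for (int xx = x; xx < x + w; ++xx)
              sum += s.at(xx, yy);
      return sum;
  }

  // Haar-like two-rectangle feature: difference between the activity of an
  // upper and a lower region, as in the Viola-Jones feature family.
  long twoRectFeature(const CountSurface& s, int x, int y, int w, int h) {
      return rectSum(s, x, y, w, h / 2) - rectSum(s, x, y + h / 2, w, h - h / 2);
  }

A real implementation would precompute an integral image of the count surface so that each rectangle sum costs only four lookups, as in the original algorithm.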

Subtask 4.4 Interactive iCub Demonstration

Incorporating event-driven Optical Flow and Ball tracking with existing iCub modules to perform the 'red-ball demo' using the event-driven cameras. Any coloured ball could be used.

- characterise tracking performance using noisy datasets, with secondary moving objects (non-circular) and camera movement 
- validate tracking positions are correctly translated to robot frame coordinates, assuming fixed x distance, testing with the gaze controller (see the sketch after this list)
- extend the gaze controller to the grasping with the demoGraspManager
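
For the gaze step, the standard iKinGazeCtrl client interface can be asked to fixate the 3D point produced by the tracker. The sketch below assumes the gaze controller is already running and that the target is expressed in the robot root frame; the port names and the numeric target are illustrative.

  #include <yarp/os/Network.h>
  #include <yarp/os/Property.h>
  #include <yarp/dev/PolyDriver.h>
  #include <yarp/dev/GazeControl.h>
  #include <yarp/sig/Vector.h>

  int main() {
      yarp::os::Network yarp;

      // Connect to the iKinGazeCtrl server through the client device.
      yarp::os::Property options;
      options.put("device", "gazecontrollerclient");
      options.put("remote", "/iKinGazeCtrl");
      options.put("local", "/tracker/gaze");

      yarp::dev::PolyDriver driver(options);
      yarp::dev::IGazeControl* gaze = nullptr;
      if (!driver.isValid() || !driver.view(gaze))
          return 1;

      // Target point in the robot root frame [m]; x is the assumed fixed
      // distance in front of the robot (negative: root x points backwards),
      // y and z would come from the tracker.
      yarp::sig::Vector target(3);
      target[0] = -0.5;
      target[1] =  0.0;
      target[2] =  0.3;

      gaze->lookAtFixationPoint(target);   // non-blocking request
      gaze->waitMotionDone();              // wait until the gaze is on target

      driver.close();
      return 0;
  }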

Incorporating other modules developed at the school:

- Incorporate modules for ego-motion compensation to improve tracking reliability
- Incorporating the SpiNNaker attention network for comparison with the tracking grasp demo (performance and speed)
- Incorporating face detection for gazing at people's faces

Task 5: Development for COMAN

Contributors (alphabetical order): (Enrico MH, Luca M)

Subtask 5.1 From Simulation to Real Robot

Develop simple tasks first in the Gazebo simulator and then move them to the real robot.

Subtask 5.2 Use OpenSoT in COMAN

Develop simple high level tasks involving OpenSoT and COMAN.

Subtask 5.3 Use OpenSoT in iCub

Develop simple high level tasks involving OpenSoT and iCub.

Task 6: Tactile Perception for iCub

Contributors: Massimo Regoli, Nawid Jamali and Takato Horii

Subtask 6.1 Object contour following using tactile sensors

Subtask 6.2 Object grasping using tactile sensors

Use tactile feedback to grasp objects while applying a controlled force.
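
A minimal sketch of the kind of control law involved, assuming the contact force has already been estimated from the fingertip tactile sensors; the gain, limits and units are illustrative, not tuned values.

  #include <algorithm>

  // One step of a proportional force controller for grasping: given the force
  // measured by the fingertip tactile sensors and the desired contact force,
  // return a finger closing velocity (positive = squeeze, negative = release).
  double forceControlStep(double measuredForce, double desiredForce,
                          double gain = 5.0, double maxVel = 10.0) {
      double error = desiredForce - measuredForce;          // force error [N]
      return std::clamp(gain * error, -maxVel, maxVel);     // velocity [deg/s]
  }

The returned velocity would then be sent to the finger joints through the usual motor interfaces, squeezing while the measured force is below the setpoint and releasing once it exceeds it.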

Subtask 6.3 Object recognition using tactile sensors

Subtask 6.4 (Dynamic) touch recognition using tactile sensors

Recognize (dynamic) touch using conditional restricted Boltzmann machines.

Task 7: From two feet balancing to balancing on a seesaw on iCubGenova02 (purple)

Contributors: D. Pucci, F. Romano, S. Traversaro, N. Guedelha

Subtask 7.1 Fine calibration of the robot joints

Subtask 7.2 Identification of the motors' parameters

Subtask 7.3 Whole body impedance

Subtask 7.4 Balancing on two feet

Subtask 7.5 Balancing on one foot

Subtask 7.6 Balancing on a seesaw

Task 8: Proactive Tagging

Contributors: T. Fischer, M. Petit, A.-L. Mealier, U. Pattacini, C. Moulin-Frier, J. Puigbo, J. Copete, B. Higy (and open to others); also G. Pointeau remotely from Lyon

The aim is to learn language proactively. There will be different `drives`: 1) to explore, 2) to understand the world, and 3) to trigger questions for labeling unknown objects in the objects properties collector (OPC). Proactive tagging will be triggered either by an (at the moment) unachievable goal or by the drive to explore.

Github repository: https://github.com/robotology/wysiwyd.git

Subtask 8.1: The simple version

The robot first behaves autonomously (using e.g. the allostatic controller for self-exploration of objects). The robot has the drive to explore / understand the world. Then, the human enters the scene and puts a new object on the table. Because the robot wants to understand the world, it asks for the name of the new object. The human replies with the name, and the object is renamed accordingly in the OPC.

Subtask 8.2: How to achieve a goal with unknown objects

Same idea as in subtask 8.1, but this time the human places two objects on the table and asks the iCub to move the "cup". Because the robot does not know the object "cup", it asks which object that is. The human then points at the object / moves it around to increase its salience. The object name is updated in the OPC, and the object is moved. Now, because the robot wants to understand the world but still doesn't know the name of the second object, it asks the human for that name as well. Then we are back to subtask 8.1.

Subtask 8.3: But what if some objects are known (and others not)?

Here, the idea is the same as in subtask 8.2, except that one of the objects the human places on the table is known (the "banana"). The human asks for the "cup" to be moved, and because the first object is a "banana", the robot infers that the second object is the "cup". The object is therefore renamed in the OPC, and then the "cup" is moved as in subtask 8.2.

Subtask 8.4: Proactive tagging to understand the self

This scenario is not about external objects, but about the robot's own self. The aim is to learn the correspondence between the joints of the robot, their names and the corresponding tactile sensors. We will first move individual fingers (by joint number) and save the images in an autobiographical memory. Then, we apply kinematic structure learning to extract the moving part. This allows us to infer a correspondence between a joint and a cluster in the visual space. The iCub can then show the extracted cluster and/or move the joint and ask the human for the name of the finger, so that a joint+cluster+name correspondence is learnt. Still, the corresponding taxel sensors are unknown. The robot can then ask a human to touch finger X to learn this correspondence. At that point joint+cluster+name+taxel is known to the robot, and the drive to explore its own body is satisfied.

Task 9: Crawling on iCub/Coman

Contributors: S. Ivaldi, D. Goepp, F. Nori

We would like to make iCub or COMAN crawl on the floor, simply going straight forward.

For this, we would like to recycle code that was written some time ago. As it is quite old, we will have to port it to the current YARP and iCub interfaces.

Task 10: Estimating iCub dynamic quantities

Contributors: F. Nori, S. Traversaro, C. Latella

The goal of the task is to estimate dynamic quantities by fusing the different kinds of measurements coming from the iCub sensor distribution: gyroscopes, accelerometers and force/torque sensors. The quantities will be acquired in real time through the YARP middleware and then estimated using a probabilistic framework in Simulink. We would like to replicate the wholeBodyDynamicsTree module (for internal and external torque estimation) within BERDY (Bayesian Estimation for Robot Dynamics), which performs the same estimation in the presence of multiple redundant measurements. The estimation is carried out with a maximum-a-posteriori (MAP) strategy framed in a Bayesian setting.
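
In generic terms (not the exact BERDY formulation, which should be taken from the original papers), the MAP estimate for a linear-Gaussian model y = Y d + e, with measurement noise e ~ N(0, Σ_y) and a prior d ~ N(μ_D, Σ_D) encoding the dynamics, has the closed form

  \hat{d}_{\mathrm{MAP}} = \arg\max_d \; p(d \mid y) = \Sigma_{d|y}\left( Y^{\top}\Sigma_y^{-1}\,y + \Sigma_D^{-1}\mu_D \right),
  \qquad
  \Sigma_{d|y} = \left( \Sigma_D^{-1} + Y^{\top}\Sigma_y^{-1} Y \right)^{-1},

so the redundant gyroscope, accelerometer and force/torque measurements all enter consistently through Y and Σ_y.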

SubTask 10.1: Creating BERDY data structure

Contributors: F. Nori, S. Traversaro, C. Latella

Extract measurements coming in real time from the iCub and cluster them in a data structure compatible with the BERDY inputs.

SubTask 10.2: Implementing BERDY on Simulink

Implement the BERDY algorithm (already existing as Matlab code) in Simulink.

SubTask 10.3: Estimate joint offsets

Contributors: F. Nori

Estimate joint offsets from measurements using a differential version of BERDY.

Task 11: Analysis and Control of iCub joint elasticity

Contributors: N. Guedelha, D. Pucci

iCub joints have some degree of flexibility that can significantly impair the performance of the controller while it performs a given task, which can be as simple as trajectory tracking. The joint flexibility may come from several sources, such as the harmonic drive itself (the “soft” Flexspline transmission element) or the transmission belts integrated in iCub. Furthermore, in the case where the actuators are intrinsically elastic, i.e. they are Series Elastic Actuators, dealing with the joint elasticity often becomes necessary. We have at our disposal an iCub with high sensing capabilities and joints with a lockable Series Elastic feature. The goal is to use these capabilities to measure each joint deformation, identify its parameters, and include these parameters in the controller model. In the following subtasks, the controller performance refers to the position trajectory accuracy.
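
For reference, the textbook reduced model of an elastic joint (Spong's model) is a natural starting point for the identification and control subtasks below, with q the link position, θ the motor position after the gearbox, K the joint stiffness, B the motor inertia and τ_m the motor torque:

  M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = K(\theta - q)
  B\,\ddot{\theta} + K(\theta - q) = \tau_m

Identifying K (Subtask 11.3) and feeding the measured deflection θ − q back into the controller model (Subtask 11.4) is what the joint deformation measurements are needed for.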

Subtask 11.1: Test and evaluate the accuracy of a position controlled trajectory

Subtask 11.2: Evaluate the impact of Series elastic joints on the controller performance

Subtask 11.3: Identify the joint flexibility/elasticity parameters (harmonic drives & Series Elastic feature)

Subtask 11.4: Integrate the joint flexibility model in the controller

Subtask 11.5: Verify the new controller performance with Series Elastic feature locked

Subtask 11.6: Verify the new controller performance with Series Elastic feature unlocked

Task 12: Object Affordances Exploration

Contributors: (Francesca, Francesco N., Ugo, Vadim, Daniele D.)

Replicate the experiment presented in "Exploring affordances and tool use on the iCub" - V. Tikhanoff, U. Pattacini, L. Natale, G. Metta (Humanoids 2013), by leveraging a Bayesian network model in place of a Least Squares Support Vector Machine.

Subtask 12.1 Calling yarp from Matlab

Integrate the Matlab BN model within the YARP framework (Francesca, Daniele D.)

Subtask 12.2 Exploration of object affordances

Build an application that allows the iCub to explore object affordances, guiding the exploration process by the variance of the learned model's predictions (Francesca, Ugo, Vadim, Francesco N.)
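
One simple way to let the predictive variance guide the exploration, sketched here under the assumption that the Bayesian network returns a mean and a variance for each candidate action, is to try the action the model is currently most uncertain about:

  #include <vector>
  #include <cstddef>

  // Prediction of the learned affordance model for one candidate action:
  // expected effect plus the variance of that prediction.
  struct Prediction { double mean; double variance; };

  // Pick the candidate action whose predicted effect is most uncertain,
  // i.e. the one expected to be most informative to explore next.
  std::size_t selectMostUncertain(const std::vector<Prediction>& predictions) {
      std::size_t best = 0;
      for (std::size_t i = 1; i < predictions.size(); ++i)
          if (predictions[i].variance > predictions[best].variance)
              best = i;
      return best;
  }

After executing the selected action and observing its effect, the model is updated and the selection is repeated.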


Task 13: Analyzing the motion of iCub

Contributors: Roberto Barone, Carlos Cardoso, Alessia Vignolo (and open to others!)

The aim is to evaluate how biological the motion of iCub appears.

Subtask 13.1 Define the movements, make the robot execute them and record

Define robot movements specifically designed to be "biological" or "non-biological", ranging from the more artificial ones (a circular, elliptical or 8-shaped trajectory followed by the hand, either following the 2/3 power law or not) to the more natural ones (reaching for or lifting an object, with either a bell-shaped or a constant velocity profile). The robot should execute them while recording the end-effector pose and velocity as well as the video sequences from the cameras.
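
For clarity, the two-thirds power law mentioned above couples the instantaneous end-effector speed to the curvature of its path; in its usual formulation,

  v(t) = K\,\kappa(t)^{-1/3} \qquad\Longleftrightarrow\qquad \omega(t) = K\,\kappa(t)^{2/3},

where κ(t) is the path curvature, ω(t) the angular velocity and K a constant velocity gain. A constant-speed traversal of the same path violates the law (except on paths of constant curvature, such as a circle), which is what makes these artificial movements a controlled way to switch "biologicity" on and off.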

Subtask 13.2 Analyze the artificial movements

In the case of the artificial movements, we already know the ground truth about their "biologicity": we know whether each movement is biological or not, since we designed it to follow (or not follow) the 2/3 power law. We should then use a vision pipeline to check whether, starting from the recorded video sequence, it can correctly classify the movement as biological or not.

Subtask 13.3 Analyze the natural movements

In the case of the natural movements, we should first of all analyze the recorded kinematics and check whether they follow the 2/3 power law. We should then use a vision pipeline to check whether, starting from the recorded video sequence, it can correctly classify the movement as biological or not.