VVV15 tasks

Task 1: Interactive Object Learning ++

Contributors (alphabetical order): (Alessandro, Lorenzo, Ugo, Vadim)

The interactive object learning demonstration could be improved in many ways:

Subtask 1.1: Add some human-robot interaction components

Contributors:

  • Look around to gaze at people (Reza's work)
  • Learn and recognize faces with a more natural robot interaction (Vadim's work)

Subtask 1.2: Smartphone in the loop

Contributors (alphabetical order): (Alessandro, Francesco R., Vadim). This subtask also links to Subtask 3.2.

  • Speech analysis and sending commands to the robot (see the sketch after this list)
  • Object recognition
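
A minimal sketch of the robot-side wiring (not the actual YARPdroid/iOSYARP code) could look as follows; the port names /phone/speech:o and /robot/commands:i are placeholders:

// Minimal sketch: forward a recognized speech command from a phone-side bridge
// to a robot-side port as a YARP Bottle. Port names are placeholders.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>

int main() {
    yarp::os::Network yarp;                      // initialize the YARP network

    yarp::os::BufferedPort<yarp::os::Bottle> out;
    out.open("/phone/speech:o");                 // placeholder port name

    // The robot-side module would open /robot/commands:i and parse the Bottle.
    yarp::os::Network::connect("/phone/speech:o", "/robot/commands:i");

    // Suppose the speech recognizer returned "move the cup".
    yarp::os::Bottle& cmd = out.prepare();
    cmd.clear();
    cmd.addString("move");
    cmd.addString("cup");
    out.write();                                 // send the command to the robot

    out.close();
    return 0;
}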

Task 2: Evaluate real-time extensions in complex motor control tasks

Contributors: Ali, Lorenzo, Silvio?, Francesco(?), Luca?, Enrico?

Robots: iCub & Coman

Subtask 2.1 Perform fine tuning of control loops in complex tasks using real-time extensions in YARP

Subtask 2.2 This discussion also involves the design and development of a real-time execution layer for the robotinterface
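
As a starting point for the tuning work, a plain fixed-rate control loop can be written with yarp::os::RateThread; the sketch below only shows the periodic structure whose timing the real-time extensions should make deterministic (the real-time execution layer itself is not part of the sketch):

// Minimal sketch of a fixed-rate control loop using yarp::os::RateThread.
#include <yarp/os/Network.h>
#include <yarp/os/RateThread.h>
#include <yarp/os/Time.h>
#include <cstdio>

class ControlLoop : public yarp::os::RateThread {
public:
    ControlLoop() : yarp::os::RateThread(10) {}  // 10 ms period (100 Hz)

    bool threadInit() override {
        // Open drivers / ports here (omitted in this sketch).
        return true;
    }

    void run() override {
        // Read sensors, compute the control law, write motor commands.
        // The jitter of this call is what the real-time extensions should reduce.
        std::printf("control step at t = %.3f s\n", yarp::os::Time::now());
    }

    void threadRelease() override {
        // Close drivers / ports here.
    }
};

int main() {
    yarp::os::Network yarp;
    ControlLoop loop;
    loop.start();                 // spawns the periodic thread
    yarp::os::Time::delay(1.0);   // let it run for one second
    loop.stop();
    return 0;
}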

Task 3: Improve the platform as we know it

Subtask 3.1 Improve Gazebo Simulator

To better serve the iCub community, the current simulation of iCub on Gazebo should reach feature parity with iCub_SIM. For further details on which features the Gazebo simulation of iCub is lacking with respect to iCub_SIM, please check https://github.com/robotology/gazebo-yarp-plugins/issues/132 .

Add a full simulation of the eyes' system

The iCub Gazebo simulation is currently lacking a way of simulating the moving eyes of iCub.

Further details on this task can be found in https://github.com/robotology/gazebo-yarp-plugins/issues/149 .

Add a full simulation of the hands' system

The iCub Gazebo simulation is currently lacking a way of simulating the hands of the iCub.

Further details on this task can be found in https://github.com/robotology/gazebo-yarp-plugins/issues/197 .

Add world interface

iCub_SIM has a handy YARP RPC interface for creating and deleting objects.

It would be nice to have a similar interface in Gazebo; for more info please check https://github.com/robotology/gazebo-yarp-plugins/issues/159 .
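
For reference, this is roughly how the existing iCub_SIM world port is driven from a YARP RPC client; the command grammar ("world mk box ...") should be double-checked against the iCub simulator documentation:

// Sketch of using the iCub_SIM world RPC interface from an RPC client.
#include <yarp/os/Network.h>
#include <yarp/os/RpcClient.h>
#include <yarp/os/Bottle.h>
#include <cstdio>

int main() {
    yarp::os::Network yarp;

    yarp::os::RpcClient world;
    world.open("/worldClient");
    yarp::os::Network::connect("/worldClient", "/icubSim/world");

    // Create a red box: "world mk box <size x y z> <pos x y z> <colour r g b>"
    yarp::os::Bottle cmd, reply;
    cmd.fromString("world mk box 0.05 0.05 0.05 0.0 0.6 0.3 1 0 0");
    world.write(cmd, reply);
    std::printf("reply: %s\n", reply.toString().c_str());

    // Delete everything that was created: "world del all"
    cmd.fromString("world del all");
    world.write(cmd, reply);

    world.close();
    return 0;
}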

Add Windows support

The latest version of Gazebo comes with (experimental) Windows support ( http://gazebosim.org/blog/gazebo_on_windows ).

We have not yet tried the gazebo-yarp-plugins on Windows, as this would probably require some work on how Gazebo supports the loading of external plugins. For more info please check https://github.com/robotology/gazebo-yarp-plugins/issues/74#issuecomment-102297508 .

Subtask 3.2 Smartphone in the loop

Contributors (alphabetical order): (Alessandro, Francesco R., Randazzo, Vadim)

YARPdroid will pave the way...

...and iOSYARP will try to keep up!

Task 4: Event-driven iCub development

Contributors (alphabetical order): (Arren, Chiara, Samantha, Valentina)

Subtask 4.1 Interactive iCub Demonstration

Incorporating recent work on event-driven optical flow and ball tracking with existing iCub modules to perform the 'red-ball demo' using the event-driven cameras. Any coloured ball could be used (collaboration with Alessandro/Ugo/Vadim?).
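
A possible wiring sketch (assumptions: the event-driven tracker publishes a 3D ball estimate in the root frame on a port here called /eventBallTracker/pos:o, and gaze is driven through iKinGazeCtrl's /iKinGazeCtrl/xd:i input; reaching is not shown):

// Sketch: read a 3D ball estimate from a hypothetical tracker port and forward
// it to iKinGazeCtrl's fixation-point input.
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>

int main() {
    yarp::os::Network yarp;

    yarp::os::BufferedPort<yarp::os::Bottle> tracker;  // ball position (x y z), root frame
    yarp::os::BufferedPort<yarp::os::Bottle> gaze;     // fixation point for iKinGazeCtrl
    tracker.open("/redBallDemo/ball:i");                // placeholder names
    gaze.open("/redBallDemo/gaze:o");

    yarp::os::Network::connect("/eventBallTracker/pos:o", "/redBallDemo/ball:i"); // assumed tracker port
    yarp::os::Network::connect("/redBallDemo/gaze:o", "/iKinGazeCtrl/xd:i");

    while (true) {
        yarp::os::Bottle* ball = tracker.read();        // blocking read of the latest estimate
        if (ball == nullptr || ball->size() < 3) continue;

        yarp::os::Bottle& fix = gaze.prepare();
        fix.clear();
        fix.addDouble(ball->get(0).asDouble());
        fix.addDouble(ball->get(1).asDouble());
        fix.addDouble(ball->get(2).asDouble());
        gaze.write();                                   // iCub gazes at the ball
    }
    return 0;
}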

Subtask 4.2 Integration of iCub with SpiNNaker

Integrating the DVS event-based sensor with parallel event-based processing using the SpiNNaker hardware, for visual attention and control of the iCub.

Task 5: Development for COMAN

Contributors (alphabetical order): (Enrico MH, Luca M)

Subtask 5.1 From Simulation to Real Robot

Develop simple tasks first in GAZEBO Simulator and then move to the real robot.

Subtask 5.2 Use OpenSoT in COMAN

Develop simple high level tasks involving OpenSoT and COMAN.

Subtask 5.3 Use OpenSoT in iCub

Develop simple high level tasks involving OpenSoT and iCub.

Task 6: Tactile Perception for iCub

Contributors: Massimo Regoli and Nawid Jamali

Subtask 6.1 Object contour following using tactile sensors

Subtask 6.2 Object grasping using tactile sensors

Subtask 6.3 Object recognition using tactile sensors

Task 7: From two feet balancing to balancing on a seesaw on iCubGenova02 (purple)

Contributors: D. Pucci, F. Romano, S. Traversaro

Subtask 7.1 Fine calibration of the robot joints

Subtask 7.2 Identification of the motors' parameters

Subtask 7.3 Whole body impedance

Subtask 7.4 Balancing on two feet

Subtask 7.5 Balancing on one foot

Subtask 7.6 Balancing on a seesaw

Task 8: Proactive Tagging

Contributors: T. Fischer, M. Petit, A.-L. Mealier, U. Pattacini, C. Moulin-Frier, J. Puigbo (and open to others); also G. Pointeau remotely from Lyon

The aim is to learn language proactively. There will be different drives: 1) to explore, 2) to understand the world, and 3) to trigger questions for labelling unknown objects in the Objects Properties Collector (OPC). Proactive tagging will be triggered either by a goal that is (at the moment) unachievable, or by the drive to explore.
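
Purely as an illustration of the drive idea (this is not the project's allostatic controller), the triggering mechanism can be pictured as drives that decay over time and fire a behaviour when they fall below a threshold:

// Illustrative sketch only: two drives decay over time, and whenever one drops
// below its threshold the corresponding behaviour is triggered (explore, or ask
// about an unknown object).
#include <cstdio>
#include <initializer_list>

struct Drive {
    const char* name;
    double level;      // current level in [0, 1]
    double decay;      // how much the level drops per time step
    double threshold;  // below this value the behaviour is triggered
};

int main() {
    Drive explore   {"explore",          1.0, 0.10, 0.4};
    Drive knowWorld {"understand-world", 1.0, 0.05, 0.4};

    for (int t = 0; t < 20; ++t) {
        for (Drive* d : {&explore, &knowWorld}) {
            d->level -= d->decay;
            if (d->level < d->threshold) {
                // In the real system this would, e.g., trigger self-exploration
                // or a question about an unlabelled object in the OPC.
                std::printf("t=%2d: drive '%s' below threshold -> trigger behaviour\n",
                            t, d->name);
                d->level = 1.0;  // the behaviour satisfies the drive
            }
        }
    }
    return 0;
}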

Subtask 8.1: The simple version

The robot first behaves autonomously (using, e.g., the allostatic controller for self-exploration of objects). The robot has the drive to explore / understand the world. Then the human enters the scene and puts a new object on the table. Because the robot wants to understand the world, it asks for the name of the new object. The human replies with the name, and the object is then renamed in the OPC.
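
The renaming step could be done over the OPC RPC port roughly as sketched below, assuming an objectsPropertiesCollector-style interface at /OPC/rpc that accepts "set ((id <n>) (name <label>))"; the exact verbs, encoding and port name must be checked against the running OPC, or the project's OPC client library should be used instead:

// Hedged sketch of renaming an object in the OPC over its RPC port.
#include <yarp/os/Network.h>
#include <yarp/os/RpcClient.h>
#include <yarp/os/Bottle.h>
#include <cstdio>

int main() {
    yarp::os::Network yarp;

    yarp::os::RpcClient opc;
    opc.open("/proactiveTagging/opc:rpc");
    yarp::os::Network::connect("/proactiveTagging/opc:rpc", "/OPC/rpc");  // assumed OPC port

    // Rename the object with (assumed) id 25 to "cup",
    // i.e. send: set ((id 25) (name cup))
    yarp::os::Bottle cmd, reply;
    cmd.addString("set");
    yarp::os::Bottle& content = cmd.addList();
    yarp::os::Bottle& idPair = content.addList();
    idPair.addString("id");
    idPair.addInt(25);
    yarp::os::Bottle& namePair = content.addList();
    namePair.addString("name");
    namePair.addString("cup");

    opc.write(cmd, reply);
    std::printf("OPC reply: %s\n", reply.toString().c_str());

    opc.close();
    return 0;
}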

Subtask 8.2: How to achieve a goal with unknown objects

The idea is the same as in subtask 8.1, but this time the human places two objects on the table and asks the iCub to move the "cup". Because the robot does not know the object "cup", it asks which object it is. The human then points at the object / moves it around to increase its salience. The object name is updated in the OPC, and the object is moved. Now, because the robot wants to understand the world but still does not know the name of the second object, it asks the human for the name. Then we are back to subtask 8.1.

Subtask 8.3: But what if some objects are known (and others not)?

Here the idea is the same as in subtask 8.2, except that one of the objects the human places on the table is known (the "banana"). The human asks for the "cup" to be moved, and because the first object is a "banana", the robot infers that the second object is the "cup". The object is therefore renamed in the OPC, and the "cup" is then moved as in subtask 8.2.

Task 9: Crawling on iCub/Coman

Contributors: S. Ivaldi, D. Goepp, F. Nori