Serge - Multiple iCub person


Basic idea

For a variety of tasks, it is useful to have more than one agent (in this case Mr iCub) available in the world. The specific interest here is imitation, for which there would be two iCubs: one demonstrating movements and another one observing them. The observing robot could then learn certain tasks by watching the demonstrator perform them. Ideally, more or less the same code should also be able to recognise and learn from the actions of humans when implemented on the real robot, but that will simply have to wait until some point in the more distant future.

Fundamental problem

This sounds easy, but the simulator is already pretty CPU-intensive, and everyone who understands it seems to agree that adding another iCub to the same simulation is a Very Bad Idea (tm). At best, it would result in a fantastically slow (i.e. useless) simulation, and the computer trying to run it might well die a horrible flaming death.

Simple solution

[Updated] This section previously proposed recording the visual streams from an iCub in demonstrator mode and playing them back at a later stage into the visual stream of the same iCub, now in observer mode. The plan was to later extend the code to do the same thing live, with different iCubs living in different simulators running, potentially, on different laptops. However, after a few beers, it became clear to me that it should be the other way around: the first, easier step is actually to connect different iCubs in different simulators. This is also the more powerful solution, since head movements of the observer iCub no longer cause a problem. Recording and playing back visual streams then becomes a trivial but more limited extension, useful mainly when computing power is in short supply.

The basic idea is that each simulator will contain one iCub, as at the moment, plus two additional viewpoints (like /cam/left and /cam/right) placed exactly where a second iCub's eyes would be. These viewpoints are paired with the eyes of a second iCub in a different simulator; that is to say, the iCub actually sees what is observable from those viewpoints, with its own body and the objects in its own world overlaid where visible. If the head position of the observing iCub changes, the change is also applied to the viewpoints in the first simulator.
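To make the pairing concrete, here is a minimal sketch of a small YARP bridge that forwards the observer's head joint angles to the viewpoints in the demonstrator's simulator. The port names are assumptions for illustration: /observer/head/state:o is modelled on the simulator's usual naming, and /demonstrator/viewpoint:i is entirely hypothetical.

    // Minimal sketch (assumed port names): forward the observer's head
    // joint angles so the demonstrator's extra viewpoints track them.
    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Bottle.h>

    int main() {
        yarp::os::Network yarp;  // initialise the YARP network

        // Head joint angles streamed by the observer's simulator
        yarp::os::BufferedPort<yarp::os::Bottle> headIn;
        headIn.open("/viewpointBridge/head:i");

        // Hypothetical port on the demonstrator's simulator that moves
        // the two additional viewpoint cameras
        yarp::os::BufferedPort<yarp::os::Bottle> viewOut;
        viewOut.open("/viewpointBridge/viewpoint:o");

        yarp::os::Network::connect("/observer/head/state:o",
                                   "/viewpointBridge/head:i");
        yarp::os::Network::connect("/viewpointBridge/viewpoint:o",
                                   "/demonstrator/viewpoint:i");

        while (true) {
            yarp::os::Bottle *head = headIn.read();  // blocking read
            if (head == nullptr) break;
            // Pass the joint angles straight through; any coordinate
            // transformation between the two worlds would go here.
            viewOut.prepare() = *head;
            viewOut.write();
        }
        return 0;
    }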

Necessary changes in the simulator

There are a few changes that need to be made to the simulator for this to work (this is basically the plan for day 5):

  • Switch off the textures of the sky and the floor and replace them with uniform colours. This makes it much easier to detect the iCub's body and objects for overlaying purposes. [Done]
  • Fix the lighting in the simulator [Done, became undone, now done again]
  • Allow the user to specify, when starting the simulator, what the simulator's main port is called, so it becomes easy to talk to multiple simulators at the same time in yarp (a sketch of such option parsing follows this list). [Done]
  • Allow the user to specify which configuration file to use initially, so different iCubs can have different properties (and potentially the simulator name could also be specified there). Update: this is simply too messy, and since there is a workaround (start the different simulators in different directories, each containing an iCub_parts_activation.ini), it will not happen. [Abandoned]
  • Add two more cameras and ports for them [Done]
  • Allow one observing iCub to modify the camera positions in the demonstrator's simulator realistically based on its own head position [Finally done!]
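As a sketch of the port-name option mentioned above: YARP's Property class can pull such a setting straight from the command line. The option name --name and the default /icubSim are assumptions for illustration.

    // Minimal sketch of parsing a --name option at simulator start-up,
    // e.g.  ./iCub_SIM --name /demonstrator
    #include <yarp/os/Property.h>
    #include <yarp/os/Value.h>
    #include <cstdio>
    #include <string>

    int main(int argc, char *argv[]) {
        yarp::os::Property options;
        options.fromCommand(argc, argv);

        // Fall back to the usual /icubSim prefix if no name is given
        std::string name =
            options.check("name", yarp::os::Value("/icubSim")).asString();

        // The simulator would then open all its ports under this prefix,
        // e.g. name + "/cam/left", name + "/head/state:o", ...
        std::printf("opening ports under %s\n", name.c_str());
        return 0;
    }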

Visual processing

Now that the simulator has all the required changes, constructing the actual view of the observing iCub is relatively trivial: what it sees in its own simulator has to be laid over what it sees in the demonstrator's simulator, and the resulting composite images (one per eye) are basically what we want.
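Since the sky and floor are now uniform colours (see the list above), the overlay boils down to a chroma-key: wherever the observer's own image shows only background, show the demonstrator's image instead. A minimal sketch, assuming the background is rendered in a single known colour (pure green here; in reality it is whatever the sky and floor were set to):

    // Build the composite view for one eye: the observer's own body and
    // objects stay on top, the background is filled in from the
    // demonstrator's simulator.
    #include <yarp/sig/Image.h>

    using yarp::sig::ImageOf;
    using yarp::sig::PixelRgb;

    void composite(const ImageOf<PixelRgb> &own,
                   const ImageOf<PixelRgb> &other,
                   ImageOf<PixelRgb> &out) {
        const PixelRgb background(0, 255, 0);  // assumed background colour
        out.resize(own.width(), own.height());
        for (int y = 0; y < (int)own.height(); y++) {
            for (int x = 0; x < (int)own.width(); x++) {
                const PixelRgb &p = own.pixel(x, y);
                bool isBackground = p.r == background.r &&
                                    p.g == background.g &&
                                    p.b == background.b;
                out.pixel(x, y) = isBackground ? other.pixel(x, y) : p;
            }
        }
    }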

How it works and simple example

We start two simulators with two different names (/observer and /demonstrator here, but they could be anything you want). /demonstrator is then told to demonstrate while /observer is told to observe /demonstrator. This opens different ports in each simulator: /observer gains an additional port for communicating its eye positions to /demonstrator, while /demonstrator reads this information from one port and then outputs what the observing iCub would see if it were facing the demonstrating iCub. The actual vision of the observing iCub can then be constructed from what it sees in its own simulator and what it sees in the demonstrator's simulator. Since this sounds confusing, I've drawn a picture illustrating the relationship between the two simulators and how the vision of the observer comes about (a sketch of the port wiring follows the picture):

Demo obs layout.png
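In YARP terms, the wiring between the two simulators and the compositing module could look roughly like this (all port names are assumptions, modelled on the simulator's usual /cam/left and /cam/right naming):

    // Minimal wiring sketch: eye positions go from observer to
    // demonstrator, and both camera streams go to the compositor.
    #include <yarp/os/Network.h>

    int main() {
        yarp::os::Network yarp;

        // Observer tells the demonstrator where its eyes are
        yarp::os::Network::connect("/observer/eyes:o", "/demonstrator/eyes:i");

        // Left eye: the observer's own view plus the matching viewpoint
        // in the demonstrator's simulator
        yarp::os::Network::connect("/observer/cam/left", "/composite/own/left:i");
        yarp::os::Network::connect("/demonstrator/cam/view/left", "/composite/other/left:i");

        // ...and the same for the right eye
        yarp::os::Network::connect("/observer/cam/right", "/composite/own/right:i");
        yarp::os::Network::connect("/demonstrator/cam/view/right", "/composite/other/right:i");
        return 0;
    }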

Initially, with the two simulators started, we will thus have a situation like this (with the view of the observing iCub on top, the demonstrating iCub at the bottom, and the views (left and right eyes) of the observer in between):

Obs dem 1.png

Next, we can tell the demonstrator to move, for instance, its left arm, and the observer will see this in real time (a sketch of sending such a command follows the screenshot):

Obs dem 2.png
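For reference, this is roughly what "telling the demonstrator to move its left arm" looks like from the outside, assuming the renamed simulator exposes the usual control-board ports under its own prefix (/demonstrator/left_arm is an assumption modelled on the standard /icubSim/left_arm):

    // Minimal sketch: command one joint of the demonstrator's left arm
    // through a remote control board.
    #include <yarp/os/Network.h>
    #include <yarp/os/Property.h>
    #include <yarp/dev/PolyDriver.h>
    #include <yarp/dev/ControlBoardInterfaces.h>

    int main() {
        yarp::os::Network yarp;

        yarp::os::Property opt;
        opt.put("device", "remote_controlboard");
        opt.put("remote", "/demonstrator/left_arm");  // assumed prefix
        opt.put("local", "/mover/left_arm");

        yarp::dev::PolyDriver driver(opt);
        yarp::dev::IPositionControl *pos = nullptr;
        if (!driver.isValid() || !driver.view(pos)) return 1;

        pos->positionMove(0, -45.0);  // joint 0 (shoulder pitch), degrees
        return 0;
    }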

Finally, the observer will imitate the action (it's faked here - we tell it to move its arm, but in the near future this will happen for real):

Obs dem 3.png

The interesting thing to observe is that the observer can see both its own arm and the demonstrating iCub even though they live in different simulators (and potentially on different computers)!

Shiny features

  • All movements of the observer (including turning/tilting the head/eyes) will result in correct transformations of the view (including that of the demonstrator).
  • The position of the observer with respect to the demonstrator can be changed (currently only along a closer/further-away axis, but left/right movements and rotation will follow).
  • Both iCubs can observe each other; neither is limited to the role of demonstrator or observer alone.
  • The two simulators and the program required for constructing the observer's view seem to run pretty happily on my laptop. While this is obviously not cheap computationally, it seems to be something at least a dual-core CPU can handle.
  • In theory, it is possible to have multiple iCubs standing around all observing each other, but this does require some further changes in the code.

Current limitations

  • The world (ground and sky) has to be a uniform colour for the overlaying of the different pictures to work. This limitation will be removed pretty soon.
  • Anything in the observer's simulator (i.e. objects) is automatically assumed to be closer to the observer than the demonstrator is. This should be fine in most situations (e.g. two iCubs, or two iCubs with a table each), but it would obviously be neater if the objects in the composite view were drawn based on their actual distance from the viewpoint.
  • If objects move in both simulators at once, the images making up the composite view may be transiently slightly out of sync. Again, this is not a big problem, but it is something to keep in mind; it can probably be fixed by taking the timing of the different images into account (see the sketch after this list).
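On the last point, a minimal sketch of what that timing fix could look like, assuming the simulators stamp their images with YARP's standard Stamp envelope (whether they actually do is an assumption here):

    // Compare the timestamps of the two most recently read images and
    // only composite them when they are close enough in time.
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Stamp.h>
    #include <yarp/sig/Image.h>
    #include <cmath>

    using yarp::sig::ImageOf;
    using yarp::sig::PixelRgb;

    bool framesInSync(yarp::os::BufferedPort<ImageOf<PixelRgb>> &own,
                      yarp::os::BufferedPort<ImageOf<PixelRgb>> &other) {
        yarp::os::Stamp sOwn, sOther;
        own.getEnvelope(sOwn);    // timestamp of the last image read
        other.getEnvelope(sOther);
        // 40 ms (roughly one frame) is a guessed threshold
        return std::fabs(sOwn.getTime() - sOther.getTime()) < 0.04;
    }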

Conclusions

It is now possible to have iCubs observing each other, although they won't be able to touch each other. Options for recording and playing back visual streams, rather than using live views from different simulators, can (hopefully) be added trivially if needed, but that is not going to happen just now.