iCub software architecture


This page presents the last version of the iCub "software architecture", a design for an iCub application (i.e. a set of YARP modules) that approximated some of the elementary aspects of the iCub Cognitive Architecture, which now supersedes it. This design, and its earlier versions (see the Links section below), have all been deprecated, as has the use of the term "software architecture" in this context. Software architecture now refers, as it originally did, to the YARP system.


The iCub software architecture comprises a set of YARP executables, typically connected by YARP ports. The first version was developed at a RobotCub project meeting at the University of Hertfordshire in July 2007 as an exercise in consolidating the software development effort of the project partners. Several subsequent versions were produced at the RobotCub Summer School 2007 (VVV '07), and the design continued to evolve thereafter.
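As a concrete illustration of what "YARP executables connected by YARP ports" means in practice, the sketch below shows a module opening an output port and connecting it to another module's input, assuming a recent YARP release. The port names (/visualSalience/out, /egoSphere/in) and the message contents are hypothetical placeholders, not the actual interfaces of the iCub modules.

    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Bottle.h>

    int main() {
        yarp::os::Network yarp;                 // initialise YARP; needs a running name server
        if (!yarp::os::Network::checkNetwork()) {
            return 1;                           // no name server reachable
        }

        // Open an output port for this (hypothetical) module
        yarp::os::BufferedPort<yarp::os::Bottle> out;
        out.open("/visualSalience/out");

        // Connect it to another (hypothetical) module's input port
        yarp::os::Network::connect("/visualSalience/out", "/egoSphere/in");

        // Write one message: a label and a value
        yarp::os::Bottle& b = out.prepare();
        b.clear();
        b.addString("salience");
        b.addFloat64(0.5);
        out.write();

        out.close();
        return 0;
    }

In practice such connections are usually established from the command line (yarp connect) or via an application description managed by yarpmanager, rather than hard-coded inside a module.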

The immediate purpose in developing the software architecture is to create a core software infrastructure for the iCub so that it will be able to exhibit a set of target behaviours for Experimental Investigation 1.

Our intention is for the software architecture to evolve so that it is compatible with what is known about the neuroscience of action, perception, and cognition. Ideally, it should also evolve in a manner that is compatible with the iCub cognitive architecture and, vice versa, the cognitive architecture should evolve to be compatible with the software architecture. Ultimately, they should be the same thing.

The last version of the iCub software architecture is shown below. Previous versions can be accessed via the links at the end of the page.

Each version is accompanied by a commentary that describes the changes made to the previous version in developing the current one.

[Figure: iCub software architecture v0.4 (ICub software architecture v0.4.jpg)]


Modifications

  • The current salience module becomes visualSalience
  • Deleted connection from soundLocalization to attentionSelection
  • Deleted connection from colour to egoSphere
  • Deleted connection from motion to egoSphere
  • Added a new auralFiltering module (e.g. timbre, pitch, intensity, ...), as a placeholder for future developments
  • Added a new auralSalience module, as a placeholder for a future development
  • Added a connection from auralFiltering to auralSalience
  • Added a connection from auralSalience to egoSphere
  • Added a connection from auralSalience to soundLocalization, as a placeholder for future developments (a sketch of how such port connections might be made follows this list)
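The sketch below shows how the connection changes listed above could be made programmatically with the YARP C++ API. The port names are hypothetical, since the actual names depend on how each module opens its ports; in practice such connections would more commonly be listed in a YARP application description or made with yarp connect / yarp disconnect.

    #include <yarp/os/Network.h>

    int main() {
        yarp::os::Network yarp;   // requires a running YARP name server

        // Connections removed in this revision (hypothetical port names)
        yarp::os::Network::disconnect("/soundLocalization/out", "/attentionSelection/in");
        yarp::os::Network::disconnect("/colour/out", "/egoSphere/in");
        yarp::os::Network::disconnect("/motion/out", "/egoSphere/in");

        // Connections added in this revision (hypothetical port names)
        yarp::os::Network::connect("/auralFiltering/out", "/auralSalience/in");
        yarp::os::Network::connect("/auralSalience/out", "/egoSphere/in");
        yarp::os::Network::connect("/auralSalience/out", "/soundLocalization/in");

        return 0;
    }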

Observations

  • attentionSelection is likely to expand (and/or sub-divide), first into automaticActionSelection, and then into actionSelection
  • egoSphere is likely to expand (and/or sub-divide) significantly in the future. First, it might become a full expression of peripersonal space, encompassing proprioceptive and exteroceptive perception in a temporally stable fashion. This stability would be maintained despite re-orientation by the iCub. Second, it might then become some form of allocentric mechanism, effected possibly by some mutual association of many egocentric representations. This allocentric mechanism would be stable despite movement of the iCub around its environment.
  • The controlGaze -> salience -> attentionSelection circuit is a fast retinotopic circuit. It is one of the circuits by which motoric state modulates attentional capacity.
  • soundLocalization at present implies simply a binaural localization of the direction of arrival of a sound, expressed in head-centric spherical coordinates (see the sketch after this list). It should ultimately effect some sensorimotor capability that acts to re-orient the iCub towards a localized sound.
  • faceLocalization would ideally be implemented as a ‘three blob’ detection; for the immediate future it is intended to use an OpenCV face detector.
  • The reaching -> salience -> egoSphere -> attentionSelection -> controlGaze circuit is the primary circuit for achieving visually-guided reaching.
  • visualSalience and attentionSelection are tightly-coupled.
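As a small illustration of the representation mentioned in the soundLocalization observation above, the sketch below publishes a direction-of-arrival estimate in head-centric spherical coordinates on a YARP port. The port name, message layout, and sign conventions are assumptions for illustration only, not the actual soundLocalization interface.

    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Bottle.h>

    int main() {
        yarp::os::Network yarp;   // needs a running YARP name server

        // Hypothetical output port for the direction-of-arrival estimate
        yarp::os::BufferedPort<yarp::os::Bottle> doaPort;
        doaPort.open("/soundLocalization/doa:o");

        // Head-centric spherical direction of a sound source: a binaural
        // estimate gives direction only, so no radial distance is reported.
        double azimuthDeg   = 30.0;   // e.g. towards the iCub's left (assumed convention)
        double elevationDeg = 10.0;   // e.g. slightly above the horizontal plane

        yarp::os::Bottle& b = doaPort.prepare();
        b.clear();
        b.addFloat64(azimuthDeg);
        b.addFloat64(elevationDeg);
        doaPort.write();

        doaPort.close();
        return 0;
    }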

Notes

Timbre, in the sense of time-frequency characteristics of a signal, typically focussing on the pattern of growth and decay of harmonics.

Allocentric, in the sense of addressing aspects of the iCub’s sensory world that have been experienced but are not presently within its sensory compass.

Open Issues

At some point, we need to figure out what segmentation means, where it fits in this architecture, and how it is effected.

The same applies for object recognition.

Links