iCub Cognitive Architecture

From Wiki for iCub and Friends
Revision as of 18:33, 4 May 2009



The iCub cognitive architecture is the result of a detailed design process founded on the developmental psychology and neurophysiology of humans, so that it encapsulates what is currently known about the neuroscience of action, perception, and cognition. This process and its final outcome are documented in Deliverable D2.1: A Roadmap for the Development of Cognitive Capabilities in Humanoid Robots.

The architecture itself comprises a set of YARP executables, typically connected by YARP ports. Early prototypes were developed at a RobotCub project meeting at the University of Hertfordshire in July 2007 as an exercise in consolidating the software development effort of the project partners. Several subsequent versions were produced at the RobotCub Summer School 2007 (VVV '07). These prototypes were developed in parallel with the Roadmap effort mentioned above, and the two strands of design effort converged in the cognitive architecture shown below (version 1.0). Previous versions can be accessed via the links at the end of the page.
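To make the module-plus-ports pattern concrete, here is a minimal sketch of one such YARP executable, written against YARP's C++ API. The port name and payload are illustrative assumptions, not those of any real iCub module.

    // Minimal sketch of a YARP executable that streams data on a port.
    // The port name and payload are illustrative, not a real iCub module.
    #include <yarp/os/Network.h>
    #include <yarp/os/BufferedPort.h>
    #include <yarp/os/Bottle.h>
    #include <yarp/os/Time.h>

    int main() {
        yarp::os::Network yarp;                        // connect to the YARP name server
        yarp::os::BufferedPort<yarp::os::Bottle> out;  // streaming output port
        if (!out.open("/myModule/data:o")) return 1;   // register the port by name

        for (int i = 0; i < 10; ++i) {
            yarp::os::Bottle& b = out.prepare();       // reuse the outgoing buffer
            b.clear();
            b.addDouble(0.1 * i);                      // payload: one number per message
            out.write();                               // non-blocking send to all readers
            yarp::os::Time::delay(0.1);                // stream at roughly 10 Hz
        }
        out.close();
        return 0;
    }

A peer executable would open a matching input port and read Bottles from it; connecting the two by name (e.g. yarp connect /myModule/data:o /peerModule/data:i) is the only coupling the architecture assumes between modules.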

The immediate purpose in developing the software architecture is to create a core software infrastructure for the iCub so that it can exhibit the set of target behaviours required for Experimental Investigation 1.

[Figure: the iCub cognitive architecture, version 1.0 (Icub cognitive architecture v1.0.jpg)]


Modifications

  • Removed the tracker (should be handled by attention/salience sub-system)
  • Removed the face localization (should be handled by attention/salience sub-system)
  • Removed the hand localization (should be handled by attention/salience sub-system)
  • Removed the localization (should be handled by salience module)
  • Removed the attention selection
  • Added Exogenous Salience and Endogenous Salience (a minimal salience-combination sketch follows this list)
  • Added Locomotion
  • Added Matching
  • Added Auto-associative memory
  • Added Hetero-associative episodic memory
  • Added Affective state
  • Added Action selection
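Several of the removals above fold dedicated localization modules into the attention/salience sub-system. As a minimal sketch of what that sub-system reduces to: two salience maps, exogenous (stimulus-driven) and endogenous (goal-driven), blended by a weighted sum, with the peak taken as the attention target. The names, map representation, and weighting scheme are assumptions for illustration, not the iCub implementation.

    // Illustrative sketch only: blend two equally-sized salience maps and
    // return the single most salient location as the attention target.
    #include <cstddef>
    #include <vector>

    struct Peak { std::size_t x, y; float value; };

    Peak selectAttention(const std::vector<float>& exogenous,   // stimulus-driven map
                         const std::vector<float>& endogenous,  // goal-driven map, same size
                         std::size_t width,                     // map width in pixels
                         float wExo, float wEndo)               // blend weights
    {
        Peak best{0, 0, -1.0e30f};
        for (std::size_t i = 0; i < exogenous.size(); ++i) {
            float s = wExo * exogenous[i] + wEndo * endogenous[i];
            if (s > best.value)
                best = Peak{i % width, i / width, s};
        }
        return best;
    }

In the architecture itself the maps would arrive on YARP ports and the selected peak would be passed on towards gaze control; the balance between the two weights is one natural place for goal-directed modulation of attention.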

Notes

  • The controlGaze -> salience -> attentionSelection pathway is a fast retinotopic circuit. It is one of the routes by which motoric state modulates attentional capacity.
  • soundLocalization at present means simply a binaural localization of the direction of arrival of a sound, expressed in head-centric spherical coordinates. It should ultimately drive a sensorimotor capability that re-orients the iCub towards a localized sound (a minimal direction-of-arrival sketch follows this list).
  • faceLocalization would ideally be implemented as a ‘three blob’ detection; for the immediate future it will use an OpenCV face detector.
  • Reaching -> salience -> egoSphere -> attentionSelection -> controlGaze is the primary circuit for achieving visually-guided reaching (see the wiring sketch after this list).
  • visualSalience and attentionSelection are tightly coupled.
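At the software level the circuits above are chains of YARP port connections. The following sketch wires up the visually-guided reaching circuit with Network::connect, the standard YARP call for this; the port names are hypothetical, since each module defines its own.

    // Hedged sketch: wiring the visually-guided reaching circuit in YARP.
    // The port names are hypothetical; each module defines its own.
    #include <yarp/os/Network.h>

    int main() {
        yarp::os::Network yarp;              // initialise YARP
        if (!yarp.checkNetwork()) return 1;  // the name server must be reachable

        // Reaching -> salience -> egoSphere -> attentionSelection -> controlGaze
        yarp::os::Network::connect("/reaching/target:o",        "/salience/reach:i");
        yarp::os::Network::connect("/salience/map:o",           "/egoSphere/salience:i");
        yarp::os::Network::connect("/egoSphere/map:o",          "/attentionSelection/map:i");
        yarp::os::Network::connect("/attentionSelection/sel:o", "/controlGaze/target:i");
        return 0;
    }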
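For the soundLocalization note, the simplest binaural scheme estimates azimuth from the interaural time difference (ITD). The following is the standard far-field textbook approximation, offered only as a sketch: the microphone spacing is an assumed value, and the actual module may use a different method.

    // Far-field azimuth from interaural time difference (ITD):
    // sin(azimuth) = c * itd / d. A textbook sketch, not the iCub algorithm.
    #include <algorithm>
    #include <cmath>

    // itd: right-minus-left arrival-time difference in seconds.
    // Returns azimuth in radians, head-centric (0 = straight ahead).
    double azimuthFromITD(double itd,
                          double micSpacing = 0.14,     // metres between microphones (assumed)
                          double speedOfSound = 343.0)  // m/s in air
    {
        double s = speedOfSound * itd / micSpacing;     // sin(azimuth) in the far field
        s = std::max(-1.0, std::min(1.0, s));           // clamp against measurement noise
        return std::asin(s);                            // e.g. itd = 0 -> 0 rad (dead ahead)
    }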


Links

The following links are to early versions of the iCub "software architecture", a design for an iCub application (i.e. a set of YARP modules) that approximated some elementary aspects of the iCub cognitive architecture, which now supersedes them. These early versions have all been deprecated, as has the title "software architecture" in this context. Software Architecture now refers, as it originally did, to the YARP system.