
iCub Cognitive Architecture

From Wiki for iCub and Friends



The Evolution of the Architecture

The iCub cognitive architecture is the result of a detailed design process founded on the developmental psychology and neurophysiology of humans, capturing much of what is known about the neuroscience of action, perception, and cognition. This process and its final outcome are documented in Deliverable D2.1: A Roadmap for the Development of Cognitive Capabilities in Humanoid Robots.


The architecture itself is realized as a set of YARP executables, connected by YARP ports. Early prototypes were developed at a RobotCub project meeting at the University of Hertfordshire in July 2007 as an exercise in consolidating the software development effort of the project partners. Several subsequent versions were produced at the RobotCub Summer School 2007 VVV '07. These prototypes were developed in parallel with the D2.1 Roadmap effort mentioned above. These two strands of design effort converged in the cognitive architecture shown below (Version 0.4). Previous versions can be accessed via the links at the end of the page. VVV '09 addressed the development of the architecture's (auto-associative) episodic and (hetero-associative) procedural memories.


The immediate purpose of developing the cognitive architecture is to create a core software infrastructure for the iCub so that it can exhibit a set of target behaviours for the Empirical Investigations.


Figure 1: iCub cognitive architecture, version 0.4.

Differences from previous version

  • Removed the tracker (should be handled by attention/salience sub-system)
  • Removed the face localization (should be handled by attention/salience sub-system)
  • Removed the hand localization (should be handled by attention/salience sub-system)
  • Removed the sound localization (should be handled by salience module)
  • Removed the attention selection
  • Added Exogenous Salience and Endogenous Salience
  • Added Locomotion
  • Added Matching
  • Added Auto-associative episodic memory
  • Added Hetero-associative procedural memory
  • Added Affective state
  • Added Action selection

Notes

  • Gaze implies 7 DoF: head and eyes
  • Locomotion paradigm: “go where you are looking”
  • Reaching paradigm: “reach where you are looking”
  • Endogenous and exogenous salience denote salience driven by internal and external events, respectively
  • Gaze, reaching, and locomotion motor activities condition endogenous salience: i.e. motor states condition attention
  • Sensory inputs condition exogenous salience: i.e. sensory states condition attention
  • Episodic memory is memory of autobiographical events. Initially, this is purely visual and implemented as an auto-associative memory. Later it will be multimodal and will include sound as well as associated emotions. It will then have to be implemented as a hetero-associative network of unimodal auto-associative memories.
  • Episodic memory storage is conditioned by poor matching and high salience
  • Procedural memory is defined as memory for perception-action event sequences
  • Procedural memory recall:
    • Event A & Event D inputs recall sequence of intermediate events
    • Event A input recalls Event B (subsequent event)
  • Affective state is a competitive network of three motives:
    • Curiosity (exogenous salience prevalent)
    • Experimentation (endogenous salience prevalent)
    • Social engagement (exogenous and endogenous salience balanced)
  • Action selection is not a winner-take-all process: one or more actions are disinhibited
  • The developmental drive is to construct a procedural memory that improves prediction
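The auto-associative episodic memory and its storage gate can be illustrated with a toy Hopfield-style network in pure Python. This is only a sketch of the principle (pattern completion from a noisy cue, storage gated by poor matching and high salience); the pattern size, thresholds, and function names are invented here and are not part of the iCub implementation.

```python
# Toy auto-associative (Hopfield-style) episodic memory over bipolar
# (+1/-1) patterns. All names and thresholds are illustrative.

def train(patterns):
    """Build a Hebbian weight matrix from the stored patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=5):
    """Iteratively complete a noisy cue towards the nearest stored pattern."""
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

def should_store(match_score, salience):
    """Gate storage on poor matching and high salience (thresholds invented)."""
    return match_score < 0.5 and salience > 0.5

stored = [1, 1, -1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]                # corrupt one element of the cue
print(recall(w, noisy) == stored)   # the memory restores the stored pattern
```

A real episodic memory would of course operate on image (and later multimodal) feature vectors rather than hand-written bipolar patterns; the point here is only the auto-associative completion and the storage gate.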
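The two recall modes of the procedural memory noted above (an event recalls its successor; a pair of events recalls the intermediate sequence) can be sketched as a simple sequence store. The class and method names below are hypothetical illustrations, not iCub module APIs.

```python
# Sketch of procedural memory as stored perception-action event sequences.
# Names are invented for illustration.

class ProceduralMemory:
    def __init__(self):
        self.sequences = []  # each entry is a list of event labels

    def store(self, sequence):
        self.sequences.append(list(sequence))

    def next_event(self, event):
        """Event A recalls Event B, the subsequent event."""
        for seq in self.sequences:
            if event in seq:
                i = seq.index(event)
                if i + 1 < len(seq):
                    return seq[i + 1]
        return None

    def between(self, start, end):
        """Event A and Event D inputs recall the intermediate events B, C."""
        for seq in self.sequences:
            if start in seq and end in seq:
                i, j = seq.index(start), seq.index(end)
                if i < j:
                    return seq[i + 1:j]
        return None

pm = ProceduralMemory()
pm.store(["A", "B", "C", "D"])
print(pm.next_event("A"))    # B
print(pm.between("A", "D"))  # ['B', 'C']
```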
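The competition between the three motives of the affective state can be sketched as a function of the two salience signals. Treating exogenous and endogenous salience as scalars in [0, 1], and the balance tolerance as a free parameter, are assumptions made here for illustration only.

```python
# Sketch of the affective state as competition between three motives,
# assuming scalar salience signals in [0, 1]. The tolerance is invented.

def affective_state(exo, endo, balance=0.15):
    """Return the prevailing motive given the two salience levels."""
    if abs(exo - endo) <= balance:
        return "social engagement"   # exogenous and endogenous balanced
    if exo > endo:
        return "curiosity"           # exogenous salience prevalent
    return "experimentation"         # endogenous salience prevalent

print(affective_state(0.9, 0.2))   # curiosity
print(affective_state(0.2, 0.9))   # experimentation
print(affective_state(0.5, 0.55))  # social engagement
```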
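The note that action selection is not winner-take-all can be made concrete with a disinhibition sketch: every action whose activation clears a threshold is released, so more than one action may run concurrently. The action names, activation values, and threshold below are invented.

```python
# Sketch of action selection by disinhibition rather than winner-take-all:
# all sufficiently activated actions are released, not just the maximum.

def select_actions(activations, threshold=0.5):
    """Disinhibit every action whose activation reaches the threshold."""
    return [name for name, a in activations.items() if a >= threshold]

acts = {"gaze": 0.8, "reach": 0.7, "locomote": 0.2}
print(select_actions(acts))  # ['gaze', 'reach'] -- two actions disinhibited
```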

A more detailed description of the behaviour of each module and circuit in this architecture will be added in due course (both here on the iCub wiki and in Deliverable D2.1: A Roadmap for the Development of Cognitive Capabilities in Humanoid Robots).


YARP Module Implementation

The iCub cognitive architecture is implemented as a set of iCub YARP modules, as follows.




Figure 2: iCub cognitive architecture implementation with YARP modules.

Links

Cognitive Architecture Modules Documentation



Past Versions of the Cognitive Architecture


Past Versions of the Software Architecture

The following links are to early versions of the iCub "software architecture", a design for an iCub application (i.e. a set of YARP modules) that approximated some of the elementary aspects of the iCub cognitive architecture which now supersedes them. These early versions have all now been deprecated, as has the title "software architecture" in this context.

The term Software Architecture now refers, as it originally did, to the YARP system.


Summer School (VVV '09) Cognitive Architecture Group
