Emergence of Communication in iCub through Sensorimotor and Social Interaction
Despite its anthropomorphic design, iCub has a sensorimotor system different from that of humans, and hence a different view of the world. This project aims to enable iCub to interact with humans through a shared worldview. Specifically, we want to study how iCub can learn and use verbal labels for objects (nouns) and actions (verbs) to facilitate its interaction. We will develop a model that enables the sensorimotor grounding of robotic conceptualization and language use in iCub. Recent findings on mirror and canonical neurons and on affordances will be used to develop and implement a neurocomputational model.
Collaborations and support
This project will be done in connection with the ROSSI project, funded under the Challenge 2 (Cognitive Systems, Interaction, Robotics) call of FP7, and shares its main scientific objectives:
- to provide new neuroscientific/psychological insights into the sensorimotor grounding of human conceptualization and language use, in particular the role of canonical and mirror neurons in the use of nouns and verbs,
- to develop novel approaches to the sensorimotor grounding of robotic conceptualization and language use (more precisely, verbal labeling of objects and actions), based on the insights gained under the first objective and on richer computational/robotic models of the underlying neural mechanisms.
We will collaborate with Dr. Erhan Oztop of the Dept. of Humanoid Robotics and Computational Neuroscience, ATR International, Japan. In this collaboration, we will develop a neurocomputational model that enables the sensorimotor grounding of conceptualization and language use in humanoid robots.
The technical support for the maintenance and development of the iCub will be provided through:
- Dept. of Computer Engineering, which will provide technician and equipment support for the development and maintenance of the electronics and computer hardware related to iCub.
- BILTIR (METU CAD/CAM/Robotics Center), which will provide technician and machine support for the mechanical parts.
Baris Akgun and Tahir Bilal (from METU) and Serge Thill (from the University of Skövde) attended the iCub summer school (VVV8).
Barış worked mostly on kinematics and grasping:
- Implemented a pseudoinverse-Jacobian algorithm for the inverse kinematics problem on the real robot for grasping.
- He then learned the KDL kinematics library and reimplemented his solution using it.
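The core of the pseudoinverse-Jacobian approach can be sketched as follows. This is an illustrative example on a 2-link planar arm, not iCub's actual kinematics or the KDL-based implementation; the link lengths, gain, and helper names are all stand-ins:

```python
import numpy as np

# Illustrative 2-link planar arm; link lengths are arbitrary stand-ins.
L1, L2 = 1.0, 0.8

def forward(q):
    """End-effector position for joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Analytic Jacobian of the forward map."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_pseudoinverse(target, q0, step=0.5, tol=1e-6, max_iter=200):
    """Iterate q += step * pinv(J) @ (target - x) until the error is small."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = target - forward(q)
        if np.linalg.norm(err) < tol:
            break
        q += step * np.linalg.pinv(jacobian(q)) @ err
    return q

# Solve for a reachable target and verify by forward kinematics.
q = ik_pseudoinverse(np.array([1.2, 0.6]), q0=[0.3, 0.3])
print(forward(q))  # close to [1.2, 0.6]
```

The same iterative scheme generalizes to iCub's arm by substituting the robot's forward kinematics and Jacobian (as provided, e.g., by KDL) for the toy functions above.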
Snapshots of iCub grasping a ball can be seen below:
Tahir used the salience module of iCub's attention system for face tracking, which in turn uses OpenCV's face detection algorithm. The overall cycle is as follows. First, the salience module is started with only the face detection filter enabled. Next, at each step, the mean position of the white pixels in the salience module's output image is calculated. Lastly, the yaw and pitch joints of iCub's head are given velocities proportional to the horizontal and vertical distances between the mean of the white pixels and the image center.
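One step of this centroid-based control cycle can be sketched as follows. The `track_step` helper and the gain value are illustrative assumptions, and the binary image here is synthetic rather than the salience module's actual output:

```python
import numpy as np

GAIN = 0.1  # proportional gain (illustrative value, not the tuned one)

def track_step(salience_img):
    """One cycle of the centroid-based face tracker.

    salience_img: 2-D array whose nonzero (white) pixels mark the
    detected face, as produced by the face detection filter.
    Returns (yaw_vel, pitch_vel) proportional to the offset of the
    white-pixel centroid from the image center, or (0.0, 0.0) if no
    face pixels are present.
    """
    rows, cols = np.nonzero(salience_img)
    if rows.size == 0:
        return 0.0, 0.0
    cy, cx = rows.mean(), cols.mean()
    center_y = (salience_img.shape[0] - 1) / 2.0
    center_x = (salience_img.shape[1] - 1) / 2.0
    yaw_vel = GAIN * (cx - center_x)    # horizontal offset drives yaw
    pitch_vel = GAIN * (cy - center_y)  # vertical offset drives pitch
    return yaw_vel, pitch_vel

# Synthetic salience output: a white blob above and right of center.
img = np.zeros((240, 320))
img[100:110, 200:210] = 255
print(track_step(img))
```

On the robot, the returned pair would be sent as velocity commands to the head's yaw and pitch joints each cycle, driving the face toward the image center.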
Serge modified the iCub simulator so that different iCubs, each simulated in its own environment, can nonetheless observe each other. The system works by adding cameras to each simulator at the exact position where the observing iCub's eyes would be located. Images from these cameras and from the observing iCub's actual eyes are then overlaid to produce the final view.
Having each iCub run in its own simulator (potentially on a different computer) overcomes fundamental computational limitations of the simulator itself. Since all iCubs can be independently programmed and controlled, the resulting system is useful for simulating, for example, learning by observation and imitation.
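The overlay step can be illustrated with a simple chroma-key-style composite. The `overlay_views` helper and the uniform-background rule are our assumptions for illustration, not the simulator's actual compositing code:

```python
import numpy as np

def overlay_views(eye_img, demo_img, background):
    """Composite the demonstrator-camera image onto the observer's view.

    Pixels in demo_img that differ from the (assumed uniform)
    background color are taken to show the demonstrator and are
    copied over eye_img. Both images are (H, W, 3) uint8 arrays;
    'background' is an RGB triple.
    """
    mask = np.any(demo_img != np.asarray(background, dtype=demo_img.dtype),
                  axis=-1)
    out = eye_img.copy()
    out[mask] = demo_img[mask]
    return out

# Synthetic example: the observer's eye sees uniform grey, while the
# demonstrator-side camera shows a red patch on a black background.
eye = np.full((4, 4, 3), 50, dtype=np.uint8)
demo = np.zeros((4, 4, 3), dtype=np.uint8)
demo[1, 1] = (255, 0, 0)
merged = overlay_views(eye, demo, (0, 0, 0))
```

In the merged view, the red patch replaces the corresponding grey pixels while the rest of the observer's image is untouched, which is the effect the per-simulator cameras achieve.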
One iCub observing another
Left, center: Two iCubs (an observer and a demonstrator), each simulated in its own environment. Right: The demonstrator made visible to the observer through simulated eyes placed in the demonstrator's environment.