
IROS 2011 Demonstrations

From Wiki for iCub and Friends

The iCub project in general, and the projects represented below in particular, are kindly supported by the Cognitive Systems and Robotics program of the European Commission:

The ROBOTCUB project
The ITALK project
The CHRIS project
The ROBOSKIN project
The EFAA project


Official demonstrations

iCub Learning a Cooperative Game through Interaction with his Human Partner

Authors: Stephane Lallee (2), Ugo Pattacini (1), Lorenzo Natale (1), Giorgio Metta (1), Peter Ford Dominey (2)

(1) Robotics, Brain and Cognitive Sciences dept., Istituto Italiano di Tecnologia, Italy

(2) Stem Cell and Brain Research Institute, Bron, France

Abstract: While robots become increasingly capable of perceiving and acting on their environment, they often do so only by following the orders of their human users. Although this master/slave relation is useful for producing automated behaviors, more fruitful behaviors could emerge from cooperative or competitive interactions between the robot and its human partner. The rules of a game are taught to the robot in the form of a shared plan. The robot then uses its perceptual and motor skills to play the game with its human partner. The game rules are arbitrary, so different games can be learned through natural-interaction programming.
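
As a concrete illustration of the shared-plan idea, here is a minimal Python sketch; it is not the CHRIS implementation. A game is represented as an ordered list of (agent, action, target) steps, learned by recording one demonstration round and then replayed on the robot's own turns. The action names and the execute/wait callbacks are hypothetical.

  # Minimal sketch, assuming a game can be captured as an ordered list of steps.
  from dataclasses import dataclass

  @dataclass
  class Step:
      agent: str    # "robot" or "human"
      action: str   # e.g. "put", "cover", "uncover" (hypothetical action names)
      target: str   # object the action applies to

  def learn_shared_plan(demonstration):
      """Store the observed sequence of steps as the rules of the game."""
      return [Step(*s) for s in demonstration]

  def play(plan, execute_robot_action, wait_for_human_action):
      """Replay the shared plan: act on the robot's steps, wait on the human's."""
      for step in plan:
          if step.agent == "robot":
              execute_robot_action(step.action, step.target)
          else:
              wait_for_human_action(step.action, step.target)

  if __name__ == "__main__":
      demo = [("human", "put", "toy"), ("robot", "cover", "toy"),
              ("human", "uncover", "toy")]
      plan = learn_shared_plan(demo)
      play(plan,
           execute_robot_action=lambda a, t: print("robot:", a, t),
           wait_for_human_action=lambda a, t: print("waiting for human to", a, t))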

Acknowledgments: This demonstration presents research funded by the European Commission under the Robotics and Cognitive Systems, ICT Project CHRIS (FP7-215805).

Aquila - The Cognitive Robotics Toolkit

Authors: Martin Peniak, Anthony F. Morse -- University of Plymouth, UK

Abstract: A live demonstration of Aquila, an open-source cognitive robotics software toolkit for the iCub robot. Aquila has already been used for a number of cognitive robotics experiments, including modeling sensorimotor learning in child development, complex action acquisition, and even teleoperation via a Kinect camera. All of these modules will be demonstrated in an interactive setting, and anyone interested is invited to take part and interact with the robot themselves. We will be on hand to discuss anything from the science behind our experiments to how to use Aquila to create your own experiments.

Acknowledgments: This demonstration presents research funded by the European Commission under the Robotics and Cognitive Systems, ICT Project ITALK (FP7-214668).

Unofficial demonstrations

Grasping and acting on the iCub

Authors: V. Tikhanoff (1), C. Ciliberto (1), U. Pattacini (1), S. Lallee (2), L. Natale (1), G. Metta (1), Peter Ford Dominey (2)

(1) Robotics, Brain and Cognitive Sciences dept., Istituto Italiano di Tecnologia, Italy

(2) Stem Cell and Brain Research Institute, Bron, France

Abstract: Humanoid robots are becoming increasingly similar to humans and, to a certain extent, are able to imitate human behavior. One of the great challenges of interacting with a humanoid robot is developing cognitive capabilities together with an interface that allows humans to collaborate with, communicate with, and teach the robot as naturally and efficiently as they would another human being. The integrated setup is a cognitive system that interacts with humans and explores and derives information about the world. This allows the robot to learn from humans, or to discover and learn autonomously through manipulation.

Skin and compliant control on the iCub

Authors: Andrea Del Prete, Lorenzo Natale, Francesco Nori and Marco Randazzo -- Robotics, Brain and Cognitive Sciences dept., Istituto Italiano di Tecnologia, Italy

Abstract: Recently, the need to build robots capable of interacting with their environment has brought the requirement of force regulation to the fore. In this demonstration we will present an active joint-torque regulation scheme that relies on three different sensors: an inertial sensor, a distributed tactile sensor, and multiple force/torque sensors embedded in the robot structure.
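
To make the torque-regulation idea concrete, here is a minimal Python sketch; it is not the iCub torque-control code, and the gains, the contact policy and the signal names are assumptions made for the example. The measured torque stands in for the estimate obtained from the embedded force/torque and inertial sensors, and the skin flag stands in for the distributed tactile sensor.

  # Minimal sketch of proportional-integral joint-torque regulation (illustrative only).
  def make_torque_regulator(kp, ki, dt):
      state = {"integral": 0.0}
      def step(tau_desired, tau_measured, skin_contact):
          # Illustrative policy: on skin contact, regulate torque to zero to stay compliant.
          ref = 0.0 if skin_contact else tau_desired
          error = ref - tau_measured
          state["integral"] += error * dt
          return kp * error + ki * state["integral"]   # motor command
      return step

  regulator = make_torque_regulator(kp=2.0, ki=0.5, dt=0.01)
  print(regulator(tau_desired=0.3, tau_measured=0.25, skin_contact=False))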

Acknowledgments: This demonstration presents research funded by the European Commission under the Robotics and Cognitive Systems, ICT Projects CHRIS (FP7-215805) and ROBOSKIN (FP7-231500).

Crawling demonstration from the EU project RobotCub

Authors: Marco Randazzo (1), Francesco Nori (1), Cecilia Tapia (1), Auke Ijspeert (2), Sarah Degallier (2), Ludovic Righetti (3)

(1) Robotics, Brain and Cognitive Sciences dept., Istituto Italiano di Tecnologia, Italy

(2) École Polytechnique Fédérale de Lausanne, Switzerland

(3) Computational Learning and Motor Control Laboratory, University of Southern California

Abstract: Nowadays, very few humanoid robots perform complex whole-body motions, and even fewer perform force-controlled movements. Robotic locomotion, in particular, is a notably challenging task, since it involves both controlling the interaction with the surrounding environment and coordinating multiple robot parts in order to achieve stability. In this demonstration we will show how force control can be exploited to make the iCub interact compliantly with the ground, while central pattern generators are used to produce stable, multidimensional rhythmic patterns during crawling.
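
As an illustration of how central pattern generators produce such rhythmic patterns, the following Python sketch integrates a small network of coupled phase oscillators, one per limb; it is not the RobotCub crawling controller, and the coupling topology, gains and phase offsets are illustrative rather than tuned for the robot.

  # Minimal CPG sketch: coupled phase oscillators integrated with Euler steps.
  import math

  def cpg_step(phases, omega, coupling, offsets, k, dt):
      """Advance the oscillator phases by one Euler step."""
      new = []
      for i in range(len(phases)):
          dphi = omega
          for j in range(len(phases)):
              if i != j:
                  dphi += k * coupling[i][j] * math.sin(phases[j] - phases[i] - offsets[i][j])
          new.append(phases[i] + dphi * dt)
      return new

  # Four limbs (left arm, right arm, left leg, right leg); diagonal pairs kept in phase.
  offsets = [[0, math.pi, math.pi, 0],
             [math.pi, 0, 0, math.pi],
             [math.pi, 0, 0, math.pi],
             [0, math.pi, math.pi, 0]]
  coupling = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
  phases = [0.0, 0.1, 0.2, 0.3]
  amplitude, dt = 0.4, 0.01            # rad, s (illustrative)

  for _ in range(1000):
      phases = cpg_step(phases, omega=2 * math.pi, coupling=coupling,
                        offsets=offsets, k=4.0, dt=dt)
  print([amplitude * math.sin(p) for p in phases])   # rhythmic joint setpoints

In a setup like the one described, the oscillator outputs would be fed to the force controller as joint setpoints, keeping the rhythmic pattern generation and the compliant interaction with the ground as separate concerns.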

Motor Babbling for the Autonomous Learning of Hand-Eye Coordination

Authors: Lydia Majure, Beckman Institute, University of Illinois at Urbana-Champaign

Abstract: Human infants are born with very limited knowledge of their own body dynamics. Through a combination of random motor exploration (babbling) and primitive reflexes, a training set is generated for learning a mapping between motor commands and a body-centered representation of space. In a manner similar to human development, motor babbling can be used on a humanoid robot to learn a multi-modal topological map of space for hand-eye coordination and later motor development. Here, this method is used on the iCub robot to reach for objects of interest without any explicit knowledge of its own kinematics.
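
A minimal Python sketch of the babbling idea follows; it is not the demonstrated system. Random joint commands are issued, the hand position observed after each command is recorded, and a nearest-neighbour lookup over the collected pairs is later used to reach toward a target without an analytic kinematic model. The forward_model function is a made-up planar two-link arm standing in for the real robot and its cameras.

  # Minimal motor-babbling sketch (illustrative stand-ins for robot and vision).
  import math, random

  def forward_model(q):
      """Stand-in for robot + camera: hand position of a planar 2-link arm."""
      l1, l2 = 0.15, 0.15              # link lengths in metres (illustrative)
      x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
      y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
      return (x, y)

  def babble(n_samples):
      """Training set: random motor commands paired with observed hand positions."""
      data = []
      for _ in range(n_samples):
          q = (random.uniform(-math.pi / 2, math.pi / 2), random.uniform(0.0, math.pi))
          data.append((q, forward_model(q)))
      return data

  def reach(target, data):
      """Pick the command whose remembered hand position is closest to the target."""
      return min(data, key=lambda s: (s[1][0] - target[0]) ** 2 + (s[1][1] - target[1]) ** 2)[0]

  data = babble(2000)
  q = reach((0.2, 0.1), data)
  print("command:", q, "-> hand:", forward_model(q))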

Acknowledgements: Lincoln Laboratory and NRL

A neural architecture for multi-modal integration and top-down attention in robot Pong: Distributed Adaptive Control

Authors: Zenon Mathews (1), Andre Luzivotto (1), Alex Escuredo (1), Marti Sanchez (1), and Paul F.M.J. Verschure (1) (2)

(1) SPECS, Universitat Pompeu Fabra

(2) ICREA Barcelona, Spain.

Abstract: Everyday interactions between humans involve several real-world objects that must be attended to simultaneously. Moreover, most interactions require multiple coordinated motor actions executed in parallel. Such rich natural interactions demand the ability to share perceptual, cognitive and motor resources in a task- and load-dependent fashion. Even though anticipation of future sensory events is thought to be relevant for sharing limited resources, how it affects perception itself is still the subject of ongoing research. We have developed a psychophysical paradigm demonstrating that humans employ parallel, cognitive-load-dependent anticipations at different levels of perception. This appears to be a useful strategy for sharing limited perceptual and motor resources by exploiting anticipations of future sensory stimuli, and we provide a mathematical model capable of explaining such parallel anticipations.

Here we employ this model on the iCub humanoid robot with the goal of enabling the robot to maintain rich interactions with its human counterparts. The cognitive resources of the robot are easily scalable by adding more nodes to the cluster it uses, whereas its perceptual and motor resources are more limited. Our model enables the robot to share its perceptual and motor resources in a multiple-goal interaction with humans. To this end, we designed the following "hockey player" interaction scenario: the robot takes the role of a hockey goalkeeper who has to deal with multiple balls shot at the goal at the same time. This task involves sharing gaze resources to detect the multiple moving balls, anticipating their trajectories, and triggering multiple motor actions so that as many balls as possible are stopped before they enter the goal. The task can easily be scaled by adding more perceptual, cognitive or motor load, making the interaction scenario even richer. Moreover, since the model is general, the capability of sharing limited resources carries over to other interaction scenarios.
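
To make the anticipation-and-resource-sharing idea concrete, here is a minimal Python sketch; it is not the DAC implementation. Each tracked ball gets a constant-velocity prediction of where and when it will cross the goal line, and a fixed budget of blocking actions is spent on the balls predicted to arrive first. The state representation, the constant-velocity assumption and all numbers are illustrative.

  # Minimal sketch: anticipate goal-line crossings, allocate a limited action budget.
  def predict_crossing(ball, goal_x=0.0):
      """Predict when and where a ball (x, y, vx, vy) reaches the goal line x = goal_x."""
      x, y, vx, vy = ball
      if vx >= 0:
          return None                      # moving away from the goal
      t = (goal_x - x) / vx
      return t, y + vy * t                 # time to arrival, crossing height

  def allocate_actions(balls, max_actions=2):
      """Spend the limited blocking actions on the earliest predicted arrivals."""
      predictions = [(predict_crossing(b), b) for b in balls]
      incoming = [(p, b) for p, b in predictions if p is not None]
      incoming.sort(key=lambda pb: pb[0][0])          # soonest first
      return [{"ball": b, "eta": p[0], "block_at": p[1]} for p, b in incoming[:max_actions]]

  balls = [(1.0, 0.2, -0.5, 0.0), (1.5, -0.1, -1.5, 0.1), (2.0, 0.0, 0.3, 0.0)]
  for plan in allocate_actions(balls):
      print(plan)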

Acknowledgements: European Commission ICT Project EFAA (FP7-270490), eSMC, GOALLEADER, FAA.

Schedule

Time   | Monday Sep 26 | Tuesday Sep 27           | Wednesday Sep 28                 | Thursday Sep 29
9-10   | Set up        | Demo Cooperative Game    |                                  |
10-11  | Set up        | Demo Cooperative Game    | Demo Motor Babbling ...          | Demo Skin & Grasping and Acting
11-12  | Set up        | Demo Cooperative Game    | Demo Neural Architecture ...     | Demo Crawling
12-13  | Set up        | Demo Grasping and Acting |                                  | Demo Crawling
13-14  | Set up        | Demo Aquila              |                                  | Lunch
14-15  | Set up        | Demo Aquila              | Demo Motor Babbling ...          | Dismantle
15-16  | Set up        | Demo Aquila              | Demo Neural Architecture ...     | Dismantle
16-17  | Set up        | Demo Skin                | Demo Skin & Grasping and Acting  | Dismantle
17-18  | Set up        |                          |                                  | Dismantle
18-19  | Set up        |                          |                                  | Dismantle

Rehearsal Schedule (for organizers)
