- Andrea Biasiucci
- Marco Creatura
Starting from Paul's tutorial code, we moved the robot's eyes in the simulator to mimic the focusing process: the eyes try to bring interesting objects to the center of the retina. As a first case we considered every blue object "interesting", based on a very simple color-channel rule. We then placed a screen in front of the robot by modifying the world (thanks to Katrine for the help!) and showed it images captured from a webcam. Finally, we applied a saliency algorithm for face detection and introduced a proportional controller for the eye movements. We also tested the system on the iCub head, and it worked well. The next steps are to improve the robustness of the detection algorithm and perhaps to implement neck-eye coordination for a more natural gaze. The school is doing its job!
- Simple color recognition rule
- Simple motor control of the eyes depending on the position of an interesting object
- Showing a real scene in the simulator to check the robot's motor behavior
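The "simple color recognition rule" above can be sketched in a few lines. This is an illustrative Python version (the actual project would work on YARP image buffers, not Python lists); a pixel counts as interesting when its blue channel clearly dominates the others, and the fixation target is the centroid of those pixels. The `margin` threshold is a made-up value:

```python
def is_blue(r, g, b, margin=30):
    """A pixel is 'interesting' when its blue channel clearly
    dominates both the red and the green channel."""
    return b > r + margin and b > g + margin

def centroid_of_blue(image):
    """Return the (x, y) centroid of blue pixels, or None if there
    are none. `image` is a list of rows of (r, g, b) tuples."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if is_blue(r, g, b):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The centroid gives a single target point that the eye controller can try to bring to the image center.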
- Improve the color detection algorithm with a face detection procedure
- Test the algorithm with the real head
- Try to coordinate neck movements with eye movements
- Improve robustness (and possibly speed) of the face detection algorithm
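One common way to realize the neck-eye coordination mentioned above is to let the eyes correct the retinal error quickly while the neck slowly absorbs the eyes' offset, so the eyes drift back toward their centered position. This is only a sketch of that idea; the gains and the single-axis formulation are assumptions, not the project's actual controller:

```python
def gaze_step(err, eye, neck, k_eye=0.5, k_neck=0.1):
    """One control tick along a single axis (degrees).
    `err` is the remaining gaze error (target - eye - neck).
    The eyes take a fast proportional correction; the neck then
    slowly absorbs part of the eye offset without changing the
    total gaze direction."""
    eye = eye + k_eye * err      # fast eye correction
    shift = k_neck * eye         # portion the neck takes over
    neck = neck + shift
    eye = eye - shift            # total gaze (eye + neck) unchanged
    return eye, neck
```

Iterating this, the combined gaze converges on the target while the eye angle decays toward zero, which is what gives the movement a more natural look.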
- Basic Image Processing - read images from the simulator or real robot, find an object, and move the head motors to fixate that object. The image processing and motor control involved aren't really worthy of those fancy names, but it is a starting point.
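The "move the head motors to fixate" step amounts to a proportional controller: the object's pixel offset from the image center is scaled by a gain into pan/tilt commands. A minimal sketch, where the gain and the sign conventions (positive pan looks right, positive tilt looks up) are assumptions:

```python
def p_controller(target_px, image_size, kp=0.1):
    """Map the target's pixel offset from the image center to
    (pan, tilt) eye commands. `target_px` is (x, y) in pixels,
    `image_size` is (width, height)."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    err_x = target_px[0] - cx      # positive: target is to the right
    err_y = cy - target_px[1]      # image y grows downward
    return kp * err_x, kp * err_y
```

When the target sits at the image center the commands are zero, so the loop (grab image, find object, send commands) settles once the object is fixated.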