VVV08 Visual Attentionators
The port you want to connect to in order to read the estimated position of the ball:<br/> /PF3DTracker/dataOut
Our group is composed of: Tahir, Nicola, Harold, Matteo, Katrin and Jonas.
We are working together on several goals, each listed with the names of the members most involved in it:
Goal #1: we are testing the attention system in the robot simulator, connecting the saliency, egoSphere, controlGaze, attention and attractionSelection modules. We would like to render images grabbed by a real camera into the simulated world, in order to test the whole attention system (Katrin, Harold).<br/> Goal achieved: it is now possible to stream images onto a simulated screen, so the simulated iCub can watch TV! Special thanks to Paul and Vadim.
Goal #2: we are testing a color-based tracking algorithm on the real robot (Nicola).
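The idea behind such a color-based tracker can be sketched as follows; the HSV ranges, minimum blob size and function names here are hypothetical illustrations, not taken from the actual module:

```python
import numpy as np

def track_by_color(frame_hsv, hue_lo, hue_hi, sat_min=0.3):
    """Return the centroid (row, col) of pixels whose hue falls in the
    target range, or None if too few pixels match. frame_hsv is an
    (H, W, 3) array with hue in [0, 1) and saturation/value in [0, 1]."""
    hue, sat = frame_hsv[..., 0], frame_hsv[..., 1]
    mask = (hue >= hue_lo) & (hue <= hue_hi) & (sat >= sat_min)
    if mask.sum() < 10:  # not enough evidence for a target
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# toy frame: a saturated red-ish blob on a gray (zero-saturation) background
frame = np.zeros((120, 160, 3))
frame[40:60, 70:90, 0] = 0.02  # blob hue, close to red
frame[40:60, 70:90, 1] = 0.9   # blob is strongly saturated
print(track_by_color(frame, 0.0, 0.05))  # centroid near (49.5, 79.5)
```

Feeding the previous centroid back in as a search-window center would turn this single-frame detector into a tracker.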
Goal #3: we are trying to develop a new version of the face-detection-based saliency module, taking into account information that is currently discarded by the OpenCV face detector, i.e. the stage of the cascade of classifiers at which a candidate window is rejected as "not a face" (Matteo).
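A toy illustration of the idea: instead of the detector's binary face / not-a-face answer, the stage at which the cascade rejects a window becomes a graded "faceness" score. The cascade below is a made-up stand-in (real OpenCV stages threshold sums of Haar-feature responses), so all names and thresholds are hypothetical:

```python
def rejection_stage(window, stages):
    """Run a window through a cascade; return the index of the stage
    that rejects it, or len(stages) if it passes all (a face)."""
    for i, (feature, threshold) in enumerate(stages):
        if feature(window) < threshold:
            return i
    return len(stages)

def faceness(window, stages):
    """Map the rejection stage to a score in [0, 1]: the deeper a
    window survives into the cascade, the more face-like it is."""
    return rejection_stage(window, stages) / len(stages)

# hypothetical 3-stage cascade over a scalar "window" response
stages = [(lambda w: w, 0.2), (lambda w: w, 0.5), (lambda w: w, 0.8)]
print(faceness(0.1, stages))  # rejected at stage 0 -> 0.0
print(faceness(0.6, stages))  # rejected at stage 2 -> about 0.67
print(faceness(0.9, stages))  # passes every stage  -> 1.0
```

Such a per-window score could then be splatted into the saliency map instead of only the windows that pass the full cascade.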
Goal #4: we are interested in detecting the 3D position of objects, in order to provide data for grasping (Tahir).
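For a calibrated, rectified stereo pair, the 3D position follows from the disparity by standard triangulation. The sketch below uses made-up camera parameters, not the real iCub calibration:

```python
def pixel_to_3d(u, v, disparity, focal_px, baseline_m, cx, cy):
    """Triangulate a 3D point (camera frame, meters) from a pixel in
    the left image and its stereo disparity, assuming a rectified,
    parallel stereo pair with equal focal lengths."""
    z = focal_px * baseline_m / disparity  # depth along the optical axis
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# illustrative numbers only: f = 400 px, baseline = 0.07 m,
# principal point at (160, 120), disparity of 14 px
x, y, z = pixel_to_3d(200, 120, 14.0, 400.0, 0.07, 160.0, 120.0)
print(round(x, 3), round(y, 3), round(z, 3))  # 0.2 0.0 2.0
```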
Goal #5: we would like to port/merge some existing code to detect and track balls in 3D, for grasping purposes (Katrin, Harold, Nicola, Matteo).<br/> ToDo:<br/>
- try to make the tracker faster.
- check the robustness against partial occlusions (maybe include some pictures of the hands of the iCub in the color model).<br/>
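The PF3DTracker is, as its name suggests, built around a particle filter. The generic predict/weight/resample loop can be sketched as follows; this is only the textbook loop with a made-up Gaussian measurement model, not the module's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, measurement, motion_std=0.02, meas_std=0.05):
    """One predict/weight/resample cycle of a generic particle filter
    over 3D ball positions (an N x 3 array, meters)."""
    # predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # weight: Gaussian likelihood of a (hypothetical) 3D measurement
    d2 = ((particles - measurement) ** 2).sum(axis=1)
    weights = np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # resample: draw particles proportionally to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# toy run: a ball sitting at (0.1, 0.0, 0.3) in front of the cameras
true_pos = np.array([0.1, 0.0, 0.3])
particles = rng.uniform(-0.5, 0.5, (500, 3))
weights = np.full(500, 1.0 / 500)
for _ in range(20):
    particles, weights = pf_step(particles, weights, true_pos)
estimate = particles.mean(axis=0)
print(np.round(estimate, 2))  # close to the true position
```

On the speed point of the ToDo above, the particle count is the obvious knob: fewer particles run faster at the cost of a noisier estimate.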
Goal #6: we would like to implement a cylindrical projection wall in the simulator, use it to project example scenes, and test the accuracy and performance of the attention system (Jonas, Vadim).
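A quick sketch of why a cylindrical wall is convenient for such tests: if the cylinder radius equals the camera's focal length, equal distances along the wall correspond to equal viewing angles, so a projected stimulus keeps the same angular size wherever it appears. The parameters below are illustrative, not simulator values:

```python
import math

def plane_to_cylinder(u, width, focal_px):
    """Map a horizontal pixel coordinate u (pinhole image of the given
    width, principal point at the center) to the horizontal coordinate
    on a cylindrical screen whose radius equals the focal length."""
    cx = width / 2.0
    theta = math.atan2(u - cx, focal_px)  # viewing angle of this column
    return focal_px * theta               # arc length along the cylinder

# with f = 320 px and a 640 px wide image, the image edge maps to
# f * atan(1), i.e. about 251.3 cylinder units instead of 320
print(round(plane_to_cylinder(640, 640, 320.0), 1))  # 251.3
print(round(plane_to_cylinder(320, 640, 320.0), 1))  # 0.0
```

The compression toward the edges is exactly the perspective stretching that a flat wall would introduce, which is what the cylindrical wall avoids.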