== Pose estimation for textured and textureless objects ==
 
Dr. Manolis Lourakis
 
Foundation for Research and Technology - Hellas, Greece
 
Determining the 3D pose (i.e., position and orientation) of objects from images is a common requirement for vision systems applied to areas such as robotic manipulation, tracking, augmented reality, tangible interfaces, etc. Such systems should operate reliably in dynamic and unknown environments, delivering accurate object pose estimates despite any background clutter or variations in the appearance of objects due to changes in viewing position, illumination and occlusions. The most effective strategy for dealing with such challenges is to proceed according to the model-based paradigm, which involves building 3D models of objects and then determining object poses by fitting their models to images.
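
As a concrete illustration of this model-to-image fitting step for a textured object, the sketch below (not code from the talk) uses OpenCV's RANSAC-based PnP solver to recover a pose from 2D-3D point correspondences; the model points, camera intrinsics and correspondences are synthetic placeholders.

<syntaxhighlight lang="python">
# Minimal sketch (not from the talk): recover an object's pose from 2D-3D
# correspondences with OpenCV's RANSAC-based PnP solver. In a real system
# the correspondences come from matching local image features against the
# object model; here they are synthesized from a known pose so the example
# is self-contained and runnable.
import numpy as np
import cv2

# Sparse 3D object model points (object coordinate frame, metres).
object_points = np.array([
    [0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.1, 0.0], [0.0, 0.1, 0.0],
    [0.05, 0.05, 0.05], [0.0, 0.05, 0.08], [0.1, 0.0, 0.05], [0.0, 0.1, 0.05],
], dtype=np.float64)

# Pinhole intrinsics of an assumed calibrated camera, no lens distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Ground-truth pose used only to synthesize the image measurements.
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.02, -0.03, 0.6])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)

# Estimate the pose; RANSAC makes the fit robust to feature mismatches.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist, reprojectionError=3.0)

if ok:
    R, _ = cv2.Rodrigues(rvec)      # rotation matrix from axis-angle vector
    print("estimated R =\n", R)
    print("estimated t =", tvec.ravel())
</syntaxhighlight>

A robust estimator is used in the sketch because feature matching against cluttered backgrounds typically yields some incorrect correspondences.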
 
The talk will describe techniques for estimating the pose of rigid objects from intensity and depth images. Ordinary intensity images are suitable for use when the objects of interest are textured. Object models in this case consist of sparse sets of 3D points and their associated local image features. When object surfaces lack texture, depth images provide information about object structure and 3D meshes or point clouds are used as object models.
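
For the textureless case, the fitting step becomes a geometric registration between the object's point cloud model and the observed scene cloud. Below is a hedged sketch of classical point-to-point ICP written with NumPy and SciPy on synthetic data; it illustrates the generic registration idea rather than the specific methods covered in the talk, and a real pipeline would additionally segment the object and supply a coarse initial pose.

<syntaxhighlight lang="python">
# Minimal sketch (not from the talk): align a textureless object's point
# cloud model to a depth-derived scene cloud with point-to-point ICP.
# All data here are synthetic.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(model, scene, iters=50):
    """Repeatedly match model points to nearest scene points and refit."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(scene)
    current = model.copy()
    for _ in range(iters):
        _, idx = tree.query(current)            # nearest-neighbour matches
        R, t = best_rigid_transform(current, scene[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic model cloud and a rigidly displaced copy standing in for the scene.
rng = np.random.default_rng(0)
model = rng.uniform(-0.05, 0.05, size=(500, 3))
angle = np.radians(10.0)
R_gt = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                 [np.sin(angle),  np.cos(angle), 0.0],
                 [0.0,            0.0,           1.0]])
scene = model @ R_gt.T + np.array([0.02, -0.01, 0.3])

R_est, t_est = icp(model, scene)
print("estimated R =\n", R_est)
print("estimated t =", t_est)
</syntaxhighlight>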
 