Multimedia  

 

Volume 22, Issue 10/11 - Publication Date: 1 October 2003
 
Improving Image-Based Visual Servoing with Three-Dimensional Features
 
E. Cervera and A.P. del Pobil, Robotic Intelligence Laboratory, Jaume-I University, 12071 Castelló, Spain; F. Berry and P. Martinet, LASMEA - GRAVIR, Blaise Pascal University of Clermont-Ferrand, 63177 Aubière Cedex, France
 

Neither of the classical visual servoing approaches, position-based and image-based, is completely satisfactory. In position-based visual servoing the trajectory of the robot is well defined, but the approach suffers mainly from the image features leaving the visual field of the cameras. Image-based visual servoing, on the other hand, is generally satisfactory and robust in the presence of camera and hand-eye calibration errors. However, in some cases singularities and local minima may arise, and the robot may reach its joint limits. This paper is a step towards a synthesis of both approaches that retains their particular advantages, i.e., the trajectory of the camera is predictable and the image features remain in the field of view of the camera. The basis is the introduction of three-dimensional information into the feature vector: point depth and object pose produce useful behavior in the control of the camera. Using the task-function approach, we demonstrate the relationship between the velocity screw of the camera and the current and desired poses of the object in the camera frame. The camera is assumed to be calibrated, at least coarsely. Experimental results on real robotic platforms illustrate the presented approach.
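
As a rough illustration of the idea of augmenting the image features with three-dimensional information, the sketch below applies the classical visual servoing law v = -lambda * L^+ (s - s*) to a feature vector that stacks, for each point, its normalized image coordinates and its depth in the camera frame. This is a minimal, hypothetical example for intuition only: the function names, the gain, and the choice of depth as the extra feature are assumptions, not the paper's exact formulation, which uses the task-function formalism and also considers the object pose as a feature.

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z.

    Relates the time derivative of (x, y) to the camera velocity screw
    v = (vx, vy, vz, wx, wy, wz) expressed in the camera frame.
    """
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,        -(1.0 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z,  1.0 + y * y,  -x * y,         -x],
    ])

def interaction_matrix_depth(x, y, Z):
    """Interaction matrix row of the depth Z itself, used here as the
    additional three-dimensional feature (an illustrative choice)."""
    return np.array([[0.0, 0.0, -1.0, -y * Z, x * Z, 0.0]])

def ibvs_control(current, desired, lam=0.5):
    """Classical control law v = -lambda * pinv(L) * (s - s*), with an
    extended feature vector s = (x, y, Z) for each observed point."""
    L_rows, error = [], []
    for (x, y, Z), (xd, yd, Zd) in zip(current, desired):
        L_rows.append(interaction_matrix_point(x, y, Z))
        L_rows.append(interaction_matrix_depth(x, y, Z))
        error.extend([x - xd, y - yd, Z - Zd])
    L = np.vstack(L_rows)            # (3N x 6) stacked interaction matrix
    e = np.array(error)              # (3N,) feature error s - s*
    return -lam * np.linalg.pinv(L) @ e  # (6,) camera velocity screw

# Example: four points observed at the current and desired configurations.
current = [(0.1, 0.1, 1.0), (-0.1, 0.1, 1.0), (-0.1, -0.1, 1.0), (0.1, -0.1, 1.0)]
desired = [(0.2, 0.2, 0.8), (-0.2, 0.2, 0.8), (-0.2, -0.2, 0.8), (0.2, -0.2, 0.8)]
print(ibvs_control(current, desired))
```

In this sketch the depth rows couple the error directly to the translation along and rotation about the camera axes, which is the sense in which adding three-dimensional features shapes the camera motion; the paper's own derivation of this coupling goes through the task-function approach.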

 