Multimedia  

 

Volume 23 Issue 12 - Publication Date: 1 December 2004
 
Motion Estimation from Image and Inertial Measurements
 
Dennis Strelow and Sanjiv Singh, Carnegie Mellon University, Pittsburgh, PA 15213, USA
 
Cameras and inertial sensors are each good candidates for autonomous vehicle navigation, modeling from video, and other applications that require six-degree-of-freedom motion estimation. However, these sensors are also good candidates to be deployed together, since each can be used to resolve the ambiguities in estimated motion that result from using the other modality alone. In this paper, we consider the specific problem of estimating sensor motion and other unknowns from image, gyro, and accelerometer measurements, in environments without known fiducials. This paper targets applications where external position references such as global positioning are not available, and focuses on the use of small and inexpensive inertial sensors, for cases where weight and cost requirements preclude the use of precision inertial navigation systems.
We present two algorithms for estimating sensor motion from image and inertial measurements. The first algorithm is a batch method, which produces estimates of the sensor motion, scene structure, and other unknowns using measurements from the entire observation sequence simultaneously. The second algorithm recovers sensor motion, scene structure, and other parameters recursively, and is suitable for use with long or ‘infinite’ sequences, in which no feature is always visible.
We evaluate the accuracy of the algorithms and their sensitivity to their estimation parameters using a sequence of four experiments. These experiments focus on cases where estimates from image or inertial measurements alone are poor, on the relative advantage of using inertial measurements and omnidirectional images, and on long sequences in which the percentage of the image sequence in which individual features are visible is low.
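For readers who want a concrete picture of what the batch method optimizes, the sketch below sets up a least-squares cost that stacks image reprojection residuals with simple gyro and accelerometer residuals over the whole observation sequence. It is only an illustration of the general idea, not the authors' implementation: the finite-difference inertial terms, the unit-focal-length camera model, the variable and function names, the assumed gravity vector, and the use of scipy.optimize.least_squares are all simplifying assumptions introduced here.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R


def unpack(x, n_cam, n_pts):
    """Split the flat parameter vector into per-image poses and 3-D points."""
    rvecs = x[:3 * n_cam].reshape(n_cam, 3)            # camera-to-world rotation vectors
    tvecs = x[3 * n_cam:6 * n_cam].reshape(n_cam, 3)   # camera positions in the world frame
    pts = x[6 * n_cam:].reshape(n_pts, 3)              # feature positions in the world frame
    return rvecs, tvecs, pts


def residuals(x, img_obs, gyro, accel, dt, n_cam, n_pts, g=np.array([0.0, 0.0, -9.8])):
    """Stacked image and inertial residuals for one whole observation sequence."""
    rvecs, tvecs, pts = unpack(x, n_cam, n_pts)
    rots = R.from_rotvec(rvecs)
    res = []

    # Image term: reprojection error of every tracked feature observation,
    # for an idealized unit-focal-length perspective camera.
    for i, j, u, v in img_obs:                  # (image index, feature index, image coords)
        p_cam = rots[int(i)].inv().apply(pts[int(j)] - tvecs[int(i)])
        res.extend([u - p_cam[0] / p_cam[2], v - p_cam[1] / p_cam[2]])

    # Gyro term: the rotation between consecutive poses should match the measured
    # body-frame angular velocity integrated over one frame interval.
    for i in range(n_cam - 1):
        rel = (rots[i].inv() * rots[i + 1]).as_rotvec()
        res.extend(rel - gyro[i] * dt)

    # Accelerometer term: the second difference of camera positions should match
    # the measured specific force rotated into the world frame, plus gravity.
    for i in range(1, n_cam - 1):
        accel_pred = (tvecs[i + 1] - 2.0 * tvecs[i] + tvecs[i - 1]) / dt**2
        res.extend(accel_pred - (rots[i].apply(accel[i]) + g))

    return np.asarray(res)


# The batch estimate is the minimizer of the stacked residuals, refined from a
# rough initial guess x0 of all poses and points (hypothetical example call):
# x_hat = least_squares(residuals, x0,
#                       args=(img_obs, gyro, accel, dt, n_cam, n_pts)).x
```

In this sketch the whole sequence is optimized jointly, which is what distinguishes the batch method from the recursive algorithm; the recursive variant instead updates the same kinds of unknowns image by image, so it can run on long sequences in which no feature is always visible.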
 
 
Extension 1 (Video). Example One: A video showing the 152-image sequence used in the perspective arm experiment, with tracked features overlaid. (3.0 MB)
Extension 2 (Data). Example Two: A VRML file showing the estimates resulting from the batch image and inertial method in the perspective arm experiment. The camera coordinate system at the time of each image and the estimated three-dimensional position of each tracked image point feature are included. (65 KB)
Extension 3 (Data). Example Three: Observations (e.g., tracking data) and estimates from the perspective arm experiment. (107 KB)
Extension 4 (Video). Example Four: A video showing the 152-image sequence used in the first omnidirectional arm experiment, with tracked features overlaid. (3.0 MB)
Extension 5 (Data). Example Five: A VRML file showing the estimates resulting from the batch image and inertial method in the first omnidirectional arm experiment. The camera coordinate system at the time of each image and the estimated three-dimensional position of each tracked image point feature are included. (64 KB)
Extension 6 (Data). Example Six: Observations (e.g., tracking data) and estimates from the first omnidirectional arm experiment. (171 KB)
Extension 7 (Video). Example Seven: A video showing the 152-image sequence used in the second omnidirectional arm experiment, with tracked features overlaid. (7.4 MB)
Extension 8 (Data). Example Eight: A VRML file showing the estimates resulting from the batch image and inertial method in the second omnidirectional arm experiment. The camera coordinate system at the time of each image and the estimated three-dimensional position of each tracked image point feature are included. (64 KB)
Extension 9 (Data). Example Nine: Observations (e.g., tracking data) and estimates from the second omnidirectional arm experiment. (306 KB)
Extension 10 (Video). Example Ten: A video showing the first 200 images of the 1430-image sequence used in the perspective crane experiment, with tracked features overlaid. A movie showing the full 1430-image sequence with tracking is available at http://www.cs.cmu.edu/~dstrelow/ijrr. (18.2 MB)
Extension 11 (Data). Example Eleven: A VRML file showing the estimates resulting from the online image and inertial method in the perspective crane experiment. The camera coordinate system at the time of each image and the estimated three-dimensional position of each tracked image point feature are included. (0.6 MB)
Extension 12 (Data). Example Twelve: Observations (e.g., tracking data) and estimates from the perspective crane experiment. (4.4 MB)
Extension 13 (Video). Example Thirteen: A video showing the first 150 images of the 1401-image sequence used in the perspective rover experiment, with tracked features overlaid. A movie showing the full 1401-image sequence with tracking is available at http://www.cs.cmu.edu/~dstrelow/ijrr. (25.4 MB)
Extension 14 (Data). Example Fourteen: A VRML file showing the estimates resulting from the online image-only method in the perspective rover experiment. The camera coordinate system at the time of each image and the estimated three-dimensional positions of tracked image point features estimated to be near the camera path are included. (1.2 MB)
Extension 15 (Data). Example Fifteen: Observations (e.g., tracking data) and estimates from the perspective rover experiment. (12.0 MB)
 