Multimedia  

 

Volume 25 Issue 12 - Publication Date: 1 December 2006
 
A Generative Model of Terrain for Autonomous Navigation in Vegetation
 
C. Wellington, Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA; A. Courville, Department of Computer Science and Operations Research, University of Montreal, Montreal, Quebec H3C 3J7, USA; and A. Stentz, Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
 
Current approaches to off-road autonomous navigation are often limited in their ability to build a terrain model from sensor data. Available sensors make very indirect measurements of quantities of interest such as the supporting ground height and the location of obstacles, especially in domains where vegetation may hide the ground surface or partially obscure obstacles. A generative, probabilistic terrain model is introduced that exploits natural structure found in off-road environments to constrain the problem and use ambiguous sensor data more effectively. The model includes two Markov random fields that encode the assumptions that ground heights vary smoothly and that terrain classes tend to cluster. The model also includes a latent variable that encodes the assumption that vegetation of a single type has a similar height. The model parameters can be trained by simply driving through representative terrain. Results from a number of challenging test scenarios in an agricultural domain reveal that exploiting the 3D structure inherent in outdoor domains significantly improves ground estimates and obstacle detection accuracy, and allows the system to infer the supporting ground surface even when it is hidden under dense vegetation.
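The smoothness assumption in the abstract can be illustrated with a toy sketch: a 1-D quadratic MRF over ground heights, minimized by closed-form coordinate (Gauss-Seidel) updates. This is not the paper's model, which is 2-D, fully probabilistic, and couples ground heights with terrain classes and a latent vegetation height; the function name, weights, and data below are invented for illustration. Cells hidden by vegetation have no direct ground measurement, so the smoothness prior alone fills them in.

```python
# Minimal sketch (not the authors' implementation): MAP inference of a 1-D
# ground-height profile under a pairwise smoothness MRF. Observed cells pull
# toward their measurement; hidden cells (obs = None) are interpolated by
# the smoothness prior. All weights here are illustrative.

def infer_ground(obs, w_data=1.0, w_smooth=4.0, iters=200):
    """obs[i] is a measured ground height, or None where vegetation hides it."""
    n = len(obs)
    z = [o if o is not None else 0.0 for o in obs]
    for _ in range(iters):
        for i in range(n):
            num, den = 0.0, 0.0
            if obs[i] is not None:            # data term
                num += w_data * obs[i]
                den += w_data
            if i > 0:                          # smoothness with left neighbor
                num += w_smooth * z[i - 1]
                den += w_smooth
            if i < n - 1:                      # smoothness with right neighbor
                num += w_smooth * z[i + 1]
                den += w_smooth
            z[i] = num / den                   # closed-form coordinate update
    return z

# Ground observed on either side of a vegetated stretch (None = hidden);
# the hidden cells come out roughly linearly interpolated between 0.1 and 0.4.
z = infer_ground([0.0, 0.1, None, None, None, 0.4, 0.5])
```

At convergence each hidden cell equals the average of its neighbors, so the gap is filled by linear interpolation between the observed heights on either side, mirroring how the paper's model infers the supporting surface under dense vegetation.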
 
 
Extension: 1
Type: Video (MP4, 13.2 MB)
Description: Person in tall, dense vegetation, showing sensor data and spatial model output (see Section 9.2 in the paper).
This movie shows the input data and spatial model output for a test case where the tractor drives through tall, dense vegetation next to a person hidden in camouflage and a small dirt mound. The spatial model output shown was computed in real time. The movie repeats the same test sequence several times, with images from the tractor on the left and, on each pass, one of the following types of information on the right:
- Lidar data tagged with color data
- Lidar data tagged with infrared data, showing the hot person and dirt mound (the data is smeared somewhat due to misregistration)
- Spatial model output, showing classification (red = obstacle, gray = ground, light green = tall yellow weeds, dark green = low green vegetation) and inferred ground height under the weeds
- Spatial model output from a different point of view to see the entire output
- Spatial model output comparing predictions of vegetation height with predictions of ground height
The person and the dirt mound both appear hot in the infrared data, and the algorithm initially classifies the dirt mound as an obstacle on top of the ground, but inference in the model then finds the more likely explanation of a rise in the ground surface. The system also infers the ground height underneath the dense vegetation, which allows it to correctly classify the person as an obstacle.
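The label flip described above, where an initial "obstacle" classification gives way to a more likely joint explanation, can be sketched with the paper's clustering assumption in miniature: a Potts-style class MRF solved by iterated conditional modes (ICM). This is an illustrative stand-in, not the paper's actual inference; the cost values and the two-class setup are made up for the example.

```python
# Illustrative sketch (not the paper's model): a Potts prior encodes that
# terrain classes tend to cluster. ICM shows how a weakly supported isolated
# "obstacle" label flips to agree with its ground neighbors, while a strongly
# supported one (like the hot person) survives. Costs are invented.

GROUND, OBSTACLE = 0, 1

def icm(unary, w_pair=1.5, iters=10):
    """unary[i][c] = cost of class c at cell i; returns locally optimal labels."""
    labels = [min((GROUND, OBSTACLE), key=lambda c: u[c]) for u in unary]
    for _ in range(iters):
        for i in range(len(labels)):
            best, best_cost = labels[i], float("inf")
            for c in (GROUND, OBSTACLE):
                cost = unary[i][c]
                for j in (i - 1, i + 1):      # Potts penalty for disagreement
                    if 0 <= j < len(labels) and labels[j] != c:
                        cost += w_pair
                if cost < best_cost:
                    best, best_cost = c, cost
            labels[i] = best
    return labels

# Cell 2 weakly favors "obstacle" (e.g. a warm dirt mound); its neighbors are
# ground, so the clustering prior flips it to ground.
weak = [[0.0, 3.0], [0.0, 3.0], [1.0, 0.0], [0.0, 3.0], [0.0, 3.0]]
# Cell 2 strongly favors "obstacle" (e.g. a person); the label survives.
strong = [[0.0, 3.0], [0.0, 3.0], [5.0, 0.0], [0.0, 3.0], [0.0, 3.0]]
```

Running `icm(weak)` relabels the middle cell as ground, while `icm(strong)` keeps it as an obstacle: the same prior that smooths away weak evidence preserves labels with strong sensor support.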
 