Volume 22, Issue 10/11 – Publication Date: 1 October 2003
Adapting the Sample Size in Particle Filters Through KLD-Sampling
Dieter Fox Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA

Over the past few years, particle filters have been applied with great success to a variety of state estimation problems. In this paper we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling is due to the fact that we measure the approximation error by the Kullback–Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. Both the implementation and computational overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.
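The bound underlying KLD-sampling can be sketched as follows. With the state space partitioned into bins, let k be the number of bins with support; the number of samples n needed so that, with probability 1 − δ, the KL distance between the sample-based maximum-likelihood estimate and the true distribution stays below ε is obtained from the chi-square quantile via the Wilson–Hilferty approximation. The sketch below is illustrative only; the function name and the toy resampling loop are our own, and a real implementation would track occupied bins over a grid of the robot's pose space.

```python
import math
from statistics import NormalDist


def kld_sample_bound(k, epsilon=0.05, delta=0.01):
    """Number of samples needed so that, with probability 1 - delta,
    the KL distance between the sampled and true distributions is
    at most epsilon, given k bins with support (k >= 2).

    Uses the Wilson-Hilferty approximation of the chi-square quantile:
        n = (k - 1) / (2 * epsilon)
            * (1 - 2/(9(k-1)) + sqrt(2/(9(k-1))) * z_{1-delta})^3
    """
    if k < 2:
        return 1  # a single occupied bin needs no further refinement
    z = NormalDist().inv_cdf(1.0 - delta)  # upper 1-delta normal quantile
    a = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - a + math.sqrt(a) * z) ** 3
    return math.ceil(n)


# Toy illustration: draw samples from a hypothetical proposal and stop
# once the number drawn reaches the bound for the bins occupied so far.
def adaptive_sample_count(draw_bin, epsilon=0.05, delta=0.01, max_n=40000):
    """draw_bin() returns the (hashable) bin of one newly drawn sample."""
    occupied, n = set(), 0
    while n < max_n:
        occupied.add(draw_bin())
        n += 1
        if n >= kld_sample_bound(len(occupied), epsilon, delta):
            break
    return n
```

The behavior matches the abstract: a density concentrated in few bins (low uncertainty) terminates with few samples, while a widely spread density (high uncertainty, e.g. during global localization) keeps occupying new bins and so keeps raising the bound.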

Extension 1 (Video): Global localization using KLD-sampling. Shown is a sequence of sample sets during global localization of a Pioneer ZDX robot using the robot's eight sonar sensors. The number of samples is shown in the lower left corner of the animation (the maximum was set to 40,000). The timing of the animation is proportional to the approximate update times of the particle filter (real updates are more than two times faster). (10.8 MB)
Extension 2 (Video): Same as Extension 1, but this time the robot's laser range-finder is used for localization. The parameters for adaptive sampling are the same as for the sonar sensor data in Extension 1. (3.3 MB)