This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter)
framework. The method fuses depth data from a LIDAR and visual data from a monocular camera
to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. The estimation
framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features'
3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks,
the proposed method discards feature estimates from the extended state vector once they have gone unobserved
for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for
online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the
EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to
produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford
Campus vision and LIDAR dataset. The results are compared with the dataset's ground truth, and the
estimation error is ~1.9% of the path length.
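The state-bounding step is described only at a high level in the abstract. Below is a minimal NumPy sketch of one plausible way to drop stale feature blocks from an EKF state vector and covariance; the block sizes, the missed-observation bookkeeping, and the threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

POSE_DIM = 6   # size of the vehicle pose block (assumed minimal 6D pose)
FEAT_DIM = 3   # size of each 3D feature-position block

def prune_features(x, P, missed_counts, max_missed=5):
    """Drop features unobserved for more than max_missed steps from the
    state vector x and covariance P, keeping the extended state small
    enough for online computation."""
    keep = [i for i, m in enumerate(missed_counts) if m <= max_missed]
    idx = list(range(POSE_DIM))                     # always keep the pose
    for i in keep:
        start = POSE_DIM + i * FEAT_DIM
        idx.extend(range(start, start + FEAT_DIM))  # keep live features
    idx = np.asarray(idx)
    return x[idx], P[np.ix_(idx, idx)], [missed_counts[i] for i in keep]
```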
This paper presents a pose estimation method based on a 3D camera, the SwissRanger SR4000. The proposed method
estimates the camera's ego-motion by using the intensity and range data produced by the camera. It detects SIFT (Scale-
Invariant Feature Transform) features in one intensity image and matches them to those in the next intensity image. The
resulting pairs of 3D data points are used to compute the least-squares rotation and translation, from which the
attitude and position changes between the two image frames are determined. Because the method matches feature
descriptors, it works well with large image motion between frames and requires no spatial correlation search.
Due to the SR4000's consistent accuracy in depth measurement, the proposed method may achieve
better pose estimation accuracy than a stereovision-based approach. Another advantage of the proposed method is that
the range data of the SR4000 are complete and can therefore also be used for obstacle avoidance/negotiation. This makes it
possible to navigate a mobile robot with a single perception sensor. In this paper, we validate the
pose estimation method and characterize its pose estimation performance.
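The abstract states that the matched 3D point pairs yield the least-squares rotation and translation between frames. The SVD-based (Kabsch/Arun) solution sketched below is the standard way to solve this problem; whether the paper uses this exact formulation is an assumption.

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Least-squares R, t such that R @ p_i + t ~ q_i for matched 3D
    points. P and Q are (N, 3) arrays of SIFT-matched points from two
    consecutive frames."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

The attitude change then follows from R (e.g., via Euler angle extraction) and the position change from t.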
KEYWORDS: Laser range finders, Distance measurement, Mobile robots, Sensors, Calibration, Scanners, Reflectivity, Surface properties, RGB color model, Data modeling
This paper presents a characterization study of the Hokuyo URG-04LX scanning laser rangefinder (LRF). The Hokuyo
LRF is similar in function to the Sick LRF, which has been the de facto standard range sensor for mobile robot obstacle
avoidance and mapping applications for the last decade. The drawbacks of the Sick LRF are its relatively large size, weight,
and power consumption, which limit its use to relatively large mobile robots. The Hokuyo LRF is substantially
smaller and lighter, consumes less power, and is therefore more suitable for small mobile robots. The question is
whether it performs as well as the Sick LRF in typical mobile robot applications.
In 2002, two of the authors of the present paper published a characterization study of the Sick LRF. For the present
paper we used the exact same test apparatus and test procedures as we did in the 2002 paper, but this time to characterize
the Hokuyo LRF. As a result, we are in the unique position of being able to provide not only a detailed characterization
study of the Hokuyo LRF, but also to compare the Hokuyo LRF with the Sick LRF under identical test conditions.
The tested characteristics include sensitivity to a variety of target surface properties and incidence angles, both of
which can affect sensing performance. We also discuss the performance of the Hokuyo LRF with regard to the
mixed pixels problem associated with LRFs. Lastly, the present paper provides a calibration model for improving the
accuracy of the Hokuyo LRF.
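The abstract announces a calibration model without giving its form. As a loose illustration of how such a model could be fitted from reference measurements, the sketch below fits a simple linear correction; the paper's actual model may be quite different.

```python
import numpy as np

def fit_linear_calibration(measured, reference):
    """Fit corrected = a * measured + b by least squares from paired
    sensor readings and reference ranges. The linear form is an assumed
    illustration, not the model derived in the paper."""
    A = np.column_stack([measured, np.ones_like(measured)])
    (a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return a, b
```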
This paper presents a complete system for autonomous navigation of a mobile robot in urban environments. An urban
environment is defined as one having hard surfaces and comprising curbs, ramps, and obstacles. The robot is required to
traverse flat ground surfaces and ramps but avoid curbs and obstacles. The system employs a 2-D laser rangefinder for
terrain mapping. In order to filter erroneous sensor data (mixed pixels and noise), an Extended Kalman Filter (EKF) is
used to segment the laser range data into straight-line segments and isolated points. The isolated points are then
compared with the points on their neighboring straight-line segments to detect discontinuities in received energy (called
the reflectivity value in this work). Points exhibiting a reflectivity discontinuity are identified as erroneous data and
removed. A measure called the Polar Traversability Index (PTI) is proposed to evaluate terrain traversal properties. A PTI
indicates the level of difficulty for the robot to move along the corresponding direction. It enables the robot to traverse
wheelchair ramps and avoid curbs when negotiating sidewalks in urban environments. The advantage of using the PTI over
the conventional traversability index is that the robot's yaw angle is taken into account when computing the terrain
traversal property along the corresponding direction. This allows the robot to snake up a steep ramp that would be too steep
to climb if the conventional traversability index were used. The efficacy of the PTI and of the entire system
has been verified by simulation and by experiments with a real robot.
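The PTI formula itself is not given in the abstract. The toy sketch below only illustrates the yaw-dependence described above: projecting the terrain slope onto each candidate heading makes an oblique path up a ramp look less steep than a straight ascent. All names and the roughness term are hypothetical.

```python
import numpy as np

def pti_profile(slope, aspect, roughness, headings):
    """Illustrative PTI-style profile over candidate headings (rad).
    slope/aspect describe the local ramp; driving at an angle to the
    aspect direction sees a smaller effective slope."""
    eff_slope = slope * np.abs(np.cos(headings - aspect))
    return eff_slope + roughness               # lower = easier to traverse
```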
In order to maneuver autonomously on rough terrain, a mobile robot must constantly decide whether to traverse or circumnavigate terrain features ahead. This ability is called Obstacle Negotiation (ON). A critical aspect of ON is the so-called traversability analysis, which evaluates the level of difficulty associated with traversing the terrain. This paper presents a new method for traversability analysis, called T-transformation. It is implemented in a local terrain map as follows: (1) for each cell in the local terrain map, a square terrain patch is defined that symmetrically overlays the cell; (2) a plane is fitted to the data points in the terrain patch using a least-squares approach, and the slope of the least-squares plane and the residual of the fit are computed and used to calculate the Traversability Index (TI) for that cell; (3) after each cell is assigned a TI value, the local terrain map is transformed into a traversability map. The traversability map is further transformed into a traversability field histogram in which each element represents the overall level of difficulty of moving along the corresponding direction. Based on the traversability field histogram, our reactive ON system then computes the steering and velocity commands to move the robot toward the intended goal while avoiding areas of poor traversability. The traversability analysis algorithm and the overall ON system were verified by extensive simulation. We verified our method partially through experiments on a Segway Robotics Mobility Platform (RMP), albeit only on flat terrain.
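Step (2) is concrete enough to sketch: a least-squares plane fit over the patch gives the slope and residual that enter the TI. How the two quantities are combined is not specified in the abstract, so the weighted sum below is an assumption.

```python
import numpy as np

def traversability_index(patch, w_slope=1.0, w_res=1.0):
    """Fit the plane z = a*x + b*y + c to the (N, 3) points of a square
    terrain patch by least squares, then combine the plane's inclination
    with the RMS fit residual into a TI value for the center cell."""
    A = np.column_stack([patch[:, 0], patch[:, 1], np.ones(len(patch))])
    z = patch[:, 2]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    slope = np.arctan(np.hypot(coef[0], coef[1]))     # inclination (rad)
    residual = np.sqrt(np.mean((A @ coef - z) ** 2))  # RMS roughness
    return w_slope * slope + w_res * residual
```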
This paper introduces a new terrain mapping method for mobile robots equipped with a 2-D laser rangefinder. In the proposed method, an elevation map and a certainty map are built and used for filtering erroneous data. The filter, called the Certainty Assisted Spatial (CAS) filter, first employs physical constraints on motion continuity and spatial continuity to distinguish corrupted pixels (e.g., due to artifacts, random noise, or the "mixed pixels" effect) and missing data from uncorrupted pixels in the elevation map. It then removes the corrupted pixels and fills in missing data with a Weighted Median filter, while uncorrupted pixels are left intact so as to retain the edges of objects. Our extensive indoor and outdoor mapping experiments demonstrate that the CAS filter outperforms existing filters in erroneous data reduction and map detail preservation.
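The weighted median step used to fill missing cells can be sketched directly; using the certainty-map values of the neighboring cells as the weights is our assumption about how the two maps interact.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median of neighboring elevation values: sort the values
    and return the first one at which the cumulative weight reaches half
    of the total weight."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]
```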