ICRA 2011 Paper Abstract


Paper WeA105.5

Pandey, Gaurav (University of Michigan), McBride, James (Ford Motor Company), Savarese, Silvio (University of Michigan), Eustice, Ryan (University of Michigan)

Visually Bootstrapped Generalized ICP

Scheduled for presentation during the Regular Session "SLAM I" (WeA105), Wednesday, May 11, 2011, 09:20-09:35, Room 3G

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Computer Vision for Robotics and Automation, Sensor Fusion, SLAM

Abstract

This paper reports on a novel algorithm for bootstrapping the automatic registration of unstructured 3D point clouds collected by a 3D lidar with co-registered omnidirectional camera imagery. We exploit the co-registration of the 3D point cloud with the available camera imagery to associate high-dimensional feature descriptors, such as scale-invariant feature transform (SIFT) or speeded-up robust features (SURF) descriptors, with the 3D points. We first establish putative point correspondences in the high-dimensional feature space and then use these correspondences in a random sample consensus (RANSAC) framework to obtain an initial rigid-body transformation that aligns the two scans. This initial transformation is then refined in a generalized iterative closest point (ICP) framework. The proposed method is completely data-driven and does not require any initial guess of the transformation. We present results from a real-world dataset collected by a vehicle equipped with a 3D laser scanner and an omnidirectional camera.
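
The pipeline described in the abstract (descriptor matching in feature space, RANSAC estimation of an initial rigid-body transform, then generalized ICP refinement) can be sketched as follows. This is a minimal illustrative sketch in Python/NumPy, not the authors' implementation: the function names (match_descriptors, ransac_init, rigid_transform), the brute-force matcher, and the inlier threshold are assumptions made for the example, and the final generalized ICP refinement is only indicated by a comment.

import numpy as np

def rigid_transform(src, dst):
    # Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch/SVD).
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def match_descriptors(desc_a, desc_b):
    # Putative correspondences: nearest neighbour in descriptor space (brute force).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.stack([np.arange(len(desc_a)), d.argmin(axis=1)], axis=1)

def ransac_init(pts_a, desc_a, pts_b, desc_b, iters=1000, thresh=0.5, seed=0):
    # Initial alignment from putative matches via RANSAC over minimal 3-point samples.
    # pts_*: Nx3 lidar points; desc_*: NxD image descriptors (e.g. SIFT/SURF)
    # associated to the points through the co-registered camera imagery.
    # thresh is an assumed inlier distance in metres.
    rng = np.random.default_rng(seed)
    matches = match_descriptors(desc_a, desc_b)
    best, best_inliers = (np.eye(3), np.zeros(3)), 0
    for _ in range(iters):
        sample = matches[rng.choice(len(matches), size=3, replace=False)]
        R, t = rigid_transform(pts_a[sample[:, 0]], pts_b[sample[:, 1]])
        err = np.linalg.norm(pts_a[matches[:, 0]] @ R.T + t
                             - pts_b[matches[:, 1]], axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    # The returned (R, t) would then seed the generalized ICP refinement step.
    return best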
