ICRA 2011 Paper Abstract


Paper TuA114.2

Carlevaris-Bianco, Nicholas (University of Michigan), Eustice, Ryan (University of Michigan)

Multi-View Registration for Feature-Poor Underwater Imagery

Scheduled for presentation during the Regular Sessions "Visual Navigation I" (TuA114), Tuesday, May 10, 2011, 08:35–08:50, Room 5J

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Visual Navigation, SLAM, Field Robots


This paper reports an algorithm for the registration of images with low overlap and low visual feature density, a typical characteristic of down-looking underwater imagery. Our algorithm exploits locally accurate temporal motion priors and pairwise image correspondences to aggregate semi-rigid sets of sequential images. These sets are then used to search for visual correspondences across sets rather than between individual pairs of images. By simultaneously searching over multiple views, we increase the physical area seen by more than one image, effectively widening the “field of view” of the image correspondence search. This increases the probability that the area viewed by both sets will contain enough visual features to register them. Our algorithm systematically reduces the uncertainty in the motion prior between the two sets, yielding a refined motion prior that is used to geometrically constrain the correspondence search between sets. This geometric constraint allows us to confidently identify local correspondences that could not be established globally, further increasing our ability to register images in feature-poor environments. We present results using a real-world ship hull inspection data set collected by an autonomous underwater vehicle.
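The core idea of using a refined motion prior to geometrically gate the correspondence search can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names, the reduction of the motion prior to a simple 2D offset, and the fixed gate radius are all simplifying assumptions for exposition.

```python
# Hypothetical sketch (NOT the paper's code): geometrically constrained
# feature matching. A motion prior predicts where a feature from one image
# set should appear in the other; candidate matches are gated to a search
# radius derived from the prior's uncertainty, so descriptors that would be
# ambiguous in a global search can still be matched confidently.
import numpy as np

def gated_matches(kps_a, kps_b, desc_a, desc_b, prior_offset, gate_radius):
    """Return (i, j) index pairs matching kps_a[i] to kps_b[j].

    kps_*        : (N, 2) keypoint pixel coordinates.
    desc_*       : (N, D) feature descriptors.
    prior_offset : (2,) predicted pixel translation from view A to view B
                   (a stand-in for the refined inter-set motion prior).
    gate_radius  : search radius in pixels, derived from prior uncertainty.
    """
    matches = []
    predicted = kps_a + prior_offset  # where A's features should land in B
    for i, p in enumerate(predicted):
        dists = np.linalg.norm(kps_b - p, axis=1)
        candidates = np.flatnonzero(dists < gate_radius)  # geometric gate
        if candidates.size == 0:
            continue
        # Among geometrically plausible candidates, pick the best
        # descriptor match; ambiguity is limited to the gated region.
        d = np.linalg.norm(desc_b[candidates] - desc_a[i], axis=1)
        matches.append((i, int(candidates[np.argmin(d)])))
    return matches
```

A full treatment would replace the scalar gate with a Mahalanobis-distance test against the prior's covariance, shrinking the search region as the prior is refined.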



Technical Content © IEEE Robotics & Automation Society
