IROS 2015 Paper Abstract


Paper ThAP.41

Ireta Munoz, Fernando Israel (Universite de Nice Sophia Antipolis), Comport, Andrew Ian (CNRS-I3S/UNS)

Direct Matching for Improving Image-Based Registration

Scheduled for presentation during the Poster session "Late Breaking Posters" (ThAP), Thursday, October 1, 2015, 09:45−10:00, Saal G1

2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sept 28 - Oct 03, 2015, Congress Center Hamburg, Hamburg, Germany

This information is tentative and subject to change. Compiled on July 19, 2019

Keywords: Computer Vision, Visual Tracking, Sensor Fusion

Abstract

Nowadays, images and depth maps obtained by RGB-D sensors are useful for creating 3D models, computing visual odometry and performing autonomous navigation. One of the most fundamental problems is solving the registration that aligns two measurements acquired at different poses. Direct methods solve this problem by minimizing the photometric and geometric error directly in sensor space, as opposed to ICP or feature-based image approaches, which require feature extraction and matching.
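The photometric error minimized by direct methods can be illustrated with a minimal sketch (not the authors' code): given a reference image and the current image already warped into the reference frame, the cost is simply the sum of squared intensity differences. The function and variable names here are illustrative assumptions.

```python
import numpy as np

def photometric_error(I_ref, I_cur_warped):
    """Sum of squared intensity differences between a reference image
    and the current image warped into the reference frame."""
    # Cast to float first so unsigned-integer subtraction cannot wrap around.
    r = I_ref.astype(np.float64) - I_cur_warped.astype(np.float64)
    return float(np.sum(r * r))

# Toy usage: identical images give zero error.
I = np.array([[10, 20], [30, 40]], dtype=np.uint8)
print(photometric_error(I, I))  # 0.0
```

In a full direct-registration pipeline this scalar would be minimized over the relative pose parameterizing the warp (e.g. by Gauss-Newton), which the sketch omits.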

To the best of our knowledge, no image-based approach has yet applied similar techniques to “direct” image data; only feature-based approaches have performed matching, such as stereo matching, histogram matching and intensity matching, and most of these techniques rely on extracting salient and distinctive feature points that are detected manually or automatically.

This poster proposes a new method that introduces a matching step for direct approaches, which enlarges the convergence domain and speeds up alignment whilst maintaining the robustness and accuracy of direct approaches. The proposed method is inspired by closest-point matching in ICP, but instead of matching geometric points, the closest point is found in image space. A matching strategy based on kd-trees (k-dimensional trees) is proposed that uses only pixel intensities, without feature extraction, where the best match is decided by the closest point in intensity and image space.
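The kd-tree matching idea above can be sketched as follows, under assumptions of our own: each pixel is described by its image coordinates together with its (weighted) intensity, a kd-tree is built over the reference pixels, and each current pixel is matched to its nearest neighbor in that joint space. The weight `w` balancing position against intensity is a hypothetical parameter, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def direct_match(I_ref, I_cur, w=0.5):
    """Match each pixel of I_cur to the closest reference pixel in the
    joint (x, y, w * intensity) space, with no feature extraction."""
    h, w_img = I_ref.shape
    ys, xs = np.mgrid[0:h, 0:w_img]
    ref_pts = np.column_stack([xs.ravel(), ys.ravel(),
                               w * I_ref.ravel().astype(np.float64)])
    cur_pts = np.column_stack([xs.ravel(), ys.ravel(),
                               w * I_cur.ravel().astype(np.float64)])
    tree = cKDTree(ref_pts)            # build once over the reference image
    _, idx = tree.query(cur_pts)       # nearest reference pixel per query
    return idx                         # flat indices into the reference image

# Toy usage: with identical images, every pixel matches itself.
I = np.arange(16, dtype=np.float64).reshape(4, 4)
matches = direct_match(I, I)
```

The kd-tree makes each nearest-neighbor query logarithmic in the number of pixels, which is what allows the matching step to remain cheap compared with exhaustive search.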

It will also be shown how direct matching can be extended to simultaneously register both color and depth data, further improving results when RGB-D sensors are available. Experimental results are presented that verify the performance of the proposed method on both simulated and real data with ground truth, demonstrating that the proposed method improves hybrid (geometry + photometry) methods, accelerating convergence and reducing computational cost.


Technical Content © IEEE Robotics & Automation Society

