ICRA 2011 Paper Abstract


Paper WeA110.1

Luo, Ronghua (South China University of Technology), Piao, Songhao (Harbin Institute of Technology), Min, Huaqing (South China University of Technology)

Simultaneous Place and Object Recognition with Mobile Robot Using Pose Encoded Contextual Information

Scheduled for presentation during the Regular Sessions "Mapping and Navigation I" (WeA110), Wednesday, May 11, 2011, 08:20−08:35, Room 5E

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Computer Vision for Robotics and Automation, Mapping

Abstract

Place recognition and object recognition are two fundamental problems for a mobile robot trying to understand its surroundings. In computer vision it is well acknowledged that context plays an important role in image parsing, yet most prior work uses contextual information in only one direction, and little attention has been paid to the relative pose context between objects and local features. We observe, however, that place and object can serve as context for each other: recognizing one facilitates recognizing the other. In this paper, a new hierarchical random field that can encode multiple kinds of context, including co-occurrence context, temporal context, and relative pose context, is proposed for simultaneous place and object recognition on a mobile platform. In addition, a new kind of relative pose context, which is scale and rotation invariant, is defined to improve the stability of pose-encoded context. Experiments with a mobile robot show that the proposed method significantly improves the precision of place and object recognition in both familiar and unfamiliar environments.
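The abstract does not give the paper's actual formulation, but the idea of a scale- and rotation-invariant relative pose context between two local features can be illustrated with a minimal sketch. Here each feature is assumed to carry a position, scale, and orientation (as SIFT-like keypoints do); all function and variable names are hypothetical, not taken from the paper:

```python
import math

def relative_pose_context(f1, f2):
    """Hypothetical sketch of a scale- and rotation-invariant
    relative pose descriptor between two local features.
    Each feature is a tuple (x, y, scale, orientation_radians).
    """
    x1, y1, s1, t1 = f1
    x2, y2, s2, t2 = f2
    dx, dy = x2 - x1, y2 - y1
    # Distance normalized by the reference feature's scale -> scale invariant
    dist = math.hypot(dx, dy) / s1
    # Bearing measured relative to the reference orientation -> rotation invariant
    bearing = math.atan2(dy, dx) - t1
    # Log scale ratio and relative orientation between the two features
    log_scale = math.log(s2 / s1)
    rel_orient = t2 - t1
    return (dist, bearing, log_scale, rel_orient)
```

Because every component is measured relative to the reference feature's own scale and orientation, applying a global similarity transform (rotation plus uniform scaling) to both features leaves the descriptor unchanged, which is the stability property the abstract attributes to pose-encoded context.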


Technical Content © IEEE Robotics & Automation Society
