ICRA 2011 Paper Abstract

Paper WeA210.5

Case, Carl (Stanford University), Suresh, Bipin (Stanford University), Coates, Adam (Stanford University), Ng, Andrew (Stanford University)

Autonomous Sign Reading for Semantic Mapping

Scheduled for presentation during the Regular Sessions "Mapping and Navigation II" (WeA210), Wednesday, May 11, 2011, 11:05–11:20, Room 5E

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Mapping, Autonomous Navigation, Personal Robots

Abstract

We consider the problem of automatically collecting semantic labels during robotic mapping by extending the mapping system to include text detection and recognition modules. In particular, we describe a system by which a SLAM-generated map of an office environment can be annotated with text labels such as room numbers and the names of office occupants. These labels are acquired automatically from signs posted on walls throughout a building. Deploying such a system with current text recognition technology, however, is challenging, since even state-of-the-art systems have difficulty reading text from non-document images. Despite these difficulties, we present a series of additions to the typical mapping pipeline that allow us to create highly usable results. In fact, we show how our text detection and recognition system, combined with several other ingredients, allows us to generate an annotated map that enables our robot to recognize named locations specified by a user in 84% of cases.
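
The sketch below is a rough illustration of the idea described in the abstract: recognized sign text is projected into the map frame and stored as a label, and a user's named-location query is later resolved against those stored labels. This is not the authors' implementation; the class and function names (AnnotatedMap, MapLabel, add_sign, lookup) and the simple bearing-and-range projection are assumptions made for illustration. A real system would use the robot's SLAM poses and a trained text detector and recognizer in place of the hand-supplied values here.

```python
# Hedged sketch, not the paper's implementation: attach recognized sign text to
# 2-D map coordinates and resolve a user's named-location query against it.
import math
from dataclasses import dataclass, field


@dataclass
class MapLabel:
    """A piece of recognized sign text anchored at a 2-D map position."""
    text: str   # recognized string, e.g. "Room 120" or an occupant's name
    x: float    # map frame, meters
    y: float


@dataclass
class AnnotatedMap:
    labels: list = field(default_factory=list)

    def add_sign(self, text: str, robot_pose, bearing: float, range_m: float):
        """Project a detected sign into the map frame and store it.

        robot_pose: (x, y, theta) of the robot in the map when the sign was seen.
        bearing:    angle of the sign relative to the robot heading (radians).
        range_m:    estimated distance to the sign (e.g. from a range sensor).
        """
        rx, ry, rtheta = robot_pose
        lx = rx + range_m * math.cos(rtheta + bearing)
        ly = ry + range_m * math.sin(rtheta + bearing)
        self.labels.append(MapLabel(text=text, x=lx, y=ly))

    def lookup(self, query: str):
        """Return the map position whose label matches a user query, if any."""
        query = query.lower()
        matches = [l for l in self.labels if query in l.text.lower()]
        return (matches[0].x, matches[0].y) if matches else None


if __name__ == "__main__":
    amap = AnnotatedMap()
    # Suppose the recognizer read "Room 120" on a sign 2 m ahead, slightly left.
    amap.add_sign("Room 120", robot_pose=(5.0, 3.0, 0.0), bearing=0.2, range_m=2.0)
    print(amap.lookup("room 120"))  # approximate (x, y) goal for navigation
```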
