
ICRA'10 Paper Abstract

Paper TuD4.6

Klingbeil, Ellen (Stanford University), Carpenter, Blake (Stanford University), Russakovsky, Olga (Stanford University), Ng, Andrew (Stanford University)

Autonomous Operation of Novel Elevators for Robot Navigation

Scheduled for presentation during the Regular Sessions "Personal and Service Robots" (TuD4), Tuesday, May 4, 2010, 15:35–15:50, Egan Center Lower Level Room 1

2010 IEEE International Conference on Robotics and Automation, May 3-8, 2010, Anchorage, Alaska, USA

Keywords: Mobile Manipulation, Computer Vision for Robotics and Automation, Autonomous Navigation

Abstract

Although robot navigation in indoor environments has achieved great success, robots cannot fully navigate these spaces without the ability to open doors and operate elevators, including those they have not seen before. In this paper, we focus on the key challenge of autonomously detecting and interacting with an unknown elevator button panel. Several factors make this task difficult: the lack of useful 3D features, the wide variety of elevator panel designs, variation in lighting conditions, and the small size of elevator buttons.

To detect, localize, and label the buttons, we use state-of-the-art vision algorithms and machine learning techniques that take advantage of contextual features. To verify our approach, we train and test the vision-based algorithms on completely separate elevator panel datasets. Using a mobile robot platform, we then validate our algorithms in experiments in which, using only its on-board sensors, the robot autonomously interprets the panel and presses the appropriate button in elevators it has never seen before. In 14 trials performed on 3 different elevators, the robot localized the requested button in all 14 trials and pressed it correctly in 13 of the 14. On the more diverse offline test set, our vision algorithm correctly localized and labeled 80% of the buttons.
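
The abstract does not specify the detection pipeline in detail. The following is a minimal illustrative sketch, assuming a sliding-window classifier scanned over the panel image; the names `classifier` and `extract_features` are hypothetical placeholders, not the authors' implementation.

    import numpy as np

    def detect_buttons(image, classifier, extract_features,
                       window=24, stride=8, threshold=0.5):
        # Scan a grayscale panel image with a fixed-size window and keep
        # every location the classifier scores as a likely button.
        # NOTE: this is an illustrative sketch, not the paper's method.
        h, w = image.shape
        detections = []
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                patch = image[y:y + window, x:x + window]
                feats = extract_features(patch).reshape(1, -1)
                # Assumes a scikit-learn-style binary classifier, where
                # column 1 of predict_proba is P(button).
                score = classifier.predict_proba(feats)[0, 1]
                if score >= threshold:
                    detections.append((x, y, window, window, float(score)))
        # A full pipeline would follow this with non-maximum suppression
        # and a labeling stage; the contextual features mentioned in the
        # abstract could, for example, exploit the regular grid layout
        # typical of elevator button panels.
        return detections
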
