ICRA 2011 Paper Abstract


Paper TuP103.2

Maier, Daniel (University of Freiburg), Bennewitz, Maren (University of Freiburg), Stachniss, Cyrill (University of Freiburg)

Self-Supervised Obstacle Detection for Humanoid Navigation Using Monocular Vision and Sparse Laser Data

Scheduled for presentation during the Regular Sessions "Humanoid Robots I" (TuP103), Tuesday, May 10, 2011, 13:55−14:10, Room 3D

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Humanoid Robots, Visual Navigation

Abstract

In this paper, we present an approach to obstacle detection for collision-free, efficient humanoid robot navigation based on monocular images and sparse laser range data. To detect arbitrary obstacles in the surroundings of the robot, we analyze 3D data points obtained from a 2D laser range finder installed in the robot's head. Relying only on this laser data, however, can be problematic. While walking, the floor close to the robot's feet is not observable by the laser sensor, which inherently increases the risk of collisions, especially in non-static scenes. Furthermore, it is time-consuming to frequently stop walking and tilt the head to obtain reliable information about close obstacles. We therefore present a technique to train obstacle detectors for images obtained from a monocular camera also located in the robot's head. The training is done online based on sparse laser data in a self-supervised fashion. Our approach projects the obstacles identified from the laser data into the camera image and learns a classifier that considers color and texture information. While the robot is walking, it then applies the learned classifier to the images to decide which areas are traversable. As we illustrate in experiments with a real humanoid, our approach enables the robot to reliably avoid obstacles during navigation. Furthermore, the results show that our technique leads to significantly more efficient navigation compared to extracting obstacles solely based on 3D laser data.
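To make the self-supervised training step concrete, the following is a minimal Python sketch of the idea described in the abstract: laser points labeled as obstacle or floor are projected into the camera image through an assumed pinhole model, the pixels they hit provide training labels, and a simple classifier is then applied to whole images to mark traversable regions. Everything here is an illustrative assumption rather than the authors' implementation: the intrinsics matrix, the synthetic data, the toy distance-based obstacle labels, and the Gaussian naive-Bayes classifier over raw RGB values, which merely stands in for the paper's color and texture features.

```python
# Sketch of self-supervised labeling from sparse laser data and per-pixel
# classification. Assumes laser points are already expressed in the camera
# frame and a pinhole camera with intrinsics K (both hypothetical values).
import numpy as np
from sklearn.naive_bayes import GaussianNB

def project_points(points_cam, K):
    """Project Nx3 points (camera frame) to integer pixel coordinates."""
    in_front = points_cam[:, 2] > 0.1            # keep points in front of the camera
    pts = points_cam[in_front]
    uv = (K @ pts.T).T                           # homogeneous image coordinates
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    return uv.astype(int), in_front

def build_training_set(image, points_cam, is_obstacle, K):
    """Label the pixels hit by projected laser points: obstacle vs. floor."""
    uv, mask = project_points(points_cam, K)
    labels = is_obstacle[mask]
    h, w, _ = image.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, labels = uv[valid], labels[valid]
    features = image[uv[:, 1], uv[:, 0]].astype(float)   # per-pixel RGB features
    return features, labels

def classify_image(clf, image):
    """Apply the learned classifier to every pixel of a new camera image."""
    h, w, _ = image.shape
    pred = clf.predict(image.reshape(-1, 3).astype(float))
    return pred.reshape(h, w)                    # True = obstacle, False = traversable

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = np.array([[500.0, 0.0, 320.0],           # hypothetical pinhole intrinsics
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    image = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
    points_cam = rng.uniform([-1, -1, 0.5], [1, 1, 4.0], size=(500, 3))
    is_obstacle = points_cam[:, 2] < 2.0         # toy labels: near points are obstacles
    X, y = build_training_set(image, points_cam, is_obstacle, K)
    clf = GaussianNB().fit(X, y)                 # stand-in for the color/texture classifier
    obstacle_mask = classify_image(clf, image)
    print("obstacle pixels:", int(obstacle_mask.sum()))
```

In this sketch the classifier is retrained whenever new labeled laser points arrive, mirroring the online, self-supervised character of the approach; the choice of features and classifier is only a placeholder for the color and texture model used in the paper.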

