ICRA'09 Paper Abstract

Paper FrD11.2

Figueira, Dario (Instituto Superior Técnico), Lopes, Manuel (Instituto Superior Técnico), Ventura, Rodrigo (Instituto Superior Técnico), Ruesch, Jonas (IST)

From Pixels to Objects: Enabling a Spatial Model for Humanoid Social Robots

Scheduled for presentation during the Regular Sessions "Biologically-Inspired Robots - IV" (FrD11), Friday, May 15, 2009, 15:50−16:10, Room: 503

2009 IEEE International Conference on Robotics and Automation, May 12 - 17, 2009, Kobe, Japan

This information is tentative and subject to change. Compiled on January 21, 2022

Keywords Biologically-Inspired Robots, Humanoid Robots, Recognition

Abstract

This work adds the concept of an object to an existing low-level attention system of the humanoid robot iCub. Objects are defined as clusters of SIFT visual features. When the robot first encounters an unknown object within a certain (small) distance from its eyes, it uses depth perception to store a cluster of the features present within an interval about that distance. Whenever a previously stored object crosses the robot's field of view again, it is recognized, mapped into an egocentric frame of reference, and gazed at. This mapping is persistent, in the sense that the object's identity and position are retained even when it is not visible to the robot. Features are stored and recognized in a bottom-up way. Experimental results on the humanoid robot iCub validate this approach. This work lays the foundation for linking the bottom-up attention system with top-down, object-oriented information provided by humans.
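The core idea described in the abstract — storing an object as a cluster of feature descriptors together with an egocentric position, and later recognizing it by matching incoming descriptors against the stored clusters — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, the match threshold, and the nearest-neighbor scoring rule are assumptions, and in a real system the descriptors would come from a SIFT extractor rather than arrays passed in directly.

```python
import numpy as np

class ObjectMemory:
    """Illustrative sketch: objects stored as clusters of feature
    descriptors plus an egocentric position, recognized bottom-up by
    nearest-neighbor descriptor matching.  Not the authors' code."""

    def __init__(self, match_threshold=0.5):
        # name -> (descriptor cluster, egocentric position); both the
        # threshold and the 50% acceptance rule below are assumptions.
        self.objects = {}
        self.match_threshold = match_threshold

    def store(self, name, descriptors, position):
        # Store the cluster of features found within the depth interval,
        # together with the object's position in the egocentric frame.
        self.objects[name] = (np.asarray(descriptors, dtype=float),
                              np.asarray(position, dtype=float))

    def recognize(self, descriptors):
        # Return the stored object whose cluster best matches the query
        # descriptors, or None if no stored object matches well enough.
        query = np.asarray(descriptors, dtype=float)
        best_name, best_score = None, 0.0
        for name, (stored, _pos) in self.objects.items():
            # Euclidean distance from every query descriptor to every
            # stored descriptor; score = fraction of query descriptors
            # whose nearest stored descriptor lies under the threshold.
            d = np.linalg.norm(query[:, None, :] - stored[None, :, :], axis=2)
            score = float(np.mean(d.min(axis=1) < self.match_threshold))
            if score > best_score:
                best_name, best_score = name, score
        # Accept only if at least half the query descriptors matched.
        return best_name if best_score >= 0.5 else None
```

Once an object is recognized, its stored egocentric position can drive the gaze controller, and that entry persists even while the object is out of view — which is the persistence property the abstract highlights.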

 

 

Technical Content © IEEE Robotics & Automation Society
