ICRA 2012 Paper Abstract

Paper TuD09.2

Ude, Ales (Jozef Stefan Institute), Schiebener, David (University of Karlsruhe), Sugimoto, Norikazu (ATR Computational Neuroscience Laboratories), Morimoto, Jun (ATR Computational Neuroscience Laboratories)

Integrating surface-based hypotheses and manipulation for autonomous segmentation and learning of object representations

Scheduled for presentation during the Regular Session "Sensing for manipulation" (TuD09), Tuesday, May 15, 2012, 16:45–17:00, Meeting Room 9 (Sa)

2012 IEEE International Conference on Robotics and Automation, May 14-18, 2012, RiverCentre, Saint Paul, Minnesota, USA

Keywords Visual Learning, Recognition

Abstract

Learning about new objects that a robot sees for the first time is a difficult problem because it is not clear how to define the concept of an object in general terms. In this paper we consider as objects those physical entities that are composed of features which move consistently when the robot acts upon them. Among the possible actions that a robot could apply to a hypothetical object, pushing seems to be the most suitable one due to its relative simplicity and general applicability. We propose a methodology for generating and applying pushing actions to hypothetical objects. A probing push causes visual features to move, which enables the robot to either confirm or reject the initial hypothesis about the existence of the object. Furthermore, the robot can discriminate the object from the background and accumulate visual features that are useful for training state-of-the-art statistical classifiers such as bag-of-features.
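The verification step described above can be sketched as follows. This is a minimal, illustrative interpretation, not the authors' actual formulation: after a probing push, tracked features whose displacements agree with a common motion (here, a crude rigid-translation model via the median displacement) are taken to belong to the object, while static or inconsistently moving features are treated as background. The function name and all thresholds are assumptions for illustration.

```python
import numpy as np

def confirm_object_hypothesis(before, after, motion_thresh=2.0,
                              agree_thresh=3.0, min_inlier_ratio=0.5):
    """Classify tracked features as object vs. background after a push.

    before, after : (N, 2) arrays of feature positions in pixels,
                    observed before and after the probing push.
    Returns (hypothesis_confirmed, object_mask).
    """
    disp = after - before                                 # per-feature displacement
    moved = np.linalg.norm(disp, axis=1) > motion_thresh  # features that moved at all
    if not moved.any():
        # Nothing moved under the push: reject the object hypothesis.
        return False, np.zeros(len(before), dtype=bool)

    # Consistent motion: displacements close to the median displacement
    # of the moved features (a simple translation-only consistency check;
    # the threshold values are illustrative assumptions).
    median_disp = np.median(disp[moved], axis=0)
    consistent = np.linalg.norm(disp - median_disp, axis=1) < agree_thresh

    object_mask = moved & consistent
    confirmed = object_mask.sum() >= min_inlier_ratio * moved.sum()
    return confirmed, object_mask
```

The features selected by `object_mask` are the ones that could then be accumulated across pushes as training data for a classifier, while the rest are discarded as background.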


Technical Content © IEEE Robotics & Automation Society
