ICRA 2011 Paper Abstract


Paper WeAV215.8

Li, Congcong (Cornell University), Wong, TP (Cornell University), Xu, Norris (Cornell University), Saxena, Ashutosh (Cornell University)

FeCCM for Scene Understanding: Helping the Robot to Learn Multiple Tasks

Scheduled for presentation during the Video Sessions "Video Session II: Humanoid and Service Robotics" (WeAV215), Wednesday, May 11, 2011, 11:01–11:09, Room 3A

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Visual Learning, Recognition, Personal Robots

Abstract

Helping a robot to understand a scene involves many sub-tasks, such as scene categorization, object detection, and geometric labeling. Each sub-task is notoriously hard, and state-of-the-art classifiers exist for many of them. These sub-tasks are closely related, so it is desirable to have an algorithm that can capture their correlations without requiring any changes to the inner workings of the individual classifiers, thereby improving the robot's perception. We have recently proposed a generic model, the Feedback Enabled Cascaded Classification Model (FeCCM), that lets us take state-of-the-art classifiers as black boxes and improve overall performance.
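To make the cascading idea concrete, the following is a minimal sketch in Python with scikit-learn, not the authors' implementation: two hypothetical sub-task classifiers are trained as independent black boxes, and a second stage re-classifies each sub-task using the other sub-task's first-stage outputs as extra features, so correlations between sub-tasks can be exploited without modifying the first-stage classifiers. The feedback step of FeCCM is omitted, and all data, features, and classifier choices here are placeholders.

```python
# Sketch of cascaded classification over black-box sub-task classifiers.
# All names and data below are hypothetical placeholders, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder image features and labels for two sub-tasks.
X = np.random.rand(200, 20)              # image features
y_scene = np.random.randint(0, 3, 200)   # scene-category labels
y_object = np.random.randint(0, 2, 200)  # object-presence labels

# Stage 1: independent, off-the-shelf classifiers, treated as black boxes.
scene_clf1 = LogisticRegression(max_iter=1000).fit(X, y_scene)
object_clf1 = LogisticRegression(max_iter=1000).fit(X, y_object)

# Stage 2: each sub-task re-classifies using its original features plus the
# other sub-task's stage-1 probabilistic outputs, capturing correlations
# between sub-tasks without changing the stage-1 classifiers themselves.
X_scene2 = np.hstack([X, object_clf1.predict_proba(X)])
X_object2 = np.hstack([X, scene_clf1.predict_proba(X)])

scene_clf2 = LogisticRegression(max_iter=1000).fit(X_scene2, y_scene)
object_clf2 = LogisticRegression(max_iter=1000).fit(X_object2, y_object)
```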

In this video, we show that FeCCM can quickly combine existing classifiers for various sub-tasks, allowing us to build a shoe-finding robot in a day. The video shows our robot using FeCCM to find a shoe on request.

 

 
