ICRA 2012 Paper Abstract


Paper TuA110.2

Teo, Ching Lik (University of Maryland), Yang, Yezhou (University of Maryland), Daumé III, Hal (University of Maryland, College Park), Fermüller, Cornelia (University of Maryland), Aloimonos, Yiannis (University of Maryland)

Towards a Watson That Sees: Language-Guided Action Recognition for Robots

Scheduled for presentation during the Interactive Session "Interactive Session TuA-1" (TuA110), Tuesday, May 15, 2012, 08:30−09:00, Ballroom D

2012 IEEE International Conference on Robotics and Automation, May 14-18, 2012, RiverCentre, Saint Paul, Minnesota, USA


Keywords: Computer Vision for Robotics and Automation, Visual Learning, Recognition

Abstract

For robots of the future to interact seamlessly with humans, they must be able to reason about their surroundings and take actions that are appropriate to the situation. Such reasoning is possible only when the robot has knowledge of how the world functions, which must either be learned or hard-coded. In this paper, we propose an approach that exploits language as an important source of high-level knowledge for a robot, akin to IBM's Watson in Jeopardy!. In particular, we show how language can be leveraged to reduce the ambiguity that arises when recognizing actions involving hand tools from video data. Starting from the premise that tools and actions are intrinsically linked, with one explaining the existence of the other, we train a language model over a large corpus of English newswire text so that this relationship can be extracted directly. The model is then used as a prior to select the tool and action that best explain the video. We formalize the approach for 1) an unsupervised recognition scenario and 2) a supervised classification scenario, using an EM formulation for the former and integrating language features for the latter. Results are validated on a new hand-tool action dataset, and comparisons with state-of-the-art STIP features show significantly improved performance when language is used. In addition, we discuss the implications of these results and how they provide a framework for integrating language into vision for other robotic applications.
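
The core idea of combining a corpus-derived (tool, action) prior with visual evidence can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the tool and action vocabularies, co-occurrence counts, and visual log-likelihoods below are invented placeholders, and a simple smoothed MAP selection stands in for the paper's full EM and language-feature machinery.

    # Minimal sketch (illustrative only): rescoring (tool, action) hypotheses
    # from a video clip with a language prior mined from text co-occurrences.
    import math
    from collections import Counter
    from itertools import product

    TOOLS = ["hammer", "saw", "screwdriver"]       # hypothetical vocabulary
    ACTIONS = ["pound", "cut", "turn"]

    # Hypothetical tool-action co-occurrence counts, standing in for the
    # statistics a language model would extract from newswire text.
    cooccurrence = Counter({
        ("hammer", "pound"): 120, ("hammer", "cut"): 3,  ("hammer", "turn"): 2,
        ("saw", "pound"): 4,      ("saw", "cut"): 150,   ("saw", "turn"): 1,
        ("screwdriver", "pound"): 2, ("screwdriver", "cut"): 2, ("screwdriver", "turn"): 90,
    })

    def language_prior(tool, action, alpha=1.0):
        """P(tool, action) with add-alpha smoothing over the corpus counts."""
        total = sum(cooccurrence.values()) + alpha * len(TOOLS) * len(ACTIONS)
        return (cooccurrence[(tool, action)] + alpha) / total

    def best_explanation(visual_loglik):
        """Argmax over (tool, action) of visual log-likelihood + log language prior."""
        scored = {
            (t, a): visual_loglik[(t, a)] + math.log(language_prior(t, a))
            for t, a in product(TOOLS, ACTIONS)
        }
        return max(scored, key=scored.get), scored

    if __name__ == "__main__":
        # Made-up visual scores for an ambiguous clip: the detector is torn
        # between "saw/pound" and "saw/cut"; the language prior resolves the
        # tie toward the more plausible pairing.
        visual = {pair: -10.0 for pair in product(TOOLS, ACTIONS)}
        visual[("saw", "cut")] = -2.1
        visual[("saw", "pound")] = -2.0
        best, _ = best_explanation(visual)
        print("best (tool, action):", best)

In the unsupervised setting described in the abstract, such a prior would instead enter an EM loop that alternates between estimating the (tool, action) assignment and refining the visual model; the sketch above only shows the rescoring step.
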
Technical Content © IEEE Robotics & Automation Society
