ICRA 2011 Paper Abstract


Paper WeP112.4

Begum, Momotaz (University of Waterloo), Karray, Fakhri (University of Waterloo)

Integrating Visual Exploration and Visual Search in Robotic Visual Attention: The Role of Human-Robot Interaction

Scheduled for presentation during the Regular Sessions "Learning and Adaptive Systems I" (WeP112), Wednesday, May 11, 2011, 14:25−14:40, Room 5H

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Cognitive Human-Robot Interaction, Biologically-Inspired Robots, Domestic Robots

Abstract

A common characteristic of computational models of visual attention is that they execute the two modes of visual attention (visual exploration and visual search) separately, which makes such models unsuitable for real-world robotic applications. This paper focuses on integrating visual exploration and visual search in a common framework of visual attention and on the challenges resulting from such integration. It proposes a visual attention-oriented, speech-based human-robot interaction framework that helps a robot switch back and forth between the two modes of visual attention. A set of experiments is presented to demonstrate the performance of the proposed framework.
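
The abstract does not describe the framework's internals, but the core idea, switching between bottom-up exploration and top-down search in response to verbal commands, can be illustrated with a minimal sketch. Everything below (class names, the toy command grammar, the placeholder mode behaviors) is an assumption made for illustration, not the authors' implementation.

# Hypothetical sketch: speech-driven switching between the two attention
# modes named in the abstract. All names and the command grammar are
# illustrative assumptions.

from enum import Enum, auto

class AttentionMode(Enum):
    EXPLORATION = auto()  # bottom-up: attend to the most salient region
    SEARCH = auto()       # top-down: attend to regions matching a named target

class AttentionController:
    def __init__(self):
        self.mode = AttentionMode.EXPLORATION  # assumed default: free exploration
        self.target = None

    def on_speech(self, utterance: str) -> None:
        """Switch modes from a recognized utterance (toy command grammar)."""
        words = utterance.lower().split()
        if words and words[0] == "find" and len(words) > 1:
            self.mode = AttentionMode.SEARCH
            self.target = " ".join(words[1:])  # e.g. "find red cup" -> "red cup"
        elif utterance.lower() in ("explore", "look around"):
            self.mode = AttentionMode.EXPLORATION
            self.target = None

    def step(self) -> str:
        """One attention step; returns a description of the current behavior."""
        if self.mode is AttentionMode.SEARCH:
            return f"searching scene for '{self.target}'"
        return "exploring most salient region"

if __name__ == "__main__":
    ctrl = AttentionController()
    print(ctrl.step())               # exploring most salient region
    ctrl.on_speech("find red cup")
    print(ctrl.step())               # searching scene for 'red cup'
    ctrl.on_speech("look around")
    print(ctrl.step())               # exploring most salient region

The sketch reduces the interaction framework to a two-state machine driven by recognized speech; the paper's actual framework presumably handles richer dialogue and perception, which this abstract does not specify.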
