ICRA 2011 Paper Abstract


Paper WeP112.4

Begum, Momotaz (University of Waterloo), Karray, Fakhri (University of Waterloo)

Integrating Visual Exploration and Visual Search in Robotic Visual Attention: The Role of Human-Robot Interaction

Scheduled for presentation during the Regular Sessions "Learning and Adaptive Systems I" (WeP112), Wednesday, May 11, 2011, 14:25−14:40, Room 5H

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords Cognitive Human-Robot Interaction, Biologically-Inspired Robots, Domestic Robots


A common characteristic of computational models of visual attention is that they execute the two modes of visual attention (visual exploration and visual search) separately. This makes such models unsuitable for real-world robotic applications. This paper focuses on integrating visual exploration and visual search in a common framework of visual attention and on the challenges resulting from such integration. It proposes a visual attention-oriented, speech-based human-robot interaction framework that helps a robot switch back and forth between the two modes of visual attention. A set of experiments is presented to demonstrate the performance of the proposed framework.
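The switching behavior the abstract describes can be pictured as a small mode controller driven by speech commands. The sketch below is purely illustrative and is not the authors' implementation; the class, method, and command names are assumptions, and a real system would sit behind a speech recognizer and a full attention model.

```python
# Hypothetical sketch of speech-driven switching between the two
# attention modes described in the abstract. All names are
# illustrative assumptions, not the paper's actual API.

class AttentionController:
    """Holds the current visual attention mode: 'explore' or 'search'."""

    def __init__(self):
        self.mode = "explore"   # default mode: free visual exploration
        self.target = None      # object sought while in search mode

    def on_speech(self, utterance):
        """Naive command parsing: 'find <object>' enters search mode;
        'stop' or 'explore' returns to exploration. Returns the new mode."""
        words = utterance.lower().split()
        if words and words[0] == "find":
            self.mode = "search"
            self.target = " ".join(words[1:]) or None
        elif utterance.lower() in ("stop", "explore"):
            self.mode = "explore"
            self.target = None
        return self.mode
```

For example, `on_speech("find red cup")` would move the controller into search mode with `target == "red cup"`, and `on_speech("stop")` would return it to exploration.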



Technical Content © IEEE Robotics & Automation Society
