ICRA 2011 Paper Abstract

Paper TuP1-InteracInterac.41

Borji, Ali (University of Southern California (USC)), Itti, Laurent (University of Southern California)

Scene Classification with a Sparse Set of Salient Regions

Scheduled for presentation during the Poster Sessions "Interactive Session II: Systems, Control and Automation" (TuP1-InteracInterac), Tuesday, May 10, 2011, 13:40−14:55, Hall

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China

Keywords: Recognition, Visual Learning, Computer Vision for Robotics and Automation

Abstract

This work proposes an approach to scene classification that extracts and matches visual features only at the foci of visual attention rather than over the entire scene. Analysis over a database of natural scenes demonstrates that the regions proposed by the saliency-based model of visual attention are robust to image transformations. Using a nearest neighbor classifier and a distance measure defined over the salient regions, we obtained classification rates of 97.35% and 78.28% with SIFT features and C2 features from the HMAX model, respectively, at 5 salient regions covering at most 31% of the image. Classification with features extracted from the entire image yields 99.3% and 82.32% with SIFT and C2 features, respectively. Comparing the attentional and ad hoc (whole-image) approaches shows that the classification rate of the former is about 0.95 of the latter. Overall, our results show that efficient scene classification, in terms of reduced feature-extraction complexity, is possible without a significant drop in performance.
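To make the pipeline described above concrete, the Python sketch below illustrates the general idea: pick the top-k locations of a saliency map, extract a descriptor around each, and classify a scene with a 1-nearest-neighbor rule over a set-to-set distance. The saliency model, the patch-based descriptor (a stand-in for SIFT or HMAX C2 features), and the particular distance measure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def top_salient_locations(saliency_map, k=5):
    # Placeholder for the saliency-based attention model:
    # return (row, col) coordinates of the k most salient pixels.
    flat = np.argsort(saliency_map, axis=None)[::-1][:k]
    return np.column_stack(np.unravel_index(flat, saliency_map.shape))

def extract_descriptors(image, locations, patch=16):
    # Hypothetical descriptor: L2-normalized raw patch around each salient
    # location (stand-in for SIFT / C2 features); image is 2-D grayscale.
    h, w = image.shape
    descs = []
    for r, c in locations:
        r0 = int(np.clip(r - patch // 2, 0, h - patch))
        c0 = int(np.clip(c - patch // 2, 0, w - patch))
        p = image[r0:r0 + patch, c0:c0 + patch].astype(np.float32).ravel()
        n = np.linalg.norm(p)
        descs.append(p / n if n > 0 else p)
    return np.stack(descs)                      # shape (k, d)

def set_distance(a, b):
    # Assumed set-to-set distance: for each descriptor in `a`, distance to
    # its closest descriptor in `b`, averaged (not necessarily the paper's measure).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def classify_nn(query_descs, train_descs, train_labels):
    # 1-nearest-neighbor scene classification over salient-region descriptors.
    dists = [set_distance(query_descs, t) for t in train_descs]
    return train_labels[int(np.argmin(dists))]
```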

 

 
