ICRA 2012 Paper Abstract


Paper TuB110.2

Sung, Jaeyong (Cornell University), Ponce, Colin (Cornell University), Selman, Bart (Cornell University), Saxena, Ashutosh (Cornell University)

Unstructured Human Activity Detection from RGBD Images

Scheduled for presentation during the Interactive Session "Interactive Session TuB-1" (TuB110), Tuesday, May 15, 2012, 10:30−11:00, Ballroom D

2012 IEEE International Conference on Robotics and Automation, May 14-18, 2012, RiverCentre, Saint Paul, Minnesota, USA


Keywords: Human Detection & Tracking, Learning and Adaptive Systems, Visual Learning


Being able to detect and recognize human activities is essential for several applications, including personal assistive robotics. In this paper, we perform detection and recognition of unstructured human activity in unstructured environments. We use an RGBD sensor (Microsoft Kinect) as the input sensor and compute a set of features based on human pose and motion, as well as on image and point-cloud information. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM), which considers a person's activity as composed of a set of sub-activities. We infer the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, and an office, and achieve good performance even when the test person does not appear in the training set.
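The dynamic programming inference mentioned in the abstract can be illustrated with a Viterbi-style pass over an MEMM's per-frame conditional probabilities. This is only a minimal sketch under stated assumptions: the paper's actual hierarchical, two-layered model is richer, and the `local_logp` tables here are a hypothetical stand-in for learned maximum-entropy conditionals P(y_t | y_{t-1}, features_t); none of the names below come from the authors' code.

```python
import math

def memm_viterbi(local_logp, n_states):
    """Find the highest-scoring sub-activity sequence for one MEMM layer.

    local_logp: list over frames t of [prev][cur] log-probability tables,
    i.e. local_logp[t][p][s] = log P(state s at frame t | state p at t-1).
    For frame 0, row 0 is assumed to hold the initial-state scores.
    """
    T = len(local_logp)
    # best[t][s] = best log-probability of any path ending in state s at frame t
    best = [[-math.inf] * n_states for _ in range(T)]
    back = [[0] * n_states for _ in range(T)]
    for s in range(n_states):
        best[0][s] = local_logp[0][0][s]
    for t in range(1, T):
        for s in range(n_states):
            for p in range(n_states):
                score = best[t - 1][p] + local_logp[t][p][s]
                if score > best[t][s]:
                    best[t][s] = score
                    back[t][s] = p
    # Backtrack from the best final state to recover the sequence.
    s = max(range(n_states), key=lambda x: best[T - 1][x])
    path = [s]
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    path.reverse()
    return path
```

In the paper's two-layered setting, a pass like this would be applied jointly over sub-activity and activity labels; the sketch shows only the single-chain recurrence.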



Technical Content © IEEE Robotics & Automation Society
