ICRA 2012 Paper Abstract


Paper TuC310.2

Huang, Xiaoxia (Clemson University), Walker, Ian (Clemson University), Birchfield, Stan (Clemson University)

Occlusion-Aware Reconstruction and Manipulation of 3D Articulated Objects

Scheduled for presentation during the Interactive Session "Interactive Session TuC-3" (TuC310), Tuesday, May 15, 2012, 15:30−16:00, Ballroom D

2012 IEEE International Conference on Robotics and Automation, May 14-18, 2012, RiverCentre, Saint Paul, Minnesota, USA


Keywords Computer Vision for Robotics and Automation, Visual Learning, Grasping


We present a method to recover complete 3D models of articulated objects. Structure-from-motion techniques are used to capture 3D point cloud models of the object in two different configurations. A novel combination of Procrustes analysis and RANSAC facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. With the resulting articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom. Because the models capture all sides of the object, they are occlusion-aware, enabling the robotic system to plan paths to parts of the object that are not visible in the current view. Our algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of objects with both revolute and prismatic joints.
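The core geometric idea, combining Procrustes (Kabsch) alignment with RANSAC to estimate the rigid motion of a part between two configurations and then classify the joint, can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it assumes known point correspondences between the two point clouds, and the function names, tolerances, and the trace-based joint test are choices made here for clarity.

```python
import numpy as np

def kabsch(P, Q):
    """Procrustes/Kabsch: best-fit rotation R, translation t with Q ~ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def ransac_rigid(P, Q, iters=200, tol=0.01, seed=0):
    """RANSAC over minimal 3-point samples to find the dominant rigid motion P -> Q,
    rejecting points that belong to other parts of the articulated object."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    R, t = kabsch(P[best], Q[best])           # refit on the full inlier set
    return R, t, best

def classify_joint(R, t, angle_tol=1e-3):
    """Revolute if the relative motion contains significant rotation, else prismatic."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return "revolute" if angle > angle_tol else "prismatic"
```

For a revolute joint the recovered rotation axis (the eigenvector of R with eigenvalue 1) gives the joint axis direction; for a prismatic joint the translation t gives it directly.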



Technical Content © IEEE Robotics & Automation Society