ICRA 2011 Paper Abstract


Paper TuP204.3

De Schutter, Joris (Katholieke Universiteit Leuven), Di Lello, Enrico (K.U. Leuven), De Schutter, Jochem F.M. (Katholieke Universiteit Leuven), Matthysen, Roel (Katholieke Universiteit Leuven), Benoit, Tuur (Katholieke Universiteit Leuven), De Laet, Tinne (Katholieke Universiteit Leuven)

Recognition of 6 DOF Rigid Body Motion Trajectories Using a Coordinate-Free Representation

Scheduled for presentation during the Regular Sessions "Recognition I" (TuP204), Tuesday, May 10, 2011, 15:55−16:10, Room 3E

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China

This information is tentative and subject to change. Compiled on April 2, 2020

Keywords: Recognition, Sensor Fusion


This paper presents an approach to recognizing 6 DOF rigid body motion trajectories (3D translation + rotation), such as the 6 DOF motion trajectory of an object manipulated by a human. As a first step in the recognition process, measured 3D position trajectories of arbitrary, uncalibrated points attached to the rigid body are transformed into an invariant, coordinate-free representation of the rigid body motion trajectory. This invariant representation is independent of the reference frame in which the motion is observed, the chosen marker positions, the linear scale (magnitude) of the motion, the time scale, and the velocity profile along the trajectory. Two classification algorithms that take the invariant representation as input are developed and tested experimentally: one based on Dynamic Time Warping (DTW) and one based on Hidden Markov Models (HMMs). Both approaches yield high recognition rates (up to 95% and 91%, respectively). The advantage of the invariant approach is that motion trajectories observed in different contexts (with different reference frames, marker positions, time scales, linear scales, and velocity profiles) can be compared and averaged. This allows models to be built from multiple demonstrations observed in different contexts, and those models can then be used to recognize similar motion trajectories in yet other contexts.
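To illustrate the DTW-based matching step described above, here is a minimal sketch in Python. The function name, the 1-D toy "signatures", and the nearest-template classification are assumptions for illustration only: the paper's actual invariant representation is multi-dimensional and is computed from the rigid body motion, not shown here. DTW is used because it compares sequences of different lengths and velocity profiles, which is exactly the time-scale invariance the abstract emphasizes.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D feature sequences.

    a, b: arrays of (hypothetical) invariant samples; their lengths may
    differ, which is why DTW rather than a pointwise metric is needed.
    """
    n, m = len(a), len(b)
    # D[i, j] = cost of best alignment of a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # skip a sample of a
                                 D[i, j - 1],      # skip a sample of b
                                 D[i - 1, j - 1])  # match both samples
    return D[n, m]

# Toy demo: the same "motion signature" executed slowly (50 samples)
# and quickly (30 samples) aligns well under DTW, while a different
# signature of the same length does not.
slow = np.sin(2 * np.pi * np.linspace(0, 1, 50))
fast = np.sin(2 * np.pi * np.linspace(0, 1, 30))
other = np.cos(2 * np.pi * np.linspace(0, 1, 30))

templates = {"signature_A": slow, "signature_B": other}
query = fast
label = min(templates, key=lambda k: dtw_distance(query, templates[k]))
print(label)  # the time-warped query matches signature_A
```

A recognition system along these lines would keep one (possibly averaged) invariant template per motion class and assign a query trajectory to the class with the smallest DTW distance; the HMM variant instead scores the query's likelihood under a per-class model.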



Technical Content © IEEE Robotics & Automation Society
