ICRA 2011 Paper Abstract


Paper TuP201.2

Miyata, Natsuki (Inst. of Advanced Industrial Sci & Tech.); Motoki, Yuichi (Yokohama National University); Shimizu, Yuki (Yokohama National University); Maeda, Yusuke (Yokohama National University)

Individual Hand Model to Reconstruct Behavior from Motion Capture Data

Scheduled for presentation during the Regular Session "Human and Multi-Robot Interaction" (TuP201), Tuesday, May 10, 2011, 15:40–15:55, Room 3B

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China


Keywords: Gesture, Posture and Facial Expressions

Abstract

This paper proposes a method to build an individual hand model, consisting of a skin surface and an internal link model, that can be used to reconstruct hand behavior from motion capture (MoCap) data. To reduce the extra time and effort required of each subject, our system builds the model from a single static posture: a palmar-side photograph and marker positions captured simultaneously by MoCap. From this modeling scan, several hand dimensions and the marker positions are obtained. Joint centers are estimated by a regression analysis relating joint centers, marker positions, and hand dimensions, derived from magnetic resonance (MR) images of eight subjects. The skin surface is built by scaling a generic hand model so that it satisfies the measured dimensions. The proposed system is validated through an experiment building the hand models of four subjects.
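As a rough illustration of the two modeling steps named in the abstract, the Python sketch below (not the authors' code) fits a linear regression from per-subject features (marker positions and hand dimensions) to joint-center coordinates, and scales a generic hand mesh per axis to match measured dimensions. All function names, array shapes, and the synthetic data are hypothetical; the paper's actual regression variables come from the MR images of eight subjects.

```python
import numpy as np

def fit_joint_center_regression(features, joint_centers):
    """Fit a linear map from per-subject features to joint-center coordinates.

    features:      (n_subjects, n_features) marker positions + hand dimensions
    joint_centers: (n_subjects, 3 * n_joints) reference centers (e.g., from MR images)
    Returns a least-squares coefficient matrix including a bias term.
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # append bias column
    coeffs, *_ = np.linalg.lstsq(X, joint_centers, rcond=None)
    return coeffs

def predict_joint_centers(coeffs, features):
    """Predict joint centers for a new subject from its measured features."""
    x = np.append(features, 1.0)            # bias term
    return (x @ coeffs).reshape(-1, 3)      # (n_joints, 3)

def scale_generic_hand(vertices, generic_dims, measured_dims):
    """Scale a generic hand mesh so its key dimensions match the subject's.

    Assumes three axis-aligned dimensions (e.g., length, width, thickness);
    the paper's scaling scheme may differ.
    """
    return vertices * (measured_dims / generic_dims)

# Synthetic example: 8 subjects (as in the paper), 5 features, 15 joints.
rng = np.random.default_rng(0)
F = rng.normal(size=(8, 5))
J = rng.normal(size=(8, 15 * 3))
coeffs = fit_joint_center_regression(F, J)
print(predict_joint_centers(coeffs, rng.normal(size=5)).shape)  # (15, 3)
```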

