ICRA 2011 Paper Abstract


Paper TuP1-InteracInterac.1

Dai, Jingwen (The Chinese University of Hong Kong), Chung, Ronald (The Chinese University of Hong Kong)

Head Pose Estimation by Imperceptible Structured Light Sensing

Scheduled for presentation during the Poster Sessions "Interactive Session II: Systems, Control and Automation" (TuP1-InteracInterac), Tuesday, May 10, 2011, 13:40−14:55, Hall

2011 IEEE International Conference on Robotics and Automation, May 9-13, 2011, Shanghai International Conference Center, Shanghai, China

This information is tentative and subject to change. Compiled on March 30, 2020

Keywords: Gesture, Posture and Facial Expressions; Computer Vision for Robotics and Automation


We describe a method for estimating head pose in space by imperceptible structured light sensing. First, through an elaborate pattern-projection strategy and camera–projector synchronization, pattern-illuminated images of the subject and the corresponding scene-texture image are captured under imperceptible patterned illumination. The 3D positions of the key facial feature points are then derived by combining (1) the 2D facial feature points in the scene-texture image, localized by an Active Appearance Model (AAM), and (2) the point cloud generated by structured light sensing. Finally, the head orientation and translation are estimated by SVD of a correlation matrix built from the corresponding 3D feature-point pairs across image frames. Extensive experiments show that the proposed method is effective, accurate, and fast in 6-DOF head pose estimation, making it suitable for real-time applications.
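The final step of the abstract, recovering rotation and translation by SVD of a correlation matrix of corresponding 3D point pairs, is the classic rigid-alignment (Kabsch/Umeyama-style) procedure. A minimal sketch, with hypothetical point data and no claim to match the paper's exact implementation:

```python
import numpy as np

def estimate_pose(src, dst):
    """Estimate R, t such that dst ≈ R @ src_i + t for each point pair,
    via SVD of the 3x3 correlation matrix of the centered point sets.
    src, dst: (N, 3) arrays of corresponding 3D feature points."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # 3x3 correlation matrix of the centered correspondences
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With at least three non-collinear correspondences this yields the least-squares rigid transform between frames, i.e. the 6-DOF head motion.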



Technical Content © IEEE Robotics & Automation Society
