Robust head pose estimation by fusing time-of-flight depth and color

Amit Bleiweiss*, Michael Werman

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

14 Scopus citations

Abstract

We present a new solution for real-time head pose estimation. The key to our method is a model-based approach that fuses color and time-of-flight depth data. Our method has several advantages over existing head-pose estimation solutions: it requires no initial setup, pre-built model, or training data, and the additional depth data leads to a robust solution while maintaining real-time performance. The method outperforms the state of the art in several experiments under extreme conditions such as sudden changes in lighting, large rotations, and fast motion.
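The abstract gives only the high-level idea, so the following is a minimal sketch, not the paper's algorithm, of one way color and depth can be fused for head pose: 2D features tracked in the color image are back-projected to 3D using the time-of-flight depth map, and the frame-to-frame rigid head motion is recovered with an SVD-based (Kabsch) alignment. The camera intrinsics (fx, fy, cx, cy) and the upstream feature tracker are assumptions, not details taken from the paper.

    # Hedged sketch: fuse color-image feature tracks with ToF depth
    # to estimate rigid head motion between two frames.
    import numpy as np

    def backproject(points_2d, depth, fx, fy, cx, cy):
        """Lift pixel coordinates (u, v) to 3D camera coordinates using the depth map."""
        pts_3d = []
        for u, v in points_2d:
            z = depth[int(v), int(u)]        # ToF depth (meters) at the tracked pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            pts_3d.append((x, y, z))
        return np.asarray(pts_3d)

    def rigid_transform(src, dst):
        """Kabsch/SVD alignment: find R, t such that dst ~= R @ src + t."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Usage (hypothetical inputs): feats_prev/feats_curr are matched 2D features
    # from the color stream (e.g., optical-flow tracks on the face region);
    # depth_prev/depth_curr are the registered ToF frames. The returned R, t
    # describe the head's rotation and translation between the two frames.
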

Original language: English
Title of host publication: 2010 IEEE International Workshop on Multimedia Signal Processing, MMSP2010
Pages: 116-121
Number of pages: 6
DOIs
State: Published - 2010
Event: 2010 IEEE International Workshop on Multimedia Signal Processing, MMSP2010 - Saint Malo, France
Duration: 4 Oct 2010 - 6 Oct 2010

Publication series

Name: 2010 IEEE International Workshop on Multimedia Signal Processing, MMSP2010

Conference

Conference: 2010 IEEE International Workshop on Multimedia Signal Processing, MMSP2010
Country/Territory: France
City: Saint Malo
Period: 4/10/10 - 6/10/10
