Synthesizing realistic facial expressions from photographs

Frederic Pighin*, Jamie Hecker, Dani Lischinski, Richard Szeliski, David H. Salesin

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

539 Scopus citations


We present new techniques for creating photorealistic textured 3D facial models from photographs of a human subject, and for creating smooth transitions between different facial expressions by morphing between these different models. Starting from several uncalibrated views of a human subject, we employ a user-assisted technique to recover the camera poses corresponding to the views as well as the 3D coordinates of a sparse set of chosen locations on the subject's face. A scattered data interpolation technique is then used to deform a generic face mesh to fit the particular geometry of the subject's face. Having recovered the camera poses and the facial geometry, we extract from the input images one or more texture maps for the model. This process is repeated for several facial expressions of a particular subject. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models, while at the same time blending the corresponding textures. Using our technique, we have been able to generate highly realistic face models and natural-looking animations.
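The transition step the abstract describes, 3D shape morphing combined with texture blending, can be sketched as a simple linear interpolation when the expression models share the same mesh topology, as they do here since each is a deformation of one generic mesh. The following is a minimal illustrative sketch, not the authors' implementation; the function name, array shapes, and toy data are assumptions.

```python
import numpy as np

def morph_face(v0, v1, tex0, tex1, t):
    """Blend two face models at parameter t in [0, 1].

    v0, v1   : (N, 3) vertex arrays of two expression meshes
               with identical topology (hypothetical shapes).
    tex0, tex1 : texture maps of equal size.
    Returns the interpolated vertices and cross-dissolved texture.
    """
    vertices = (1.0 - t) * v0 + t * v1    # 3D shape morph (per-vertex lerp)
    texture = (1.0 - t) * tex0 + t * tex1  # texture cross-dissolve
    return vertices, texture

# Toy example: a 3-vertex "mesh" in two expressions and 2x2 grayscale textures.
v_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
v_smile = np.array([[0.0, 0.2, 0.0], [1.0, 0.2, 0.0], [0.0, 1.2, 0.0]])
tex_neutral = np.zeros((2, 2))
tex_smile = np.ones((2, 2))

verts, tex = morph_face(v_neutral, v_smile, tex_neutral, tex_smile, 0.5)
```

In practice the paper's animations would sample such an interpolation over many values of t to produce a smooth transition; the sketch shows only a single in-between frame.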

Original language: American English
Number of pages: 10
State: Published - 1998
Externally published: Yes
Event: Proceedings of the 1998 Annual Conference on Computer Graphics, SIGGRAPH - Orlando, FL, USA
Duration: 19 Jul 1998 - 24 Jul 1998


