Synthesizing realistic facial expressions from photographs

Frédéric Pighin, Jamie Hecker, Dani Lischinski, Richard Szeliski, David H. Salesin

Research output: Contribution to conference › Paper › peer-review

108 Scopus citations


We present new techniques for creating photorealistic textured 3D facial models from photographs of a human subject, and for creating smooth transitions between different facial expressions by morphing between these different models. Starting from several uncalibrated views of a human subject, we employ a user-assisted technique to recover the camera poses corresponding to the views as well as the 3D coordinates of a sparse set of chosen locations on the subject's face. A scattered data interpolation technique is then used to deform a generic face mesh to fit the particular geometry of the subject's face. Having recovered the camera poses and the facial geometry, we extract from the input images one or more texture maps for the model. This process is repeated for several facial expressions of a particular subject. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models, while at the same time blending the corresponding textures. Using our technique, we have been able to generate highly realistic face models and natural-looking animations.
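Two of the steps the abstract outlines — deforming a generic mesh to fit sparse recovered 3D points via scattered data interpolation, and morphing between expression models by blending shape and texture — can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the paper's exact formulation: the linear radial kernel, the small regularization term, and all function names are assumptions introduced here.

```python
import numpy as np

def rbf_deform(src_pts, dst_pts, verts):
    """Deform mesh vertices so that the sparse points src_pts map to dst_pts,
    using radial basis interpolation with the linear kernel phi(r) = r
    (an assumed kernel choice; other kernels are possible)."""
    disp = dst_pts - src_pts                                              # (M, 3) target displacements
    d = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)      # (M, M) kernel matrix
    w = np.linalg.solve(d + 1e-9 * np.eye(len(src_pts)), disp)            # per-point weights
    dv = np.linalg.norm(verts[:, None] - src_pts[None, :], axis=-1)       # (N, M) vertex-to-point kernel
    return verts + dv @ w                                                 # smoothly interpolated offsets

def morph_faces(verts_a, verts_b, tex_a, tex_b, t):
    """Blend two face models that share mesh topology: linear 3D shape
    morph of the vertices plus a cross-dissolve of the texture maps.
    t in [0, 1]; 0 gives expression A, 1 gives expression B."""
    verts = (1.0 - t) * verts_a + t * verts_b
    tex = (1.0 - t) * tex_a + t * tex_b
    return verts, tex
```

With `rbf_deform`, evaluating at the constraint points themselves reproduces the target positions (up to the tiny regularization), while all other vertices move smoothly; `morph_faces` at `t = 0.5` yields the midpoint expression.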

Original language: American English
Number of pages: 10
State: Published - 1998
Event: 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1998 - Orlando, FL, United States
Duration: 19 Jul 1998 – 24 Jul 1998


Conference: 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1998
Country/Territory: United States
City: Orlando, FL


  • Facial animation
  • Facial expression generation
  • Facial modeling
  • Morphing
  • Photogrammetry
  • View-dependent texture-mapping


