Image-based view synthesis by combining trilinear tensors and learning techniques

S. Avidan*, T. Evgeniou, A. Shashua, T. Poggio

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


Abstract

We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is its ability to synthesize images whose viewing position lies well outside the viewing cone of the example images ('view extrapolation'), without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of 'trilinear tensors' that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, demonstrating that an object can be represented by a relatively small number of model images - enabling cheap and fast viewers that run on standard hardware.
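The warping described in the abstract is driven by trilinear (trifocal) tensors. As a rough illustration of the underlying transfer mechanism only - not the authors' full chained-tensor and interpolation pipeline - the sketch below builds the standard trifocal tensor for a canonical camera triple P1 = [I | 0], P2, P3 and reprojects a matched point pair from two views into a third. All camera matrices and point values here are hypothetical toy data chosen for clarity.

```python
import numpy as np

def trifocal_tensor(P2, P3):
    """Trifocal tensor for the canonical camera triple P1 = [I | 0], P2, P3.

    Standard construction T_i = a_i b4^T - a4 b_i^T, where a_i and b_i
    are the columns of P2 and P3 (Hartley-Zisserman notation)."""
    T = np.zeros((3, 3, 3))
    for i in range(3):
        T[i] = np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
    return T

def transfer_point(T, p1, p2):
    """Point transfer x''^k = x^i l'_j T_i^{jk}: contract the tensor with
    the point p1 in view 1 and a line l' through the matching point p2 in
    view 2. (The chosen line must not be an epipolar line; here we use
    the 'vertical' line through p2, which works for this toy setup.)"""
    x, y, w = p2
    lp = np.array([-w, 0.0, x])                # a line passing through p2
    M = np.tensordot(p1, T, axes=([0], [0]))   # M[j, k] = p1^i T_i^{jk}
    p3 = lp @ M                                # homogeneous novel-view point
    return p3 / p3[2]

# Hypothetical setup: three translated identity cameras and one 3D point.
P2 = np.hstack([np.eye(3), [[1.0], [0.0], [0.0]]])
P3 = np.hstack([np.eye(3), [[0.0], [1.0], [0.0]]])
X = np.array([0.0, 0.0, 2.0, 1.0])
p1 = X[:3]                      # projection through P1 = [I | 0]
p2 = P2 @ X
T = trifocal_tensor(P2, P3)
p3 = transfer_point(T, p1, p2)  # matches P3 @ X up to scale
```

In the paper's setting the third camera is virtual: rather than being derived from a real third view, the tensor for the novel viewpoint is synthesized, which is what makes view extrapolation possible without 3D reconstruction.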

Original language: English
Pages: 103-110
Number of pages: 8
DOIs
State: Published - 1997
Event: Proceedings of the 1997 ACM Symposium on Virtual Reality Software and Technology (VRST) - Lausanne, Switzerland
Duration: 15 Sep 1997 - 17 Sep 1997

