Reconstruction of high resolution 3D visual information

M. Berthod*, H. Shekarforoush, M. Werman, J. Zerubia

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Scopus citations

Abstract

Given a set of low-resolution camera images, it is possible to reconstruct high-resolution luminance and depth information, especially if the relative displacements of the image frames are known. We have proposed iterative algorithms for recovering high-resolution albedo and depth maps that require no a priori knowledge of the scene and therefore do not depend on other methods for boundary and initial conditions. The problem of surface reconstruction has been formulated as one of Expectation Maximization (EM) and tackled in a probabilistic framework using Markov Random Fields (MRF) [1][3]. As for the depth map, our method directly recovers surface heights without referring to surface orientations, while increasing the resolution by camera jittering [2]. Conventional statistical models have been coupled with geometrical techniques to construct a general model of the world and the imaging process.
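The core idea of reconstructing a high-resolution image from shifted low-resolution frames with known displacements can be illustrated with a generic iterative back-projection scheme. This is a minimal sketch of that general technique, not the authors' EM/MRF formulation; the block-averaging imaging model, integer pixel shifts, and step size are illustrative assumptions.

```python
import numpy as np

def super_resolve(low_res_frames, shifts, scale, n_iter=200, step=0.5):
    """Iterative back-projection super-resolution (illustrative sketch).

    low_res_frames: 2D arrays, each a shifted then block-averaged view.
    shifts: (dy, dx) integer shifts in high-resolution pixels, one per frame.
    scale: integer factor relating high-res to low-res grid size.
    """
    h, w = low_res_frames[0].shape
    # Initialize with a pixel-replicated upsampling of the first frame.
    hr = np.kron(low_res_frames[0], np.ones((scale, scale)))
    for _ in range(n_iter):
        grad = np.zeros_like(hr)
        for frame, (dy, dx) in zip(low_res_frames, shifts):
            # Simulate imaging: shift, then average scale x scale blocks.
            shifted = np.roll(hr, (-dy, -dx), axis=(0, 1))
            simulated = shifted.reshape(h, scale, w, scale).mean(axis=(1, 3))
            # Back-project the residual onto the high-resolution grid
            # (adjoint of the block-averaging operator).
            err = frame - simulated
            up = np.kron(err, np.ones((scale, scale))) / scale**2
            grad += np.roll(up, (dy, dx), axis=(0, 1))
        hr += step * grad
    return hr
```

Each iteration compares the observed frames against frames simulated from the current high-resolution estimate and distributes the residuals back, so sub-pixel displacements between frames are what supply the extra resolution.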

Original language: English
Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher: Publ by IEEE
Pages: 654-657
Number of pages: 4
ISBN (Print): 0818658274, 9780818658273
DOIs
State: Published - 1994
Externally published: Yes
Event: Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Seattle, WA, USA
Duration: 21 Jun 1994 - 23 Jun 1994

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Conference

Conference: Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
City: Seattle, WA, USA
Period: 21/06/94 - 23/06/94
