Constraint-fusion for interpretation of articulated objects

Yacov Hel-Or*, Michael Werman

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

11 Scopus citations

Abstract

This paper presents a method for the interpretation of modeled objects that is general enough to cover articulated and other types of constrained models. The flexibility between the components of the model is expressed as spatial constraints, which are fused into the pose estimate during the interpretation process. The constraint fusion assists in obtaining the correct interpretation and in reducing the search over possible correspondences. The proposed method can handle any constraint (including inequalities) between any number of different components of the model. The framework is based on Kalman filtering.
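The idea of fusing a spatial constraint into a Kalman-filter pose estimate can be illustrated with a common technique: treating the constraint as a pseudo-measurement with very small noise. The sketch below is not the authors' implementation; the state, link-length constraint, and `eps` noise level are all illustrative assumptions. The state holds two 2-D joint positions `[x1, y1, x2, y2]`, and a hypothetical rigid-link constraint `||p1 - p2|| = L` is linearized around the current estimate (EKF-style) and fused via a standard Kalman update.

```python
# Sketch only: constraint fusion as a pseudo-measurement in a Kalman filter.
# The two-joint state and the fixed link length L are illustrative assumptions.
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for state x with covariance P."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def fuse_length_constraint(x, P, L, eps=1e-6):
    """Fuse ||p1 - p2|| = L as a pseudo-measurement with tiny noise eps."""
    d = x[:2] - x[2:]                      # p1 - p2
    r = np.linalg.norm(d)
    # Jacobian of h(x) = ||p1 - p2|| with respect to the state (1 x 4).
    J = np.concatenate([d / r, -d / r])[None, :]
    # Linearized pseudo-measurement: observed value L, predicted value r,
    # so the residual z - J x equals L - r at the linearization point.
    z = np.array([L - r + (J @ x).item()])
    return kalman_update(x, P, z, J, np.array([[eps]]))

# Noisy estimate of the two joints; the assumed true link length is 1.0.
x = np.array([0.1, 0.0, 1.3, 0.1])         # current length is about 1.204
P = np.eye(4) * 0.05                        # position uncertainty
x, P = fuse_length_constraint(x, P, L=1.0)
print(np.linalg.norm(x[:2] - x[2:]))        # ≈ 1.0 after fusion
```

Because the pseudo-measurement noise `eps` is small but nonzero, the constraint is enforced softly; an inequality constraint can be handled the same way by fusing it only when the current estimate violates it.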

Original language: English
Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher: Publ by IEEE
Pages: 39-45
Number of pages: 7
ISBN (Print): 0818658274, 9780818658273
State: Published - 1994
Externally published: Yes
Event: Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Seattle, WA, USA
Duration: 21 Jun 1994 - 23 Jun 1994

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Conference

Conference: Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
City: Seattle, WA, USA
Period: 21/06/94 - 23/06/94
