Constraint fusion for recognition and localization of articulated objects

Yacov Hel-Or*, Michael Werman

*Corresponding author for this work

Research output: Contribution to journal · Article · peer-review



This paper presents a method for localization and interpretation of modeled objects that is general enough to cover articulated and other types of constrained models. The flexibility between the components of the model is expressed as spatial constraints that are fused into the pose estimation during the interpretation process. The constraint fusion assists in obtaining a precise and stable pose of each of the object's components and in finding the correct interpretation. The proposed method can handle any constraint (including inequalities) between any number of different components of the model. The framework is based on Kalman filtering.
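As a rough illustration of the general idea of constraint fusion within a Kalman-filter framework, the sketch below treats an articulation constraint between two components as a noiseless pseudo-measurement and applies a standard Kalman update. This is a minimal, hypothetical example of the technique, not the paper's actual algorithm: the state, constraint, and all numbers are assumptions, and only a linear equality constraint is shown (the paper's method also covers inequalities).

```python
import numpy as np

def fuse_constraint(x, P, H, z, R=None):
    """Fuse a linear constraint H @ x = z into the estimate (x, P)
    by treating it as a Kalman pseudo-measurement with noise R."""
    H = np.atleast_2d(H)
    z = np.atleast_1d(z)
    if R is None:
        # Hard constraint: zero measurement noise.
        R = np.zeros((len(z), len(z)))
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new

# Hypothetical setup: 1-D positions of two components of an
# articulated object, estimated independently from image data.
x = np.array([0.0, 3.0])       # prior estimates of components A and B
P = np.diag([1.0, 4.0])        # prior covariance (B is less certain)

# Articulation constraint (assumed rigid link): x[0] - x[1] = -2.
H = np.array([[1.0, -1.0]])
z = np.array([-2.0])

x_post, P_post = fuse_constraint(x, P, H, z)
# The fused estimate satisfies the constraint exactly, and the
# residual uncertainty along the constraint direction drops to zero.
```

Note how the less certain component (larger prior variance) is pulled further toward satisfying the constraint, which is the intuition behind using the fusion to stabilize the pose of each component.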

Original language: American English
Pages (from-to): 5-28
Number of pages: 24
Journal: International Journal of Computer Vision
Issue number: 1
State: Published - 1996


