Unified mixture framework for motion segmentation: incorporating spatial coherence and estimating the number of models

Yair Weiss*, Edward H. Adelson

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Peer-review

172 Scopus citations

Abstract

Describing a video sequence in terms of a small number of coherently moving segments is useful for tasks ranging from video compression to event perception. A promising approach is to view the motion segmentation problem in a mixture estimation framework. However, existing formulations generally use only the motion data and thus fail to make use of static cues when segmenting the sequence. Furthermore, the number of models is either specified in advance or estimated outside the mixture model framework. In this work we address both of these issues. We show how to add spatial constraints to the mixture formulations and present a variant of the EM algorithm that makes use of both the form and the motion constraints. Moreover, this algorithm estimates the number of segments given knowledge about the level of model failure expected in the sequence. The algorithm's performance is illustrated on synthetic and real image sequences.
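To make the mixture-estimation idea concrete, here is a minimal sketch of EM for a mixture of translational motion models on per-pixel 1-D velocities. This is an illustration only, not the paper's algorithm: the function name, the fixed noise scale `sigma`, and the min/max initialization are assumptions, and the sketch omits the paper's contributions (the spatial coherence constraints and the estimation of the number of models K).

```python
import numpy as np

def em_motion_mixture(velocities, K=2, sigma=0.5, n_iter=50):
    """Basic EM for a K-component mixture of translational motions.

    Each component k has a single velocity v[k]; per-pixel velocity
    measurements are modeled as v[k] plus Gaussian noise of scale sigma.
    (Hypothetical illustration; the paper adds spatial priors and
    estimates K from the expected level of model failure.)
    """
    # Initialize the component velocities at the data extremes so the
    # two models start well separated (an assumption of this sketch).
    v = np.array([velocities.min(), velocities.max()], dtype=float)[:K]
    pi = np.full(K, 1.0 / K)  # mixing weights

    for _ in range(n_iter):
        # E-step: soft assignment (responsibility) of model k per pixel.
        resid = velocities[:, None] - v[None, :]
        w = pi * np.exp(-0.5 * (resid / sigma) ** 2)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of each model's velocity.
        v = (w * velocities[:, None]).sum(axis=0) / w.sum(axis=0)
        pi = w.mean(axis=0)
    return v, w

# Usage: synthetic velocities from two motion layers, near 0 and near 3.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 0.3, 200), rng.normal(3.0, 0.3, 200)])
v, w = em_motion_mixture(data, K=2)
```

With well-separated layers, the soft assignments `w` segment the pixels into two coherently moving groups and `v` recovers the two layer velocities.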

Original language: English
Pages (from-to): 321-326
Number of pages: 6
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 1996
Externally published: Yes
Event: Proceedings of the 1996 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - San Francisco, CA, USA
Duration: 18 Jun 1996 - 20 Jun 1996
