Learning to combine bottom-up and top-down segmentation

Anat Levin*, Yair Weiss

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

63 Scopus citations


Bottom-up segmentation based only on low-level cues is a notoriously difficult problem. This difficulty has led to recent top-down segmentation algorithms that are based on class-specific image information. Despite the success of top-down algorithms, they often give coarse segmentations that can be significantly refined using low-level cues. This raises the question of how to combine both top-down and bottom-up cues in a principled manner. In this paper we approach this problem using supervised learning. Given a training set of ground-truth segmentations, we train a fragment-based segmentation algorithm that takes into account both bottom-up and top-down cues simultaneously, in contrast to most existing algorithms, which train top-down and bottom-up modules separately. We formulate the problem in the framework of Conditional Random Fields (CRF) and derive a feature-induction algorithm for the CRF, which allows us to efficiently search over thousands of candidate fragments. Whereas pure top-down algorithms often require hundreds of fragments, our simultaneous learning procedure yields algorithms with a handful of fragments that are combined with low-level cues to efficiently compute high-quality segmentations.
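The two ingredients described above can be sketched in a few lines: a CRF-style energy whose unary term sums bottom-up evidence with weighted top-down fragment votes, plus a pairwise smoothness term, and a greedy selection loop standing in for feature induction over candidate fragments. This is an illustrative toy, not the authors' implementation; all function names, weights, and the greedy scoring rule are assumptions for the sketch.

```python
# Toy sketch (not the paper's implementation) of a CRF energy combining
# bottom-up and top-down cues, with a greedy fragment-selection loop as a
# stand-in for CRF feature induction. All names and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def crf_energy(labels, low_level, fragment_maps, w_frag, w_smooth=1.0):
    """Energy of a binary segmentation on a 2-D grid.

    labels        : (H, W) array in {0, 1}
    low_level     : (H, W) bottom-up foreground log-odds (e.g. from color/edges)
    fragment_maps : list of (H, W) top-down fragment vote maps
    w_frag        : per-fragment weights
    """
    # Unary term: bottom-up evidence plus weighted top-down fragment votes.
    unary = low_level.copy()
    for w, f in zip(w_frag, fragment_maps):
        unary += w * f
    data_term = -np.sum(unary * labels)  # reward labeling pixels the cues favor
    # Pairwise Potts term: penalize label changes between grid neighbors.
    smooth = (np.sum(labels[:, 1:] != labels[:, :-1])
              + np.sum(labels[1:, :] != labels[:-1, :]))
    return data_term + w_smooth * smooth

def greedy_induction(ground_truth, candidates, n_select=2):
    """Greedily pick the fragments whose vote maps best match the ground
    truth -- a crude stand-in for the paper's CRF feature-induction step."""
    target = 2.0 * ground_truth - 1.0  # map {0,1} labels to {-1,+1}
    chosen, remaining = [], list(range(len(candidates)))
    for _ in range(min(n_select, len(remaining))):
        gains = [np.sum(target * candidates[i]) for i in remaining]
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With strong cues aligned to a square foreground, the ground-truth labeling attains lower energy than the empty labeling, and the greedy loop ranks a well-aligned fragment above an anti-aligned or noisy one; in the full method the selected fragments' weights would be fit by CRF training rather than this alignment score.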

Original language: American English
Pages (from-to): 105-118
Number of pages: 14
Journal: International Journal of Computer Vision
Issue number: 1
State: Published - Jan 2009


  • Image segmentation
  • Object detection


