Inducing semantic segmentation from an example

Yaar Schnitman*, Yaron Caspi, Daniel Cohen-Or, Dani Lischinski

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review


Abstract

Segmenting an image into semantically meaningful parts is a fundamental and challenging task in computer vision. Automatic methods are able to segment an image into coherent regions, but such regions generally do not correspond to complete meaningful parts. In this paper, we show that even a single training example can greatly facilitate the induction of a semantically meaningful segmentation on novel images within the same domain: images depicting the same, or similar, objects in a similar setting. Our approach constructs a non-parametric representation of the example segmentation by selecting patch-based representatives. This allows us to represent complex semantic regions containing a large variety of colors and textures. Given an input image, we first partition it into small homogeneous fragments, and the possible labelings of each fragment are assessed using a robust voting procedure. Graph-cuts optimization is then used to label each fragment in a globally optimal manner.
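
The abstract outlines a three-stage pipeline: build a patch-based, non-parametric model of the example segmentation, oversegment a novel image into small homogeneous fragments, and assign each fragment a label by voting followed by a global graph-cut optimization. The sketch below (not the authors' code) illustrates the general idea under stated assumptions: it uses NumPy and scikit-image, SLIC superpixels stand in for the paper's homogeneous fragments, and a per-fragment majority vote over nearest-patch matches replaces the robust voting and graph-cut steps; all function names are hypothetical.

```python
# Hypothetical sketch of the example-based labeling pipeline described in the abstract.
# Assumes the example image and the novel image are RGB arrays with the same channel count,
# and that example_labels is an integer label map of the example segmentation.
import numpy as np
from skimage.segmentation import slic

def extract_representatives(example_img, example_labels, patch=7, stride=7):
    """Collect label-pure patches from the example segmentation as representatives."""
    reps, rep_labels = [], []
    h, w = example_labels.shape
    for y in range(0, h - patch, stride):
        for x in range(0, w - patch, stride):
            block = example_labels[y:y + patch, x:x + patch]
            if (block == block[0, 0]).all():  # keep only patches lying in one region
                reps.append(example_img[y:y + patch, x:x + patch].ravel())
                rep_labels.append(int(block[0, 0]))
    return np.asarray(reps, dtype=np.float32), np.asarray(rep_labels)

def label_novel_image(img, reps, rep_labels, n_fragments=300, patch=7):
    """Partition a novel image into fragments and label each fragment by patch voting."""
    fragments = slic(img, n_segments=n_fragments, compactness=10)  # stand-in for homogeneous fragments
    out = np.zeros(fragments.shape, dtype=int)
    h, w = fragments.shape
    for f in np.unique(fragments):
        ys, xs = np.nonzero(fragments == f)
        votes = []
        for y, x in zip(ys[::25], xs[::25]):            # sample a few patches per fragment
            y0, x0 = min(y, h - patch), min(x, w - patch)
            q = img[y0:y0 + patch, x0:x0 + patch].ravel().astype(np.float32)
            nearest = np.argmin(np.linalg.norm(reps - q, axis=1))
            votes.append(rep_labels[nearest])           # nearest representative casts a vote
        # Majority vote stands in for the paper's robust voting plus graph-cut labeling.
        out[fragments == f] = np.bincount(votes).argmax()
    return out
```

In the paper the per-fragment votes only form the data term; a graph-cut optimization then trades them off against label smoothness between neighboring fragments, which the simplified majority vote above omits.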

Original language: English
Pages (from-to): 373-384
Number of pages: 12
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3852 LNCS
State: Published - 2006
Event: 7th Asian Conference on Computer Vision, ACCV 2006 - Hyderabad, India
Duration: 13 Jan 2006 – 16 Jan 2006
