JumpCut: Non-successive mask transfer and interpolation for video cutout

Qingnan Fan, Fan Zhong, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

Research output: Contribution to journal › Article › peer-review

115 Scopus citations

Abstract

We introduce JumpCut, a new mask transfer and interpolation method for interactive video cutout. Given a source frame for which a foreground mask is already available, we compute an estimate of the foreground mask at another, typically non-successive, target frame. Observing that the background and foreground regions typically exhibit different motions, we leverage these differences by computing two separate nearest-neighbor fields (split-NNF) from the target to the source frame. These NNFs are then used to jointly predict a coherent labeling of the pixels in the target frame. The same split-NNF is also used to aid a novel edge classifier in detecting silhouette edges (S-edges) that separate the foreground from the background. A modified level set method is then applied to produce a clean mask, based on the pixel labels and the S-edges computed by the previous two steps. The resulting mask transfer method may also be used for coherently interpolating the foreground masks between two distant source frames. Our results demonstrate that the proposed method is significantly more accurate than the existing state-of-the-art on a wide variety of video sequences. Thus, it reduces the required amount of user effort, and provides a basis for an effective interactive video object cutout tool.
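
The pipeline described above (split-NNF matching, label prediction, S-edge detection, level-set cleanup) can be hard to picture from prose alone. The sketch below illustrates only the core split-NNF idea: matching each target patch separately against foreground and background patches of the source frame, and labeling target pixels by whichever region matches more cheaply. This is not the authors' implementation; the brute-force patch search (the paper uses nearest-neighbor fields), the cost-based per-pixel voting, and all names such as `split_nnf_transfer`, `extract_patches`, and `patch_size` are simplifying assumptions made here for illustration. The coherent joint labeling, the S-edge classifier, and the level-set refinement are omitted.

```python
# Minimal sketch of split-NNF-style mask transfer (illustrative only, not the
# paper's algorithm): brute-force patch matching replaces the NNF computation,
# and the coherence, S-edge, and level-set steps are left out.
import numpy as np

def extract_patches(img, patch_size):
    """Return all (patch_size x patch_size) patches of a 2D image and their center coordinates."""
    h, w = img.shape[:2]
    r = patch_size // 2
    patches, centers = [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            patches.append(img[y - r:y + r + 1, x - r:x + r + 1].ravel())
            centers.append((y, x))
    return np.asarray(patches, dtype=np.float32), centers

def split_nnf_transfer(source, source_mask, target, patch_size=5):
    """Estimate a foreground mask for the target frame.

    Source patches are split by the mask label of their center pixel; each
    target patch is then matched against the foreground set and the background
    set separately, and its center pixel is labeled foreground if the
    foreground match is cheaper (assumes both sets are non-empty).
    """
    src_patches, src_centers = extract_patches(source, patch_size)
    tgt_patches, tgt_centers = extract_patches(target, patch_size)

    src_labels = np.array([source_mask[y, x] for (y, x) in src_centers], dtype=bool)
    fg_patches = src_patches[src_labels]
    bg_patches = src_patches[~src_labels]

    est_mask = np.zeros(target.shape[:2], dtype=bool)
    for patch, (y, x) in zip(tgt_patches, tgt_centers):
        # Nearest-neighbor cost against each region (brute force for clarity).
        fg_cost = np.min(np.sum((fg_patches - patch) ** 2, axis=1))
        bg_cost = np.min(np.sum((bg_patches - patch) ** 2, axis=1))
        est_mask[y, x] = fg_cost < bg_cost
    return est_mask

if __name__ == "__main__":
    # Toy example: a bright square "foreground" that moves between two small frames.
    rng = np.random.default_rng(0)
    source = rng.uniform(0, 0.1, (40, 40)).astype(np.float32)
    target = rng.uniform(0, 0.1, (40, 40)).astype(np.float32)
    source[10:20, 10:20] += 1.0   # foreground in the source frame
    target[18:28, 22:32] += 1.0   # same object, displaced in the target frame

    source_mask = np.zeros((40, 40), dtype=bool)
    source_mask[10:20, 10:20] = True

    estimated = split_nnf_transfer(source, source_mask, target, patch_size=5)
    print("estimated foreground pixels:", int(estimated.sum()))
```

In this toy setup the two regions differ in appearance rather than motion; the abstract's key observation is that foreground and background typically differ in motion as well, which is what makes two separate (split) nearest-neighbor fields informative even across non-successive frames.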

Original language: English
Article number: 195
Journal: ACM Transactions on Graphics
Volume: 34
Issue number: 6
DOIs
State: Published - Nov 2015

Bibliographical note

Publisher Copyright:
© Copyright 2015 ACM.

Keywords

  • Foreground extraction
  • Object cutout
  • Video segmentation
