Reflection separation using guided annotation

Ofer Springer, Yair Weiss

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

11 Scopus citations

Abstract

Photographs taken through a glass surface often contain an approximately linear superposition of reflected and transmitted layers. Decomposing an image into these layers is generally ill-posed, and an additional image prior together with user-provided cues is presently necessary to obtain good results. Current annotation approaches rely on a strong sparsity assumption; for images with significant texture this assumption typically does not hold, rendering the annotation process unviable. In this paper we show that, using a Gaussian Mixture Model patch prior, the correct local decomposition can almost always be found as one of 100 likely modes of the posterior. The user therefore need only choose one of these modes in a sparse set of patches, and the decomposition can then be completed automatically. We demonstrate the performance of our method on synthesized and real reflection images.
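The abstract's core idea can be illustrated with a minimal sketch: score candidate decompositions of a mixed patch m = t + r by their joint log-likelihood under a GMM patch prior, and keep the few most likely modes for the user to choose from. Everything below (the toy training data, patch size, candidate parameterization, and component count) is a placeholder assumption, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy GMM patch prior over flattened 3x3 patches; the paper uses a
# GMM trained on natural image patches, which we stand in for here
# with random data.
train_patches = rng.normal(size=(500, 9))
prior = GaussianMixture(n_components=5, random_state=0).fit(train_patches)

def score_split(mixed, transmitted):
    """Score one candidate decomposition mixed = t + r as
    log p(t) + log p(mixed - t) under the patch prior."""
    reflected = mixed - transmitted
    return (prior.score_samples(transmitted[None])[0]
            + prior.score_samples(reflected[None])[0])

# For a mixed patch, rank a simple one-parameter family of candidate
# splits t = alpha * mixed and keep the top-K modes; in the method the
# user then picks the correct mode in a sparse set of patches.
mixed = train_patches[0] + train_patches[1]
candidates = [alpha * mixed for alpha in np.linspace(0.0, 1.0, 11)]
scores = np.array([score_split(mixed, t) for t in candidates])
top_modes = np.argsort(scores)[::-1][:3]  # indices of 3 most likely splits
```

The one-dimensional `alpha` family is of course far coarser than enumerating posterior modes of a full GMM, but it shows the shape of the selection problem: many locally plausible decompositions, ranked by the prior, with a human resolving the remaining ambiguity.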

Original language: American English
Title of host publication: 2017 IEEE International Conference on Image Processing, ICIP 2017 - Proceedings
Publisher: IEEE Computer Society
Pages: 1192-1196
Number of pages: 5
ISBN (Electronic): 9781509021758
DOIs
State: Published - 2 Jul 2017
Event: 24th IEEE International Conference on Image Processing, ICIP 2017 - Beijing, China
Duration: 17 Sep 2017 - 20 Sep 2017

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2017-September
ISSN (Print): 1522-4880

Conference

Conference: 24th IEEE International Conference on Image Processing, ICIP 2017
Country/Territory: China
City: Beijing
Period: 17/09/17 - 20/09/17

Bibliographical note

Publisher Copyright:
© 2017 IEEE.

Keywords

  • Natural image statistics
  • Reflection separation
