TY - GEN
T1 - Snap image composition
AU - Pritch, Yael
AU - Poleg, Yair
AU - Peleg, Shmuel
PY - 2011
Y1 - 2011
AB - Snap Composition broadens the applicability of interactive image composition. Current tools, like Adobe's Photomerge Group Shot, do an excellent job when the background can be aligned and objects have limited motion. Snap Composition works well even when the input images include different objects and the backgrounds cannot be aligned. The power of Snap Composition comes from the ability to assign to every output pixel a source pixel from any input image, and from any location in that image. An energy value is computed for each such assignment, representing both the user constraints and the quality of composition. Minimization of this energy gives the desired composition. Composition is performed once a user marks objects in the different images and optionally drags them to a new location on the target canvas. The background around the dragged objects, as well as the final locations of the objects themselves, is automatically computed for seamless composition. If the user does not drag the selected objects to a desired place, they automatically snap into a suitable location. A video describing the results can be seen at www.vision.huji.ac.il/shiftmap/SnapVideo.mp4.
UR - http://www.scopus.com/inward/record.url?scp=80054872987&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-24136-9_16
DO - 10.1007/978-3-642-24136-9_16
M3 - Conference contribution
AN - SCOPUS:80054872987
SN - 9783642241352
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 181
EP - 191
BT - Computer Vision/Computer Graphics Collaboration Techniques - 5th International Conference, MIRAGE 2011, Proceedings
T2 - 5th International Conference on Computer Vision/Computer Graphics Collaboration Techniques, MIRAGE 2011
Y2 - 10 October 2011 through 11 October 2011
ER -