Human-assisted motion annotation

Ce Liu*, William T. Freeman, Edward H. Adelson, Yair Weiss

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

170 Scopus citations

Abstract

Obtaining ground-truth motion for arbitrary, real-world video sequences is a challenging but important task for both algorithm evaluation and model design. Existing ground-truth databases are either synthetic, such as the Yosemite sequence, or limited to indoor, experimental setups, such as the database developed in [5]. We propose a human-in-the-loop methodology to create a ground-truth motion database for videos taken with ordinary cameras in both indoor and outdoor scenes, exploiting the fact that human beings are experts at segmenting objects and inspecting the match between two frames. We designed an interactive computer vision system that allows a user to annotate motion efficiently. Our methodology is cross-validated by showing that human-annotated motion is repeatable, consistent across annotators, and close to the ground truth obtained by [5]. Using our system, we collected and annotated 10 indoor and outdoor real-world videos to form a ground-truth motion database. The source code, annotation tool, and database are online for public evaluation and benchmarking.
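Since the database is intended for public evaluation and benchmarking, a common way to score an optical-flow algorithm against annotated ground truth is with average endpoint error (EPE) and average angular error (AE). Below is a minimal sketch assuming flow fields are available as (H, W, 2) NumPy arrays of per-pixel (u, v) displacements; the function names and synthetic inputs are illustrative and not part of the released annotation tool.

```python
import numpy as np

def endpoint_error(flow_est, flow_gt):
    """Average endpoint error (EPE): mean Euclidean distance between
    estimated and ground-truth displacement vectors, per pixel."""
    diff = flow_est - flow_gt
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))

def angular_error(flow_est, flow_gt):
    """Average angular error (AE) in degrees, using the standard
    augmented 3-D vector formulation (u, v, 1)."""
    ue, ve = flow_est[..., 0], flow_est[..., 1]
    ug, vg = flow_gt[..., 0], flow_gt[..., 1]
    num = ue * ug + ve * vg + 1.0
    den = np.sqrt(ue**2 + ve**2 + 1.0) * np.sqrt(ug**2 + vg**2 + 1.0)
    cos_ae = np.clip(num / den, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return float(np.degrees(np.mean(np.arccos(cos_ae))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(240, 320, 2))          # stand-in for an annotated flow field
    est = gt + 0.1 * rng.normal(size=gt.shape)   # stand-in for an algorithm's output
    print(f"EPE: {endpoint_error(est, gt):.3f} px")
    print(f"AE:  {angular_error(est, gt):.2f} deg")
```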

Original language: English
Title of host publication: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
DOIs
State: Published - 2008
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR - Anchorage, AK, United States
Duration: 23 Jun 2008 – 28 Jun 2008

Publication series

Name: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR

Conference

Conference: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Country/Territory: United States
City: Anchorage, AK
Period: 23/06/08 – 28/06/08
