EgoSampling: Wide View Hyperlapse from Egocentric Videos

Tavi Halperin, Yair Poleg, Chetan Arora, Shmuel Peleg

Research output: Contribution to journal › Article › peer-review


Abstract

The possibility of sharing one's point of view makes the use of wearable cameras compelling. These videos are often long and tedious, and suffer from extreme shake because the camera is worn by a moving person. Fast-forwarding (i.e., frame sampling) is a natural choice for quick video browsing. However, it accentuates the shake caused by natural head motion in egocentric video, making the fast-forwarded video useless. We propose EgoSampling, an adaptive frame-sampling scheme that yields stable, fast-forwarded, hyperlapse videos. Adaptive frame sampling is formulated as an energy minimization problem whose optimal solution can be found in polynomial time. We further turn the camera shake from a drawback into a feature, enabling an increase in the field of view of the output video. This is achieved by mosaicing each output frame from several input frames. The proposed technique also enables the generation of a single hyperlapse video from multiple egocentric videos, allowing even faster video consumption.
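
An energy-minimization formulation of frame sampling can be solved as a shortest-path problem over the input frames, which is where the polynomial-time guarantee comes from. Below is a minimal sketch, not the paper's implementation: it assumes a hypothetical per-frame motion-direction estimate (`forward_dirs`) and a toy cost that trades off a stability term against deviation from the desired speed-up, then selects frames by dynamic programming.

```python
# Sketch of adaptive frame sampling as a shortest-path / dynamic-programming
# problem. The cost terms are illustrative assumptions, not the authors' exact
# energy; `forward_dirs` is a hypothetical per-frame 2D motion-direction estimate.

import numpy as np

def egosampling_sketch(forward_dirs, target_skip=8, max_skip=16,
                       w_stability=1.0, w_speed=0.1):
    """Return indices of selected frames minimizing a toy sampling energy."""
    n = len(forward_dirs)
    INF = float("inf")
    cost = [INF] * n          # best accumulated cost to reach frame i
    prev = [-1] * n           # back-pointer for path recovery
    cost[0] = 0.0

    for i in range(n):
        if cost[i] == INF:
            continue
        for j in range(i + 1, min(i + max_skip, n - 1) + 1):
            # Stability term: penalize transitions whose motion deviates
            # from the (assumed) forward direction at frame i.
            stability = np.linalg.norm(forward_dirs[j] - forward_dirs[i])
            # Speed term: penalize deviation from the requested frame skip.
            speed = abs((j - i) - target_skip)
            c = cost[i] + w_stability * stability + w_speed * speed
            if c < cost[j]:
                cost[j] = c
                prev[j] = i

    # Recover the selected frame indices by following back-pointers.
    path, k = [], n - 1
    while k != -1:
        path.append(k)
        k = prev[k]
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dirs = np.cumsum(rng.normal(size=(200, 2)), axis=0)  # synthetic drifting motion
    print(egosampling_sketch(dirs)[:10])
```

Because each frame connects only to the next `max_skip` frames, the graph has O(n · max_skip) edges and the optimum is found in polynomial time.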

Original language: English
Pages (from-to): 1248-1259
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 28
Issue number: 5
DOIs
State: Published - May 2018

Bibliographical note

Publisher Copyright:
© 1991-2012 IEEE.

Keywords

  • Egocentric video
  • fast-forward
  • hyperlapse
  • video stabilization
