Computing the epipolar geometry from feature points between cameras with very different viewpoints is often error-prone, as an object's appearance can vary greatly between images. For such cases, it has been shown that using motion extracted from video achieves much better results than using static images. This paper extends these earlier works that exploit scene dynamics. We propose a new method to compute the epipolar geometry from a video stream, based on the following observation: for a pixel p in image A, all pixels corresponding to p in image B lie on the same epipolar line. Equivalently, the image of the line through camera A's center and p is an epipolar line in B. Therefore, when cameras A and B are synchronized, the momentary images of two objects projecting to the same pixel p in camera A at times t1 and t2 lie on a single epipolar line in camera B. Based on this observation we achieve fast and precise computation of epipolar lines. Calibrating cameras with our method of finding epipolar lines is much faster and more robust than previous methods.
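The abstract's key observation rests on the standard epipolar constraint: if F is the fundamental matrix from image A to image B, then the epipolar line in B for a pixel p in A is l = Fp, and any corresponding point q satisfies qᵀFp = 0. The sketch below illustrates this constraint with NumPy; the matrix F and the pixel coordinates are made up for illustration and are not taken from the paper, which estimates epipolar lines from video rather than from a known F.

```python
import numpy as np

# Illustrative fundamental matrix from image A to image B.
# A valid F has rank 2; this skew-symmetric example satisfies that.
# In practice F would be estimated from correspondences
# (e.g. with the 8-point algorithm), not written down by hand.
F = np.array([[ 0.000, -0.001,  0.02],
              [ 0.001,  0.000, -0.03],
              [-0.020,  0.030,  0.00]])

# A pixel p in image A, in homogeneous coordinates (x, y, 1).
p = np.array([100.0, 50.0, 1.0])

# Epipolar line in image B: l = F p, with line equation
# a*x + b*y + c = 0 for l = (a, b, c).
l = F @ p
a, b, c = l

# Pick any point q on that line and check the epipolar
# constraint q^T F p = 0 (up to floating-point error).
x = 10.0
y = -(a * x + c) / b
q = np.array([x, y, 1.0])
print(abs(q @ F @ p))  # ~0: q lies on the epipolar line of p
```

The paper's observation follows directly: two moving objects seen at pixel p in camera A at times t1 and t2 give two points q1, q2 in camera B, both satisfying qᵀFp = 0, so both lie on the same line l = Fp.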
|Original language||American English|
|Title of host publication||Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||9|
|State||Published - 3 May 2018|
|Event||18th IEEE Winter Conference on Applications of Computer Vision, WACV 2018 - Lake Tahoe, United States|
Duration: 12 Mar 2018 → 15 Mar 2018
|Bibliographical note|
Funding Information:
This research was supported by the Israel Science Foundation and by the Israel Ministry of Science and Technology.
© 2018 IEEE.