
Ego2Top: Matching Viewers in Egocentric and Top-view Videos (ECCV 2016)

 

Introduction

Thanks to the availability and increasing popularity of wearable devices such as GoPro cameras, smartphones, and smart glasses, we now have access to a plethora of videos captured from the first-person perspective. Surveillance cameras and Unmanned Aerial Vehicles (UAVs) also produce tremendous amounts of video recorded from top and oblique viewpoints. Egocentric and surveillance vision have been studied extensively, but separately, in the computer vision community. The relationship between these two domains, however, remains unexplored. In this study, we make a first attempt in this direction by addressing two basic yet challenging questions.


First, having a set of egocentric videos and a top-view video, does the top-view video contain all or some of the egocentric viewers? In other words, have these videos been shot in the same environment at the same time? Second, if so, how can we identify the egocentric viewers in the top-view video?

Our Approach

To answer the first question, we compare the egocentric set with different top-view videos and rank the top-view videos by their likelihood of containing the egocentric viewers. We model each view (egocentric or top) as a graph, extract features for the nodes and edges of the graph, and compare the egocentric and top-view graphs using spectral graph matching. The graph-matching score lets us answer the first question.
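The spectral matching step can be sketched as follows. This is a minimal illustration of the general technique (in the style of Leordeanu and Hebert's spectral matching), not the paper's exact formulation: the node and edge features here are random placeholders, and the similarity functions are illustrative assumptions. An affinity matrix is built over candidate node correspondences, and its principal eigenvalue serves as the graph-matching score used to rank top-view candidates.

```python
import numpy as np

# Hypothetical toy setup: n1 nodes (viewers) in the egocentric graph,
# n2 nodes (people) in the top-view graph. The 3-D feature vectors stand
# in for the per-node descriptors extracted from each video.
rng = np.random.default_rng(0)
n1, n2 = 4, 5
feat1, feat2 = rng.random((n1, 3)), rng.random((n2, 3))

def node_sim(i, a):
    # Similarity of egocentric node i and top-view node a (assumed form).
    return np.exp(-np.linalg.norm(feat1[i] - feat2[a]))

def edge_sim(i, j, a, b):
    # Consistency of edge (i, j) with edge (a, b): similar pairwise
    # structure in both graphs yields a high score (assumed form).
    d1 = np.linalg.norm(feat1[i] - feat1[j])
    d2 = np.linalg.norm(feat2[a] - feat2[b])
    return np.exp(-abs(d1 - d2))

# Affinity matrix M over all candidate correspondences (i, a):
# diagonal entries score node similarity, off-diagonal entries score
# pairwise consistency between correspondences (i, a) and (j, b).
pairs = [(i, a) for i in range(n1) for a in range(n2)]
M = np.zeros((len(pairs), len(pairs)))
for p, (i, a) in enumerate(pairs):
    for q, (j, b) in enumerate(pairs):
        if p == q:
            M[p, q] = node_sim(i, a)
        elif i != j and a != b:
            M[p, q] = edge_sim(i, j, a, b)

# The leading eigenvector gives a soft correspondence between the graphs;
# the leading eigenvalue acts as an overall graph-matching score.
eigvals, eigvecs = np.linalg.eigh(M)
x = np.abs(eigvecs[:, -1])    # soft assignment over candidate pairs
matching_score = eigvals[-1]  # higher = the two graphs match better
```

Ranking a set of top-view videos then amounts to computing `matching_score` for each and sorting.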


To answer the second question, assuming the top-view video contains the egocentric viewers, we perform a hard assignment between the two graphs to find the best viewer-to-viewer correspondence.
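One standard way to discretize soft correspondence scores into a hard one-to-one assignment is the Hungarian algorithm; the sketch below assumes a hypothetical score matrix (in practice it would come from the matching stage) and is an illustration of the general technique rather than the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical correspondence scores between 4 egocentric viewers (rows)
# and 5 people detected in the top-view video (columns).
scores = np.array([
    [0.9, 0.1, 0.2, 0.3, 0.1],
    [0.2, 0.8, 0.1, 0.2, 0.3],
    [0.1, 0.2, 0.7, 0.1, 0.2],
    [0.3, 0.1, 0.2, 0.6, 0.1],
])

# Hungarian algorithm finds the one-to-one assignment maximizing the
# total score (scipy minimizes, so we negate the scores).
rows, cols = linear_sum_assignment(-scores)
assignment = dict(zip(rows.tolist(), cols.tolist()))
# Each egocentric viewer is now mapped to exactly one top-view person;
# here viewer 0 -> person 0, viewer 1 -> person 1, and so on.
```

Unmatched columns (here, the fifth top-view person) correspond to people visible in the top view who are not among the egocentric viewers.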

Downloads

The PDF of the paper can be downloaded here.
The Ego2Top dataset, containing annotated top-view and egocentric videos, can be downloaded here.

Related Publications

Shervin Ardeshir and Ali Borji, “Ego2Top: Matching Viewers in Egocentric and Top-view Videos”, in Proceedings of the European Conference on Computer Vision (ECCV), October 2016 [PDF | Project Page]

Take a look at a few of our other papers related to adapting top-view and ground level information:

Shervin Ardeshir, Kofi Malcolm Collins-Sibley, and Mubarak Shah, “Geo-semantic Segmentation”, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015 [PDF | Project Page]

Shervin Ardeshir, Amir Roshan Zamir, Alejandro Torroella, and Mubarak Shah, “GIS-Assisted Object Detection and Geospatial Localization”, in Proceedings of the European Conference on Computer Vision (ECCV), September 2014 [PDF | Project Page]