Video and Image Retrieval and Analysis Tool (VIRAT) Video Data Set
Sangmin Oh, Anthony Hoogs, Amitha Perera, Naresh Cuntoor, Chia-Chih Chen, Jong Taek Lee, Saurajit Mukherjee, J. K. Aggarwal, Hyungtae Lee, Larry Davis, Eran Swears, Xiaoyang Wang, Qiang Ji, Kishore Reddy, Mubarak Shah, Carl Vondrick, Hamed Pirsiavash, Deva Ramanan, Jenny Yuen, Antonio Torralba, Bi Song, Anesco Fong, Amit Roy-Chowdhury, and Mita Desai, "A Large-scale Benchmark Dataset for Event Recognition in Surveillance Video," IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2011, Colorado Springs, CO, USA, June 21-25, 2011.
Formulated experiments to evaluate different action recognition algorithms on the APHill aerial videos provided by DARPA, which were recorded with an Electro-Optical/Infra-Red (EO/IR) sensor from a military aircraft flying at an altitude of over 1,000 meters. The videos were aligned and the moving objects tracked using the in-house COCOA system.
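Aerial video alignment of this kind is commonly done by estimating a planar homography between frames and chaining the frame-to-frame transforms into a common reference frame. The sketch below illustrates only that chaining step; the matrix values and points are illustrative and are not taken from COCOA or the APHill data.

```python
# Sketch of homography-based frame registration for aerial video.
# All matrices and points here are made-up illustrative values.

def apply_homography(H, point):
    """Map a pixel (x, y) through a 3x3 homography H."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def compose(H2, H1):
    """Matrix product H2 @ H1: chains frame-to-frame homographies."""
    return [[sum(H2[i][k] * H1[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Frame-to-frame motions: a small translation, then a slight scale change.
H_01 = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]     # frame 0 -> frame 1
H_12 = [[1.1, 0, 0], [0, 1.1, 0], [0, 0, 1]]  # frame 1 -> frame 2

# Chain the transforms to map a point from frame 0 into frame 2's coordinates.
H_02 = compose(H_12, H_01)
print(apply_homography(H_02, (10.0, 20.0)))
```

In a real pipeline the per-frame homographies would be estimated from matched image features rather than written by hand, but the composition step that stabilizes an entire sequence against one reference frame works exactly as above.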
Integrated two action recognition algorithms, "Bag of Visual Words" and "Lagrangian Particle Trajectories", into the DARPA VIRAT system developed by Lockheed Martin and Kitware as part of the VIRAT project. Both algorithms were ported to C++ for integration.
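The Bag of Visual Words idea can be illustrated with a minimal sketch: cluster local descriptors into a vocabulary, quantize each clip's descriptors into a word histogram, and classify by histogram similarity. Everything below is a toy 1-D stand-in (synthetic "descriptors", a tiny k-means, nearest-prototype classification), not the implementation integrated into the VIRAT system.

```python
# Toy Bag-of-Visual-Words pipeline on synthetic 1-D "descriptors".
# Real systems use high-dimensional local spatio-temporal features.
import random
from collections import Counter

random.seed(0)

def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means to build a 'visual vocabulary' of k words."""
    lo, hi = min(values), max(values)
    centers = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

def bovw_histogram(descriptors, vocab):
    """Quantize each descriptor to its nearest word; return a normalized histogram."""
    counts = Counter(min(range(len(vocab)), key=lambda i: abs(d - vocab[i]))
                     for d in descriptors)
    return [counts[i] / len(descriptors) for i in range(len(vocab))]

# Illustrative descriptor pools from two hypothetical action classes.
walk = [random.gauss(0.2, 0.05) for _ in range(200)]
run = [random.gauss(0.8, 0.05) for _ in range(200)]
vocab = kmeans_1d(walk + run, k=4)

# A new clip is classified by comparing its histogram to class prototypes.
h_walk, h_run = bovw_histogram(walk, vocab), bovw_histogram(run, vocab)
query = bovw_histogram([random.gauss(0.8, 0.05) for _ in range(50)], vocab)
dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
label = "run" if dist(query, h_run) < dist(query, h_walk) else "walk"
print(label)
```

The quantize-then-histogram step is what makes the representation orderless: two clips with similar local motion statistics produce similar histograms regardless of where in the frame the motion occurs.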
Sample Aerial Images