DaMN: Discriminative and Mutually Nearest

What is DaMN?


DaMN (Discriminative and Mutually Nearest) is a set of automatically generated category pairs that augment one-vs-rest classifiers with a judicious selection of “two-vs-rest” classifier outputs. By combining one-vs-rest and two-vs-rest features in a principled probabilistic manner, we achieve state-of-the-art results on the UCF101 and HMDB51 datasets. More importantly, the same DaMN features, when treated as a mid-level representation, also outperform existing methods in knowledge transfer experiments, both cross-dataset from UCF101 to HMDB51 and to new categories with limited training data (one-shot and few-shot learning). Finally, we study the generality of the proposed approach by applying DaMN to other classification tasks; our experiments show that DaMN outperforms related approaches in direct comparisons, not only on video action recognition but also on their original image dataset tasks.

Our Approach


Our goal is to automatically identify a set of suitable features F, train their associated classifiers f(·) using the available training data, and use the instance-level predictions to classify novel videos. We use the one-vs-one SVM margin between every pair of categories to compute category-level similarity, and from these distances we obtain the mutual k-nearest neighbor pairs. The DaMN category-level feature matrix is composed of two parts: 1) features that separate each category individually (identical to one-vs-rest); 2) pair classifiers designed to separate mutually proximate pairs of categories from the remaining classes. Each column of this matrix defines an SVM classifier, and we combine the results from these classifiers in a principled probabilistic manner.
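The pair-selection step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it assumes a precomputed symmetric matrix of pairwise category distances (e.g., derived from one-vs-one SVM margins), finds the mutually k-nearest category pairs, and builds the binary target matrix whose columns correspond to the one-vs-rest and two-vs-rest classifiers.

```python
import numpy as np

def mutual_knn_pairs(dist, k):
    """Return category pairs (i, j) that appear in each other's
    k-nearest-neighbor lists, given a symmetric distance matrix."""
    d = dist.astype(float).copy()
    np.fill_diagonal(d, np.inf)          # a category is not its own neighbor
    order = np.argsort(d, axis=1)        # neighbors sorted by distance
    knn = [set(order[i, :k]) for i in range(d.shape[0])]
    return [(i, j)
            for i in range(d.shape[0])
            for j in range(i + 1, d.shape[0])
            if j in knn[i] and i in knn[j]]

def damn_feature_labels(n_categories, pairs):
    """Binary target matrix: one column per one-vs-rest classifier,
    plus one column per two-vs-rest pair classifier."""
    cols = []
    for c in range(n_categories):        # one-vs-rest targets
        y = np.zeros(n_categories)
        y[c] = 1
        cols.append(y)
    for i, j in pairs:                   # two-vs-rest targets
        y = np.zeros(n_categories)
        y[i] = y[j] = 1
        cols.append(y)
    return np.stack(cols, axis=1)
```

Each column of the returned matrix says which categories a given SVM should treat as positive; training the actual classifiers on video features would then follow the usual one-vs-rest recipe.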

Knowledge transfer to novel category

For each new label y’, we determine its similarity to the known categories Y using the small amount of new data. Note that while the data from novel categories may be insufficient to enable accurate training of one-vs-rest classifiers, it is sufficient to provide a rough estimate of similarity between the new category and the existing categories. At test time, we obtain scores from the bank of existing DaMN feature classifiers and determine the most likely category through the equations stated in our paper.
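One simple way to realize this idea is sketched below. It is an illustrative stand-in for the equations in the paper, not the paper's exact formulation: each existing DaMN classifier is weighted by the estimated affinity between the novel category and the known categories that classifier was trained to detect, and a test instance's score for the novel category is the affinity-weighted average of the classifier outputs.

```python
import numpy as np

def novel_category_score(test_scores, affinities, feature_cats):
    """Score a test instance against a novel category.

    test_scores  : (F,) outputs of the existing DaMN classifiers
    affinities   : (C,) estimated similarity of the novel category
                   to each of the C known categories (from few examples)
    feature_cats : list of F tuples; the known categories each
                   classifier treats as positive (singletons for
                   one-vs-rest, pairs for two-vs-rest)
    """
    # Weight each classifier by the mean affinity of its positive set.
    w = np.array([np.mean([affinities[c] for c in cats])
                  for cats in feature_cats])
    return float(np.dot(test_scores, w) / w.sum())
```

In this toy scoring rule, a novel category judged similar to known category 0 leans mostly on the classifiers involving category 0; the paper's probabilistic combination is more principled but follows the same intuition.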


The following figure compares the category-level distance metrics:

The following table shows action recognition accuracy on UCF101.

The following table shows action recognition accuracy on HMDB51.

The following compares DaMN and THUMOS semantic attributes.

The following shows the results on the Animals with Attributes dataset.



Our code can be downloaded HERE.

Related Publications

Rui Hou, Amir Roshan Zamir, Rahul Sukthankar, and Mubarak Shah, “DaMN – Discriminative and Mutually Nearest: Exploiting Pairwise Category Proximity for Video Action Recognition”, European Conference on Computer Vision (ECCV), 2014 [PDF | BibTeX]