Ensemble Modeling for Multimodal Visual Action Recognition

Publication

Jyoti Kini, Sarah Fleischer, Ishan Dave, and Mubarak Shah, "Ensemble Modeling for Multimodal Visual Action Recognition," 22nd International Conference on Image Analysis and Processing (ICIAP) Workshops – Multimodal Action Recognition on the MECCANO Dataset, 2023.

Overview

In this work, we propose an ensemble modeling approach for multimodal action recognition. We independently train the individual modality models using a variant of focal loss tailored to the long-tailed class distribution of the MECCANO dataset. Building on the underlying principle of focal loss, which links tail (scarce) classes to their prediction difficulty, we propose an exponentially decaying variant of focal loss for this task. It initially emphasizes learning from hard, misclassified examples and gradually adapts to the full range of examples in the dataset. This annealing schedule encourages the model to strike a balance between focusing on the sparse set of hard samples and leveraging the information provided by the easier ones. Additionally, we adopt a late fusion strategy that combines the resulting probability distributions from the RGB and Depth modalities for the final action prediction. Experimental evaluations on the MECCANO dataset demonstrate the effectiveness of our approach. Notably, our method secured first place in the multimodal action recognition challenge at ICIAP 2023.
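The sketch below illustrates the idea of a focal loss whose focusing parameter decays exponentially over training. The decay schedule gamma_t = gamma0 * exp(-decay_rate * epoch) and the values of gamma0 and decay_rate are illustrative assumptions, not the paper's exact hyperparameters; setting decay_rate to 0 recovers standard focal loss, and as gamma approaches 0 the loss reduces to cross-entropy.

```python
import math

import torch
import torch.nn.functional as F


def decaying_focal_loss(logits, targets, epoch, gamma0=2.0, decay_rate=0.1):
    """Focal loss with an exponentially decaying focusing parameter.

    gamma0 and decay_rate are illustrative placeholders, not values from
    the paper. Early in training (large gamma), easy examples are
    down-weighted; as gamma anneals toward 0, the loss approaches
    plain cross-entropy over all examples.
    """
    gamma = gamma0 * math.exp(-decay_rate * epoch)  # annealed focusing parameter
    log_probs = F.log_softmax(logits, dim=-1)       # (B, C) log class probabilities
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per sample
    pt = log_pt.exp()                               # p_t, probability of the true class
    return ((1.0 - pt) ** gamma * -log_pt).mean()   # FL = -(1 - p_t)^gamma * log p_t
```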

Implementation

Given a set of spatiotemporally aligned RGB and Depth sequences, our goal is to predict the action class associated with the sequence. To this end, we adopt an ensemble architecture comprising two dedicated Video Swin Transformer backbones that process the RGB clip and the Depth clip independently. Each backbone encodes its input clip into token embeddings; we pass this representation from the base feature network to a newly added fully connected layer and fine-tune the overall network. The final prediction is obtained by averaging the two probability distributions output by the RGB and Depth pathways.
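A minimal sketch of this late-fusion step is shown below. Here rgb_model and depth_model stand in for the two fine-tuned Video Swin Transformer backbones with their classification heads; the function and argument names are hypothetical, while the equal-weight averaging follows the description above.

```python
import torch


@torch.no_grad()
def late_fusion_predict(rgb_model, depth_model, rgb_clip, depth_clip):
    """Fuse the two modality pathways by averaging their softmax outputs
    and return the predicted action class per clip."""
    p_rgb = torch.softmax(rgb_model(rgb_clip), dim=-1)        # RGB pathway probabilities
    p_depth = torch.softmax(depth_model(depth_clip), dim=-1)  # Depth pathway probabilities
    p_fused = 0.5 * (p_rgb + p_depth)                         # unweighted average of the two distributions
    return p_fused.argmax(dim=-1)                             # final action prediction
```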

Evaluations

  • Quantitative Results

    Our work, declared the challenge winner, ranks first on the leaderboard for Multimodal Action Recognition on the MECCANO dataset. In the results table, the best method is shown in red and the second-best method in blue.

    We also report results demonstrating the effectiveness of our focal loss variant with an exponentially decaying modulating factor. Here, CE denotes the cross-entropy loss, and * denotes a model trained on the combined training and validation sets.