An Efficient 3D CNN for Action/Object Segmentation in Video
Contact: Chen Chen (chen.chen@uncc.edu), Mubarak Shah (shah@crcv.ucf.edu)
Publication
Rui Hou, Chen Chen, Rahul Sukthankar, Mubarak Shah. An Efficient 3D CNN for Action/Object Segmentation in Video. British Machine Vision Conference (BMVC 2019), UK, Sep 9-10, 2019.
Overview
Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains challenging due to its high computational cost. Most previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN that aggregates spatial and temporal information simultaneously for video object segmentation. To process video efficiently, we propose a 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining performance. Moreover, we extend our framework to video action segmentation by adding an extra classifier that predicts the action label for each actor in a video. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state of the art.
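To see why a separable 3D convolution "dramatically reduces the number of operations", it helps to compare multiply-add counts per output voxel. The sketch below assumes the common depthwise + pointwise factorization (MobileNet-style, extended to 3D); the paper's exact factorization may differ, so treat this as an illustrative cost model rather than the paper's implementation.

```python
# Back-of-the-envelope multiply-add (MAC) counts per output voxel.
# Assumption: the separable 3D conv is factored into a depthwise
# 3D conv (one kt x kh x kw filter per input channel) followed by
# a 1x1x1 pointwise conv mixing channels.

def standard_conv3d_macs(c_in, c_out, kt, kh, kw):
    """MACs per output voxel for a dense 3D convolution."""
    return kt * kh * kw * c_in * c_out

def separable_conv3d_macs(c_in, c_out, kt, kh, kw):
    """MACs per output voxel for depthwise 3D + pointwise 1x1x1."""
    depthwise = kt * kh * kw * c_in   # one filter per input channel
    pointwise = c_in * c_out          # 1x1x1 channel mixing
    return depthwise + pointwise

if __name__ == "__main__":
    # Illustrative layer sizes (assumed, not from the paper).
    c_in, c_out, k = 256, 256, 3
    dense = standard_conv3d_macs(c_in, c_out, k, k, k)
    sep = separable_conv3d_macs(c_in, c_out, k, k, k)
    print(f"dense: {dense}, separable: {sep}, reduction: {dense / sep:.1f}x")
    # → dense: 1769472, separable: 72448, reduction: 24.4x
```

With a 3x3x3 kernel and 256 channels in and out, the factorization cuts the per-voxel cost by roughly 24x, which is the kind of saving that makes 3D pyramid pooling and decoding tractable for video.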
Evaluations
- Quantitative Results
- Video object segmentation results on the DAVIS'16 dataset
- Video object segmentation results on the SegTrack-v2 dataset
- Qualitative Results
- Example segmentation results of the proposed approach on several test video sequences from the DAVIS dataset
- Experiments on J-HMDB for action segmentation/detection