Dr. Angela Yao

National University of Singapore

Tuesday, June 28, 2022
2:00PM – 3:00PM
ENGI 327 / Zoom

Abstract

Videos of procedural activities are goal-oriented, comprising multiple steps or actions performed in sequence over time. In this talk, I will outline our group's efforts in developing methods for segmenting and anticipating actions in procedural videos. We examine two extreme approaches: one based on unsupervised discovery and the other on fully supervised learning from densely labelled videos. We then explore the variants in between, including semi- and weakly-supervised settings. I will conclude by introducing our newly collected dataset, Assembly101: a large-scale multi-view dataset of people assembling and disassembling toys, recorded from both static and egocentric cameras.

For more info, please follow this link.