
Dr. Mubarak Shah has been named an Association for Computing Machinery (ACM) Fellow for his contributions to human action recognition in video and his leadership in undergraduate research experiences.


Dr. Mubarak Shah, the UCF Board of Trustees Chair Professor, is the founding director of the Center for Research in Computer Vision at the University of Central Florida (UCF). He is a fellow of IEEE, NAI, IAPR, AAAS, and SPIE, and a member of the Academy of Science, Engineering and Medicine of Florida (ASEMFL). He has published extensively on topics including visual surveillance, tracking, human activity and action recognition, object detection and categorization, geo-registration, and visual crowd analysis. He is a recipient of the ACM SIGMM Technical Achievement Award; the ACM SIGMM Test of Time Honorable Mention Award for his paper in the Proceedings of the 14th ACM International Conference on Multimedia (MM '06); the International Conference on Pattern Recognition (ICPR) 2020 Best Scientific Paper Award; the IEEE Outstanding Engineering Educator Award; the Harris Corporation Engineering Achievement Award; an honorable mention for the ICCV 2005 "Where Am I?" Challenge Problem; the 2013 NGA Best Research Poster Presentation award; second place in the Grand Challenge at the ACM Multimedia 2013 conference; and runner-up for the best paper award at the ACM Multimedia conference in 2005 and 2010. At UCF he has received the Pegasus Professor Award, the University Distinguished Research Award, the Faculty Excellence in Mentoring Doctoral Students award, the Scholarship of Teaching and Learning Award, the Teaching Incentive Program Award, and the Research Incentive Award.

Shah has made fundamental contributions to human action recognition from video: his pioneering research laid a foundation for the development of the area, demonstrating how actions in realistic, unconstrained videos "in the wild" can be accurately recognized. Shah was one of the first to propose a view-invariant action recognition method. With his students, he showed that human actions in videos captured by two cameras from different viewpoints can be correctly matched using the fundamental matrix. His recent work employs end-to-end deep learning to achieve view invariance: instead of an explicit fundamental matrix, the relationship between views is learned. Shah also developed a spatiotemporal video attention detection technique for detecting attended regions that correspond to both interesting objects and actions in video sequences; the paper received the Test of Time Honorable Mention Award from ACM SIGMM.
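The fundamental matrix underlying this two-view matching idea encodes the epipolar constraint: for corresponding points x1 and x2 in two views (in homogeneous coordinates), x2ᵀ F x1 = 0 regardless of the scene's 3-D structure. The sketch below is not Shah's actual action-matching pipeline; it is a minimal illustration of how F can be estimated from point correspondences with the classical normalized 8-point algorithm, using synthetic cameras and points invented for this example.

```python
import numpy as np

def eight_point_fundamental(x1, x2):
    """Estimate the fundamental matrix F from N >= 8 point
    correspondences (Nx2 arrays) via the normalized 8-point
    algorithm, so that x2_h^T F x1_h ~= 0 for each pair."""
    def normalize(pts):
        # Translate to the centroid and scale so the mean distance
        # from the origin is sqrt(2) (standard conditioning step).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the system A f = 0,
    # where f is F flattened row-major.
    A = np.column_stack([p2[:, [0]] * p1, p2[:, [1]] * p1, p1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # A true fundamental matrix has rank 2: zero its smallest
    # singular value, then undo the normalization.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1

# Synthetic two-view setup (all values hypothetical): random 3-D
# points in front of two cameras with different viewpoints.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, (20, 3)) + [0, 0, 5],
                     np.ones(20)])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])      # reference camera
R = np.array([[0.94, 0, 0.342], [0, 1, 0], [-0.342, 0, 0.94]])
P2 = np.hstack([R, np.array([[-1.0], [0], [0]])])  # rotated, shifted camera
x1 = X @ P1.T; x1 = x1[:, :2] / x1[:, 2:]
x2 = X @ P2.T; x2 = x2[:, :2] / x2[:, 2:]

F = eight_point_fundamental(x1, x2)
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])
# Max epipolar residual |x2^T F x1| over all correspondences;
# near zero for noise-free data.
residual = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1)).max()
```

In a matching setting, the residual |x2ᵀ F x1| serves as the score: trajectories of the same action seen from two viewpoints satisfy the constraint, while mismatched ones do not.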

Shah and his graduate students and collaborators have introduced a series of UCF action datasets, culminating in the benchmark dataset UCF-101 in 2012. UCF-101 has withstood the test of time: it continues to be a widely used standard benchmark. The action datasets enabled researchers to train and evaluate their methods on a significantly larger set of realistic videos, which contributed to improvements in the state of the art in action recognition and helped popularize the problem. The UCF dataset series has also influenced the next generation of large-scale video datasets, such as Kinetics and YouTube-8M, and UCF-101 was extended to create the MultiTHUMOS dataset.

Shah’s research on recognizing human actions in a crowd has found application in real-world situations. Using the insight that the motion of a high-density crowd behaves like a liquid, Shah modeled visual crowd surveillance through a hydrodynamics lens. Shah also developed a first-of-its-kind method for counting people in crowds, which was used to count demonstrators calling for the independence of Catalonia from Spain in 2015 and 2016.
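Crowd-counting work in this area is commonly formulated around density maps rather than detecting individuals one by one. The sketch below is not Shah's specific method; it only illustrates the generic density-map idea, with made-up head annotations: each annotated head location contributes a unit-mass Gaussian, so integrating the map over the image recovers the person count.

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Build a crowd density map by placing a normalized 2-D Gaussian
    at each annotated head location (x, y). Because each Gaussian is
    scaled to unit mass, the integral (sum) of the map equals the
    number of people, which is what a counting model regresses."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dmap = np.zeros(shape)
    for (x, y) in points:
        g = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()   # each person contributes total mass 1
    return dmap

heads = [(20, 30), (50, 50), (70, 10)]   # hypothetical annotations
dmap = density_map(heads, (100, 100))
count = dmap.sum()                        # recovers the head count
```

The density formulation degrades gracefully in dense scenes where individual detection fails, which is why it suits the high-density crowds described above.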
As an educator, Shah single-handedly established computer vision research at UCF, which was ranked among the top ten programs in the US between 2010 and 2020. Shah has supervised 48 Ph.D. dissertations to completion. As a project director for the NSF-funded Research Experience for Undergraduates (REU)