In this project, we have developed a novel framework for the semantic linking of news topics. Unlike conventional video content linking methods, which are based only on video shots, the proposed framework links news stories across different sources. The semantic linkage between news stories is computed from their visual and textual similarities. The visual similarity is computed over story key-frames, both with and without detected faces. The textual similarity is computed from the automatic speech recognition (ASR) output of the video sequences. The output of the story linking method can then be used to compute the ranking, or interestingness, of a news story. The developed method has been tested on a large open benchmark dataset from TRECVID 2003 by NIST, and very satisfactory results have been obtained for both of the proposed tasks.
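The overall idea of combining a textual and a visual similarity into one story-linkage score can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the bag-of-words cosine measure for ASR text, the histogram-intersection measure for key-frames, and the equal weights are all assumptions made for the example.

```python
from collections import Counter
import math

def text_similarity(asr_a, asr_b):
    """Cosine similarity between bag-of-words vectors of two ASR
    transcripts (illustrative textual measure)."""
    a, b = Counter(asr_a.lower().split()), Counter(asr_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def visual_similarity(hist_a, hist_b):
    """Histogram intersection between normalized colour histograms of
    two key-frames (illustrative visual measure)."""
    return sum(min(x, y) for x, y in zip(hist_a, hist_b))

def story_similarity(story_a, story_b, w_text=0.5, w_visual=0.5):
    """Weighted combination of the two cues. A story here is a dict with
    an 'asr' transcript and a list of key-frame 'hists'; the best-matching
    key-frame pair supplies the visual score. Weights are assumptions."""
    t = text_similarity(story_a["asr"], story_b["asr"])
    v = max(visual_similarity(ha, hb)
            for ha in story_a["hists"] for hb in story_b["hists"])
    return w_text * t + w_visual * v

# Two toy stories from different sources covering the same event.
s1 = {"asr": "flooding hits the coastal city overnight",
      "hists": [[0.5, 0.3, 0.2]]}
s2 = {"asr": "overnight flooding in the coastal city",
      "hists": [[0.4, 0.4, 0.2]]}
score = story_similarity(s1, s2)
```

Scores like `score` can then be thresholded to decide whether two stories link, or summed per story to rank stories by how widely they are covered.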
Yun Zhai and Mubarak Shah, "Tracking News Stories Across Different Sources", ACM Multimedia 2005, Singapore, November 6-12, 2005.