Currently, all video search engines are text-based, i.e. they retrieve videos by searching the text labels associated with them. However, this can lead to incorrect or inaccurate results, as labelling or annotating a video is mainly done manually; consequently, mislabelled videos generate many false positives during video searches. To solve this problem we need to improve the process of video annotation. This can be achieved by annotating videos automatically, based on their actual content rather than on text labels or tags. To accomplish this we need to enable computers to extract video “storylines”, composed of the events or actions taking place in each video. This has the potential to save time and provide better results for online video searches, as well as to improve event detection in real-world surveillance footage. The project aims to facilitate Probabilistic Semantic Search and Query Answering by annotating videos in the way described, through machine learning.
Munir, Misbah
Supervisors: Cuzzolin, F.
Faculty of Technology, Design and Environment
Year: 2016
© The Author(s) Published by Oxford Brookes University