Paper
Learning to annotate video databases
19 December 2001
Proceedings Volume 4676, Storage and Retrieval for Media Databases 2002; (2001) https://doi.org/10.1117/12.451096
Event: Electronic Imaging, 2002, San Jose, California, United States
Abstract
A model-based approach to video retrieval requires ground-truth data for training the models. This motivates the development of video annotation tools that allow users to annotate each shot in a video sequence and to identify and label scenes, events, and objects by applying labels at the shot level. The annotation tool considered here also allows the user to associate object labels with individual regions in a key-frame image. However, the abundance of video data and the diversity of labels make annotation a difficult and expensive task. To combat this problem, we formulate annotation in the framework of supervised training with partially labeled data, viewing it as an exercise in active learning. In this scenario, one first trains a classifier with a small set of labeled data, and subsequently updates the classifier by selecting the most informative, or most uncertain, subset of the available data set. Consequently, propagation of labels to as-yet-unlabeled data is achieved automatically as well. The purpose of this paper is twofold. The first is to describe a video annotation tool developed for annotating generic video sequences in the context of a recent TREC video benchmarking exercise. The tool is semi-automatic in that it propagates labels to similar shots, requiring the user only to confirm or reject the propagated labels. The second is to show how an active learning strategy can be implemented in this context to further improve the performance of the annotation tool. While many variants of active learning could be considered, we specifically report results of experiments with support vector machine classifiers with polynomial kernels.
© (2001) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Milind Ramesh Naphade, Ching-Yung Lin, John R. Smith, Belle L. Tseng, and Sankar Basu "Learning to annotate video databases", Proc. SPIE 4676, Storage and Retrieval for Media Databases 2002, (19 December 2001); https://doi.org/10.1117/12.451096
CITATIONS
Cited by 47 scholarly publications.
KEYWORDS: Video, Databases, Data modeling, Model-based design, Multimedia, Distance measurement, Image segmentation