Paper
23 December 1999
Dynamic markers for collaborative discussion on video content
Candemir Toklu, Thomas Fischer, Shih-Ping Liou
Proceedings Volume 3972, Storage and Retrieval for Media Databases 2000; (1999) https://doi.org/10.1117/12.373568
Event: Electronic Imaging, 2000, San Jose, CA, United States
Abstract
In this paper, we propose an interactive tool for generating dynamic markers for video objects in a distributed video content discussion environment. We address interactive video object selection and real-time video object marker generation, supported by an automatic object tracking method. The proposed system satisfies the following criteria: (i) automatic object tracking has to run in real time; (ii) video object selection has to be carried out with minimal effort and knowledge; (iii) the user has to be notified by the system when the automatic object tracking method encounters problems; and (iv) interactive rectification of the object marker has to be instantaneous and direct. Our experimental results indicate that the proposed tool is very effective and intuitive in creating dynamic object markers for video content on the fly. The automatic object tracking method yields reliable results on a desktop PC in real time, even with a busy background and/or partial occlusion.
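The workflow summarized above (select an object, track it automatically, flag frames where tracking becomes unreliable, and let the user rectify the marker there) can be illustrated with a short sketch. This is not the authors' algorithm; the tracker stub, confidence score, and helper names below are hypothetical placeholders for the kind of loop the abstract describes.

```python
# Hypothetical sketch of a dynamic-marker workflow: propagate an object
# marker frame by frame, flag frames where an (assumed) tracker confidence
# drops below a threshold, and mark those frames for interactive correction.
# None of the names below come from the paper; the tracker is a stub.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Marker:
    frame: int
    x: int
    y: int
    w: int
    h: int
    needs_review: bool = False  # set when tracking confidence is low


def track_step(prev: Marker, frame_index: int) -> Tuple[Marker, float]:
    """Placeholder tracker: returns the propagated marker and a confidence
    score in [0, 1]. A real system would run region matching here."""
    # Stub: copy the previous box forward with full confidence.
    return Marker(frame_index, prev.x, prev.y, prev.w, prev.h), 1.0


def generate_dynamic_marker(initial: Marker,
                            num_frames: int,
                            conf_threshold: float = 0.5) -> List[Marker]:
    """Track the selected object over num_frames, flagging low-confidence
    frames so the user can be notified and rectify the marker."""
    markers = [initial]
    for f in range(initial.frame + 1, initial.frame + num_frames):
        marker, confidence = track_step(markers[-1], f)
        if confidence < conf_threshold:
            # Notify the user: this frame needs interactive rectification.
            marker.needs_review = True
        markers.append(marker)
    return markers


if __name__ == "__main__":
    start = Marker(frame=0, x=120, y=80, w=64, h=48)
    track = generate_dynamic_marker(start, num_frames=10)
    flagged = [m.frame for m in track if m.needs_review]
    print(f"Tracked {len(track)} frames; frames needing review: {flagged}")
```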
© (1999) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Candemir Toklu, Thomas Fischer, and Shih-Ping Liou "Dynamic markers for collaborative discussion on video content", Proc. SPIE 3972, Storage and Retrieval for Media Databases 2000, (23 December 1999); https://doi.org/10.1117/12.373568
KEYWORDS
Video, Automatic tracking, Multimedia, Control systems, Detection and tracking algorithms, Filtering (signal processing), Computing systems
