KEYWORDS: Sensors, Systems modeling, Cameras, Data modeling, Video, Video surveillance, Detection and tracking algorithms, Feature extraction, RGB color model, Semantic video
Event detection from a video stream is becoming an important and challenging task in surveillance and sentient
systems. While computer vision has been extensively studied to solve different kinds of detection problems over
time, it is still a hard problem and even in a controlled environment only simple events can be detected with a
high degree of accuracy. Instead of struggling to improve event detection with image processing alone, we bring
in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video
frames and cannot be "seen" directly by image processing. In this work we demonstrate that time-sequence
semantics can be exploited to guide unsupervised re-calibration of an event detection system. We present an
instantiation of these ideas using an appliance example, coffee-pot level detection from video data, to show that semantics can guide re-calibration of the detection model.
This work exploits time-sequence semantics to detect when re-calibration is required, automatically relearn a
detection model for the newly evolved system state, and resume monitoring with higher accuracy.
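The time-sequence check described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual algorithm: it assumes the detector outputs a numeric pot level per frame, and that between brew events the level can only stay the same or decrease; repeated violations of that semantic suggest the detection model has drifted and should be relearned. The function name, tolerance, and violation threshold are all assumptions made for this sketch.

```python
def needs_recalibration(levels, tolerance=1, max_violations=3):
    """Flag re-calibration when detected coffee-pot levels violate the
    time-sequence semantic that, absent a brew event, the level can
    only hold steady or fall (coffee is poured out, never added).

    levels: sequence of detected pot levels between brew events.
    tolerance: allowed per-step noise in detector output.
    max_violations: how many semantic violations trigger relearning.
    """
    violations = 0
    for prev, curr in zip(levels, levels[1:]):
        if curr > prev + tolerance:  # level "rose" without a brew: implausible
            violations += 1
    return violations >= max_violations
```

A monitoring loop would call this on the recent detection history and, when it returns `True`, switch into an unsupervised relearning phase before resuming normal event detection.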
KEYWORDS: Video surveillance, Video, RGB color model, Sensors, Cameras, Control systems, Surveillance, Motion models, Video processing, Motion detection
Forms of surveillance are quickly becoming an integral part of crime-control policy, crisis management, social-control theory, and community consciousness. Surveillance has, in turn, been adopted as a simple and effective response to many of these problems. However, privacy concerns have been raised over the development and deployment of this technology: used properly, video cameras help expose wrongdoing, but typically at the cost of the privacy of those not involved in any malicious activity.
This work describes the design and implementation of a real-time, privacy-protecting data collection infrastructure that fuses additional sensor information (e.g., radio frequency) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of our data collection framework, and here we describe in detail the real-time video processing techniques used to track users in pervasive spaces while utilizing data from the various instrumented sensors. In particular, we discuss background modeling, object tracking, and implementation techniques that pertain to the overall development of this system.
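The background modeling mentioned above can take many forms; a minimal sketch of one common approach, an exponential running average with per-pixel thresholding, is shown below. This is an illustration of the general technique, not the paper's specific implementation, and the parameter values are assumptions.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running-average background model: the background
    estimate drifts slowly toward the current frame, absorbing
    gradual lighting changes while ignoring transient motion."""
    return (1 - alpha) * background + alpha * frame.astype(float)

def foreground_mask(background, frame, threshold=30):
    """Label as foreground every pixel whose difference from the
    background estimate exceeds `threshold` (moving objects)."""
    diff = np.abs(frame.astype(float) - background)
    return diff > threshold
```

In a tracking pipeline, the foreground mask would be cleaned up (e.g., with morphological operations) and grouped into connected components, which are then matched to the identities reported by the other instrumented sensors.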
KEYWORDS: Video, Information technology, Sensors, Analytical research, Databases, Geographic information systems, Data storage, Video surveillance, Cameras, Data modeling
This paper provides an overview of Project RESCUE, which aims to enhance the mitigation capabilities of first responders in the event of a crisis by dramatically transforming their ability to collect, store, analyze, interpret, share and disseminate data. The multidisciplinary research agenda incorporates a variety of information technologies: networks; distributed systems; databases; image and video processing; and machine learning, together with subjective information obtained through social science. While the IT challenges focus on systems and algorithms to get the right information to the right person at the right time, social science provides the right context. Besides providing an overview of RESCUE research activities, the paper highlights challenges of particular interest to the internet imaging community.
KEYWORDS: Visualization, Multimedia, Video, RGB color model, Control systems, Visual process modeling, Contrast sensitivity, Calibration, Eye, Data modeling
Multimedia transmission over wide-area networks currently considers only server and network resource constraints and client device capabilities. It is equally essential to consider the accessibility of the multimedia content for users with diverse capabilities and disabilities. In this paper we develop a transcoding technique that presents multimedia content to suit diverse disabled user groups using an ability-based classification approach. Using ability-based prioritization, appropriate alternate modalities and quality levels are chosen to replace the inaccessible modalities. The transcoding process allows for refinements to cater to specific types and degrees of impairment. Our performance results illustrate the benefits of the ability-based transcoding approach.
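The modality-substitution step described above can be sketched as follows. The modality names, substitution table, and function are all hypothetical illustrations of the general idea, not the paper's actual ability-based classification: for each content modality a user cannot access, an ordered list of alternates is tried in priority order.

```python
# Hypothetical substitution table: for each potentially inaccessible
# modality, an ordered list of alternate modalities to try first.
ALTERNATES = {
    "audio": ["captions", "transcript"],
    "video": ["audio_description", "transcript"],
    "text": ["audio"],
}

def transcode_plan(content_modalities, user_abilities):
    """For each modality in the content, keep it if the user can access
    it; otherwise substitute the highest-priority accessible alternate
    (or None if no accessible alternate exists)."""
    plan = {}
    for modality in content_modalities:
        if modality in user_abilities:
            plan[modality] = modality
        else:
            plan[modality] = next(
                (alt for alt in ALTERNATES.get(modality, [])
                 if alt in user_abilities),
                None,  # no accessible alternate: modality is dropped
            )
    return plan
```

Quality-level selection and impairment-specific refinement would then operate on the chosen modalities, but those steps depend on details not covered in this abstract.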