Aerial video imagery contains a wealth of information about urban traffic. Here we discuss the collection of such video imagery from a helicopter platform with a low-cost sensor, and the post-processing used to correct radial distortion in the data and register it. The radial distortion correction is accomplished using a Harris model. The registration is implemented in a two-step process, using a globally applied polyprojective correction model followed by a fine-scale local displacement field adjustment. The resulting cleaned-up data is sufficiently well registered to allow subsequent straightforward vehicle tracking.
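The radial correction step can be illustrated with a generic single-parameter radial model; the exact form of the Harris model used in the paper may differ, and the function name, test points, and parameter value below are purely illustrative:

```python
import numpy as np

def undistort_points(pts, center, k):
    """Map distorted pixel coordinates toward undistorted ones using a
    one-parameter radial (division-style) model, r_u = r_d / (1 + k*r_d^2).
    Illustrative only; the paper's Harris model may take a different form."""
    p = np.asarray(pts, dtype=float) - center
    r2 = np.sum(p**2, axis=1, keepdims=True)  # squared radius from center
    return center + p / (1.0 + k * r2)

pts = np.array([[10.0, 20.0], [100.0, 50.0]])
center = np.array([64.0, 64.0])
out = undistort_points(pts, center, k=0.0)  # k = 0 gives the identity map
```

With a positive `k`, points are pulled toward the distortion center, which is the usual sense of barrel-distortion correction under this model.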
In this paper, we present an algorithm for determining a velocity probability distribution prior from low frame
rate aerial video of an urban area, and show how this may be used to aid in the multiple target tracking problem,
as well as to provide a foundation for the automated classification of urban transportation infrastructure. The
algorithm used to develop the prior is based on using a generic interest point detector to find automobile
candidate locations, followed by a series of filters based on scale and motion to reduce the number of false
alarms. The remaining locations are then associated between frame pairs using a simple matching algorithm,
and the corresponding tracks are then used to build up velocity histograms in the areas that are moved through
between the track endpoints. The algorithm is tested on a dataset taken over urban Tucson, AZ. The results
demonstrate that the velocity probability distribution prior can be used to infer a variety of information about
road lane directions, speed limits, and so on, as well as to provide a means of describing environmental knowledge
about traffic rules that can be used in tracking.
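The histogram-building step described above can be sketched as follows. All names, the grid cell size, and the binning scheme are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from collections import defaultdict

def accumulate_velocity_prior(tracks, dt, cell=10.0, nbins=8):
    """Build per-cell velocity-direction histograms from frame-to-frame
    track segments. tracks: list of (p0, p1) endpoint pairs in ground
    coordinates; dt: inter-frame interval. Illustrative sketch only."""
    hist = defaultdict(lambda: np.zeros(nbins))
    for p0, p1 in tracks:
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        v = (p1 - p0) / dt
        ang = np.arctan2(v[1], v[0]) % (2 * np.pi)
        b = int(ang / (2 * np.pi) * nbins) % nbins  # direction bin
        # Deposit the observation in every cell the segment passes through.
        n = max(2, int(np.ceil(np.linalg.norm(p1 - p0) / cell)) + 1)
        for t in np.linspace(0.0, 1.0, n):
            q = p0 + t * (p1 - p0)
            hist[(int(q[0] // cell), int(q[1] // cell))][b] += 1.0
    return hist

# One eastbound segment, 30 m in 1 s, deposits into the cells it crosses.
prior = accumulate_velocity_prior([((0, 0), (30, 0))], dt=1.0)
```

Spreading each observation along the segment between endpoints, rather than only at the endpoints, is what fills in the areas "moved through" at low frame rates.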
We present a method for detecting a large number of moving targets, such as cars and people, in geographically
referenced video. The problem is difficult, due to the large and variable number of targets which enter and leave the field
of view, and due to imperfect geo-projection and registration. In our method, we assume feature extraction produces a
collection of candidate locations (points in 2D space) for each frame. Some of these locations are real objects, but many
are false alarms. Typical feature extraction might be frame differencing, or target recognition. For each candidate
location, and at each time step, our algorithm outputs a velocity estimate and confidence which can be thresholded to
detect objects with constant velocity. In this paper we derive the algorithm, investigate the free parameters, and compare
its performance to a multi-target tracking algorithm.
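The core idea of scoring candidate locations against a constant-velocity hypothesis can be sketched with a simple least-squares fit; the estimator below (and its residual-based confidence) is a generic stand-in, not the algorithm derived in the paper:

```python
import numpy as np

def constant_velocity_fit(times, positions):
    """Least-squares fit of x(t) = x0 + v*t to a sequence of candidate
    locations. Returns (v, rms_residual); a small RMS residual indicates
    consistency with constant-velocity motion and can be thresholded.
    Illustrative sketch, not the paper's exact estimator."""
    t = np.asarray(times, float)
    X = np.asarray(positions, float)           # shape (n, 2)
    A = np.column_stack([np.ones_like(t), t])  # design matrix [1, t]
    coef, *_ = np.linalg.lstsq(A, X, rcond=None)
    resid = X - A @ coef
    return coef[1], float(np.sqrt(np.mean(resid**2)))

# Four detections moving at (2, 1) units/frame fit the model exactly.
v, rms = constant_velocity_fit([0, 1, 2, 3], [(0, 0), (2, 1), (4, 2), (6, 3)])
```

False alarms, which do not line up under any single velocity, produce large residuals and fall below the confidence threshold.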
KEYWORDS: Feature selection, Detection and tracking algorithms, Video, Image registration, Performance modeling, Data modeling, Systems modeling, Image quality, System integration, Dynamical systems
In many tracking applications, adapting the target appearance model over time can improve performance. This approach
is most popular in high frame rate video applications where latent variables, related to the object's appearance (e.g.,
orientation and pose), vary slowly from one frame to the next. In these cases the appearance model and the tracking
system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In
this paper we describe our efforts to track cars in low frame rate data (1 frame / second), acquired from a highly unstable
airborne platform. Due to the low frame rate, and poor image quality, the appearance of a particular vehicle varies
greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model
from all instances of a vehicle we have seen so far? The best appearance model should maximize the future performance
of the tracking system, and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose
an online feature selection approach to this problem and investigate the performance and computational trade-offs with a
real-world dataset.
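One common way to select features online is to rank them by how well they separate the target instances seen so far from background samples. The Fisher-style score below is a minimal sketch under that assumption; the paper's scoring criterion may differ, and all names are illustrative:

```python
import numpy as np

def select_features(target_feats, background_feats, k):
    """Rank features by a Fisher-style separability score computed over
    all target instances seen so far versus background samples, and keep
    the top k. Minimal sketch of online feature selection."""
    T = np.asarray(target_feats, float)      # (n_target, d)
    B = np.asarray(background_feats, float)  # (n_background, d)
    score = (T.mean(0) - B.mean(0))**2 / (T.var(0) + B.var(0) + 1e-9)
    return np.argsort(score)[::-1][:k]

rng = np.random.default_rng(0)
# Feature 0 separates target from background; feature 1 is pure noise.
tgt = np.column_stack([rng.normal(5, 0.5, 100), rng.normal(0, 1, 100)])
bg  = np.column_stack([rng.normal(0, 0.5, 100), rng.normal(0, 1, 100)])
best = select_features(tgt, bg, k=1)
```

Because the score is computed from accumulated sample statistics, it can be updated incrementally as new instances of the vehicle arrive, which is the computational trade-off the abstract alludes to.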
Boeing-SVS (BSVS) has been developing a Passive Obstacle Detection System (PODS) under a Small Business Innovative Research (SBIR) contract with NAVAIR. This SBIR will provide image-processing algorithms for the detection of sub-pixel curvilinear features (e.g., power lines, poles, and suspension cables). These algorithms will be implemented in the SBIR to run on custom processor boards in real time. As part of the PODS development, BSVS has conducted a study to examine the feasibility of incorporating a passive ranging solution with the obstacle-detection algorithms. This passive ranging capability will not only provide discrimination between power lines and other naturally occurring linear features, but will also provide ranging information for other features in the image. Controlled Flight Into Terrain (CFIT) is a leading cause of both military and civil/commercial rotorcraft accidents. Ranging to features within the flight path could be invaluable in detecting other obstacles and therefore in preventing CFIT accidents. The purpose of this paper is to review the PODS system (presented earlier) and discuss several methods for passive ranging and the performance expected from each.
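One class of passive-ranging method uses motion stereo: two views separated by the platform's known motion yield a range from the feature's apparent disparity. The one-line model below illustrates that class only; it is not a description of the PODS design, and the parameter values are made up:

```python
def motion_stereo_range(baseline_m, focal_px, disparity_px):
    """Range from motion stereo: R ~ baseline * focal_length / disparity
    for a distant feature seen from two positions a known baseline apart.
    Illustrative of one passive-ranging method class, not PODS itself."""
    return baseline_m * focal_px / disparity_px

# 2 m of platform motion, 1000 px focal length, 4 px disparity -> 500 m.
r = motion_stereo_range(baseline_m=2.0, focal_px=1000.0, disparity_px=4.0)
```

The sensitivity of this model to disparity error at long range is one reason the expected performance of each candidate method must be analyzed, as the paper sets out to do.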
Military aircraft fly below 100 ft above ground level in support of their missions. These aircraft include fixed and rotary wing, and may be manned or unmanned. Flying at these low altitudes presents a safety hazard to the aircrew and aircraft, due to obstacles within the aircraft's flight path. The pilot must rely on eyesight and, in some cases, infrared sensors to see obstacles. Many conditions can degrade visibility, creating situations in which obstacles are essentially invisible, a safety hazard even to an alerted aircrew. Numerous catastrophic accidents have occurred in which aircraft collided with undetected obstacles, and accidents of this type continue to be a problem for low-flying military and commercial aircraft. Unmanned Aerial Vehicles (UAVs) have the same problem, whether operating autonomously or under the control of a ground operator. Boeing-SVS has designed a passive, small, low-cost (under $100k) gimbaled, infrared imaging based system with advanced obstacle detection algorithms. Obstacles are detected in the infrared band, and linear features are analyzed by innovative cellular-automata-based software. These algorithms perform detection and location of sub-pixel linear features. Detection is performed on a frame-by-frame basis, in real time. Processed images are presented to the aircrew on their display as color-enhanced features. The system has been designed so that detected obstacles are displayed to the aircrew in sufficient time to react and maneuver the aircraft to safety. A patent for this system is on file with the US patent office, and all material herein should be treated accordingly.
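The paper's cellular-automata detector is proprietary, but the general task of finding faint straight features in an edge map can be illustrated with a standard Hough-transform accumulator, used here purely as a stand-in for the PODS algorithm:

```python
import numpy as np

def hough_lines(points, img_size, n_theta=180, n_rho=200):
    """Accumulate edge points into a (theta, rho) Hough space and return
    the dominant line in normal form x*cos(theta) + y*sin(theta) = rho.
    A textbook stand-in for the paper's cellular-automata detector."""
    h, w = img_size
    diag = np.hypot(h, w)  # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho))
    for y, x in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)       # one rho per theta
        bins = ((rho + diag) / (2 * diag) * n_rho).astype(int)
        acc[np.arange(n_theta), bins] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], (j + 0.5) / n_rho * 2 * diag - diag   # (theta, rho)

# A horizontal "wire" at row y = 10 across a 64x64 frame.
theta, rho = hough_lines([(10, x) for x in range(64)], (64, 64))
```

A thin power line a pixel or less wide still contributes many collinear edge points, so it produces a sharp accumulator peak even when each individual point is weak, which is the property any sub-pixel linear-feature detector must exploit.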