Road-constrained tracking of multiple targets poses a challenge for standard tracking algorithms due to possible
target/road ambiguities. The random set approach accepts the existence of ambiguity and tracks the probability density
associated with each target/road hypothesis. Measurements from multiple sensors are used to update these densities via
random set analogues of the Bayesian filtering equations. Reports from humans have the potential to complement and
augment data provided by sensors. A challenge with incorporating human reports is that the reports' vagueness and
ambiguity lead to many possible interpretations. We propose a method for incorporating human reports into a road-constrained
random set tracker (RST). Our proposed approach involves mapping a human report into multiple plausible
precise measurements. These precise measurements are used to update the global density in a manner similar to the
sensor measurement case. We validated our approach using a simulated road network scenario, consisting of multiple
sensors and targets and a simple human observer model. The human observer's reports contained coarse information
about the number and relative location of the targets within a field of view. These human reports are mapped to multiple
groups of plausible measurements consisting of ranges and bearing angles with large errors. The performance of the
RST with and without the human reports is compared. A quantitative metric indicates that the inclusion of the human
reports increases the belief of the RST in the correct target/road hypothesis.
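As an illustration of the report-to-measurement mapping described above, the following sketch expands a coarse human report into several plausible range/bearing measurements with deliberately large covariances, which a random set tracker could then process much like sensor data. The report fields, spread parameters, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed interface): map a vague human report to multiple
# plausible precise (range, bearing) measurements with inflated covariances.
import numpy as np

def report_to_measurements(report, n_samples=5, rng=None):
    """Expand one coarse report into plausible (measurement, covariance) pairs.

    report: dict with illustrative fields, e.g.
        {"approx_range_m": 1200.0, "approx_bearing_rad": 0.6, "num_targets": 2}
    """
    rng = rng or np.random.default_rng(0)
    # Large standard deviations reflect the report's vagueness (assumed values).
    sigma_range, sigma_bearing = 200.0, np.deg2rad(10.0)
    R = np.diag([sigma_range**2, sigma_bearing**2])  # shared measurement covariance

    measurements = []
    for _ in range(report["num_targets"]):
        for _ in range(n_samples):
            z = np.array([
                rng.normal(report["approx_range_m"], sigma_range),
                rng.normal(report["approx_bearing_rad"], sigma_bearing),
            ])
            measurements.append((z, R))
    return measurements

# Example: "two vehicles, roughly 1.2 km out, bearing about 35 degrees".
example = {"approx_range_m": 1200.0,
           "approx_bearing_rad": np.deg2rad(35),
           "num_targets": 2}
plausible = report_to_measurements(example)
```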
Feature-aided target verification is a challenging field of research, with the potential to yield significant increases in the
confidence of re-established target tracks after kinematic confusion events. Using appropriate control algorithms, airborne multi-mode radars can acquire a library of high range resolution (HRR) profiles for targets as they are tracked. When a kinematic confusion event occurs, such as a vehicle dropping below the minimum detectable velocity (MDV) for some period of time or two target tracks crossing, it is necessary to use feature-aided tracking methods
to correctly associate post-confusion tracks with pre-confusion tracks. Many current HRR profile target recognition
methods focus on statistical characteristics of either individual profiles or sets of profiles taken over limited viewing
angles. These methods have not proven very effective when the pre- and post-confusion libraries do not overlap in
azimuth angle.
To address this issue we propose a new approach to target recognition from HRR profiles. We present an algorithm that
generates 2-D imagery of targets from the pre- and post-confusion libraries. These images are subsequently used as the
input to a target recognition/classifier process. Center-aligned HRR profiles, while ideal for processing, are not easily computed in fielded systems because they require the airborne platform's center of rotation to line up with the geometric center of the moving target, which is impossible when multiple targets are being tracked. Our algorithm is therefore designed to work with HRR profiles that are aligned to the leading edge, i.e., the first detection above a threshold, commonly referred to as edge-aligned HRR profiles.
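The edge-alignment step can be sketched briefly: each profile is circularly shifted so that its first above-threshold range bin sits at bin zero, and the aligned profiles are stacked into a 2-D image for the downstream classifier. The threshold value and synthetic data below are assumptions for illustration only, not the paper's processing chain.

```python
# Illustrative sketch: leading-edge alignment of HRR profiles and stacking
# into a 2-D range-vs-aspect image (assumed details, not the paper's algorithm).
import numpy as np

def edge_align(profile, threshold):
    """Circularly shift a 1-D HRR profile so its leading edge sits at bin 0."""
    above = np.nonzero(np.abs(profile) > threshold)[0]
    lead = above[0] if above.size else 0
    return np.roll(profile, -lead)

def stack_profiles(profiles, threshold):
    """Form a 2-D image: one row per edge-aligned profile (one aspect sample)."""
    return np.vstack([edge_align(p, threshold) for p in profiles])

# Example with synthetic data: 64 profiles of 128 range bins each.
rng = np.random.default_rng(1)
raw = rng.rayleigh(0.1, size=(64, 128))
raw[:, 40:60] += 1.0                      # notional target extent
image = stack_profiles(raw, threshold=0.5)
```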
Our results demonstrate the effectiveness of this method for classifying target vehicles in simulations using both overlapping and non-overlapping HRR profile sets. The algorithm was tested on several cases using an input set of 0.28 m resolution XPATCH-generated HRR profiles of 20 test vehicles (civilian and military) at various elevation angles.
This paper describes an integrated approach to sensor fusion and resource management applicable to sensor networks.
The sensor fusion and tracking algorithm is based on the theory of random sets. Tracking is herein considered to be the
estimation of parameters in a state space such that for a given target certain components, e.g., position and velocity, are
time varying and other components, e.g., identifying features, are stationary. The fusion algorithm provides at each
time step the posterior probability density function, known as the global density, on the state space, and the control
algorithm identifies the set of sensors that should be used at the next time step in order to minimize, subject to
constraints, an approximation of the expected entropy of the global density. The random set approach to target tracking
models association ambiguity by statistically weighting all possible hypotheses and associations. Computational complexity is managed by approximating the posterior global density with a Gaussian mixture density and using an approach based on the Kullback-Leibler metric to limit the number of components in the Gaussian mixture representation. A closed-form approximation of the expected entropy of the global density, expressed as a Gaussian mixture density, at the next time step for a given set of proposed measurements is developed. Optimal sensor selection involves a search over subsets of sensors, and the computational complexity of this search is managed by employing the Möbius transformation. Field and simulated data from a sensor network composed of multiple range radars and acoustic arrays that measure angle of arrival are used to demonstrate the approach to sensor fusion and resource management.
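One piece of this pipeline, Kullback-Leibler-based mixture reduction, can be sketched for the 1-D case: the pair of components with the smallest symmetrized KL divergence is repeatedly merged by moment matching until the component count falls within a budget. The merge rule, function names, and example values are assumptions rather than the authors' exact construction.

```python
# Hedged sketch of KL-based Gaussian mixture reduction in one dimension.
import numpy as np

def kl_gauss(m1, v1, m2, v2):
    """KL divergence between 1-D Gaussians N(m1, v1) and N(m2, v2)."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def merge(w1, m1, v1, w2, m2, v2):
    """Moment-matched merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def reduce_mixture(weights, means, variances, max_components):
    comps = list(zip(weights, means, variances))
    while len(comps) > max_components:
        # Find the pair with the smallest symmetrized KL divergence.
        best, best_pair = np.inf, None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                _, mi, vi = comps[i]
                _, mj, vj = comps[j]
                d = kl_gauss(mi, vi, mj, vj) + kl_gauss(mj, vj, mi, vi)
                if d < best:
                    best, best_pair = d, (i, j)
        i, j = best_pair
        merged = merge(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

# Example: collapse a 6-component mixture down to 3 components.
reduced = reduce_mixture([0.2, 0.1, 0.3, 0.1, 0.2, 0.1],
                         [0.0, 0.2, 5.0, 5.3, 10.0, 10.1],
                         [1.0, 1.2, 0.8, 1.0, 0.5, 0.6], 3)
```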
KEYWORDS: Synthetic aperture radar, Data modeling, 3D modeling, 3D image processing, Image processing, Image resolution, Statistical analysis, Super resolution, Data centers, Computer aided design
A technique to form super-resolved 3D Synthetic Aperture Radar (SAR) images from a limited number of elevation passes is presented in this paper. This technique models the environment as containing a finite number of isotropically radiating, frequency-independent point scatterers in Additive White Gaussian Noise (AWGN), and applies a hybrid super-resolution method that yields the Maximum Likelihood (ML) estimates of scatterer strengths and resolves their locations in the data-deficient dimension well beyond the Fourier resolution limit.
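Under a point-scatterer-in-AWGN model, the ML estimate of scatterer strengths for an assumed set of heights reduces to a linear least-squares fit of the sparse elevation-pass samples; the sketch below illustrates only that reduction and omits the hybrid super-resolution machinery. The wavenumber values, height grid, and function names are notional assumptions.

```python
# Minimal illustrative sketch: ML (least-squares under AWGN) amplitude
# estimation for candidate scatterer heights from a few elevation passes.
import numpy as np

def steering_matrix(heights, pass_wavenumbers):
    """Columns are elevation steering vectors exp(j*k_z*z) for candidate heights."""
    kz = np.asarray(pass_wavenumbers)[:, None]        # one row per elevation pass
    return np.exp(1j * kz * np.asarray(heights)[None, :])

def ml_amplitudes(samples, heights, pass_wavenumbers):
    """Least-squares amplitude estimates; min-norm solution when the height
    grid is finer than the number of passes (the data-deficient dimension)."""
    A = steering_matrix(heights, pass_wavenumbers)
    amps, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return amps

# Example: 4 elevation passes, two true scatterers at heights 3 m and 7 m.
kz_passes = np.array([0.05, 0.10, 0.15, 0.20])        # notional wavenumbers
y = steering_matrix([3.0, 7.0], kz_passes) @ np.array([1.0 + 0j, 0.5 + 0j])
estimates = ml_amplitudes(y, np.linspace(0.0, 10.0, 21), kz_passes)
```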
Hidden Markov models (HMMs) are probabilistic finite state machines that can be used to represent random discrete time data. HMMs produce data through the use of one or more 'observable' random processes. An additional 'hidden' Markov process controls which of the 'observable' random processes is used to produce an individual data observation. Helicopter radar signatures can be represented as quasi-periodic 1D discrete time series that can be analyzed using HMMs. In the HMM helicopter detection and classification algorithm developed in this study, the states of the 'hidden' portion of the HMM were used to represent time-dependent alignments between the radar and helicopter rotor structures. For example, the times when specular reflections occur were used to define a 'blade-flash' state. Since blade-flash frequency, and the corresponding non-blade-flash state duration, is an important feature in helicopter detection and classification, HMMs that allowed direct specification of state duration probabilities were used in this study. The HMM approach was evaluated using X-band radar data from military helicopters recorded at Ft. A.P. Hill. After initial adaptive clutter suppression and blade-flash enhancement preprocessing, a set of approximately 1,000 raw in-phase and quadrature data records was analyzed using the HMM approach. A correct target classification rate that varied from 98% at a PRF of 10 kHz to 91% at a PRF of 2.5 kHz was achieved.
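A schematic example of HMM-based classification of a discretized radar time series follows: each class is scored with the scaled forward algorithm and the highest log-likelihood wins. The two-state 'blade-flash' parameters are invented for illustration, and the explicit state-duration modeling used in the study is omitted for brevity.

```python
# Schematic HMM likelihood scoring and classification (synthetic parameters).
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete symbol sequence.

    pi: initial state probabilities (N,), A: transitions (N, N),
    B: emission probabilities (N, M), obs: sequence of symbol indices.
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

def classify(obs, models):
    """models: dict mapping class label -> (pi, A, B). Returns the best label."""
    return max(models, key=lambda c: forward_log_likelihood(obs, *models[c]))

# Toy per-class models (invented): class "A" enters the flash state more often.
model_a = (np.array([0.9, 0.1]),
           np.array([[0.90, 0.10], [0.70, 0.30]]),
           np.array([[0.90, 0.10], [0.20, 0.80]]))
model_b = (np.array([0.95, 0.05]),
           np.array([[0.98, 0.02], [0.90, 0.10]]),
           np.array([[0.95, 0.05], [0.30, 0.70]]))
obs = [0, 0, 1, 0, 0, 0, 1, 0, 0, 1]
label = classify(obs, {"A": model_a, "B": model_b})
```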
A number of important problems in medical imaging can be described as segmentation problems. Previous fractal-based image segmentation algorithms have used either the local fractal dimension alone or the local fractal dimension and the corresponding image intensity as features for subsequent pattern recognition algorithms. An image segmentation algorithm that utilized the local fractal dimension, image intensity, and the correlation coefficient of the local fractal dimension regression analysis computation to produce a three-dimensional feature space, which was partitioned to identify specific pixels of dental radiographs as bone, teeth, or a boundary between bone and teeth, has also been reported. In this work we formulate the segmentation process as a configurational optimization problem and discuss the application of simulated annealing optimization methods to the solution of this specific optimization problem. The configurational optimization method allows both the degree of correspondence between a candidate segment and an assumed textural model and morphological information about the candidate segment to be used in the segmentation process. Applying this configurational optimization technique with a fractal textural model, however, requires estimating the fractal dimension of an irregularly shaped candidate segment. The potential utility of a discrete Gerchberg-Papoulis bandlimited extrapolation algorithm for estimating the fractal dimension of an irregularly shaped candidate segment is also discussed.
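The configurational optimization idea can be sketched with a generic simulated annealing loop over a binary pixel labeling, where the energy mixes a stand-in texture term with a morphological boundary-length term. The energy terms, cooling schedule, and single-pixel moves below are illustrative assumptions and do not use the fractal textural model discussed above.

```python
# Generic simulated annealing sketch for configurational segmentation
# (illustrative energy and schedule, not the authors' formulation).
import numpy as np

def energy(labels, image, texture_weight=1.0, smooth_weight=0.5):
    """Lower is better: texture mismatch inside the segment + boundary length."""
    segment = image[labels == 1]
    texture = np.var(segment) if segment.size else 0.0   # stand-in texture term
    boundary = (np.abs(np.diff(labels, axis=0)).sum()
                + np.abs(np.diff(labels, axis=1)).sum())
    return texture_weight * texture + smooth_weight * boundary

def anneal(image, n_iter=20000, t0=1.0, cooling=0.9995, rng=None):
    rng = rng or np.random.default_rng(0)
    labels = (image > image.mean()).astype(int)          # crude initial segmentation
    e, t = energy(labels, image), t0
    for _ in range(n_iter):
        i, j = rng.integers(image.shape[0]), rng.integers(image.shape[1])
        labels[i, j] ^= 1                                 # propose a single-pixel flip
        e_new = energy(labels, image)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            e = e_new                                     # accept the move
        else:
            labels[i, j] ^= 1                             # reject: undo the flip
        t *= cooling
    return labels

# Example on a small synthetic image with two intensity regions.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img += np.random.default_rng(1).normal(0, 0.1, img.shape)
seg = anneal(img, n_iter=5000)
```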