This PDF file contains the front matter associated with SPIE
Proceedings Volume 6968, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
Contrary to assertions in the literature, we show that the Extended Kalman Filter (EKF) is superior to the Unscented Kalman Filter (UKF) for certain nonlinear estimation problems. In particular, for nonlinearities that are odd functions of the state vector (e.g., x^3) the Unscented Kalman Filter usually performs well, whereas for even nonlinearities (e.g., x^2), the Extended Kalman Filter is sometimes much better than the Unscented Kalman Filter. This is contrary to the usual engineering folklore, and therefore we have checked our results very thoroughly. In particular, the Unscented Kalman Filter correctly approximates the conditional mean using a 4th order Gauss-Hermite quadrature, in contrast to the Extended Kalman Filter which uses a simple 0th order approximation, but the conditional mean is not the desired estimate in practical applications for strongly bimodal conditional probability densities, which are induced by even nonlinearities, owing to a sign ambiguity. On the other hand, even nonlinearities do not always induce multimodal densities that persist for a significant amount of time, and thus the Unscented Kalman Filter sometimes performs well for such problems. We study the effects of initial uncertainty of the state vector and nonlinearity in measurements.
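To make the sign-ambiguity point concrete, the following small numerical sketch (ours, not the authors' code) computes an exact grid-based posterior for a squared-state measurement; the Gaussian prior, noise variance, and measurement value are illustrative assumptions.

```python
# Sketch: the posterior for y = x^2 + v is bimodal, so the conditional mean
# (which the UKF approximates well) sits near 0, between the two modes.
import numpy as np

x = np.linspace(-5, 5, 2001)
prior = np.exp(-0.5 * (x / 2.0) ** 2)            # assumed prior: x ~ N(0, 2^2)
y, r = 4.0, 0.5                                   # assumed measurement and noise var
post = prior * np.exp(-0.5 * (y - x ** 2) ** 2 / r)
post /= np.trapz(post, x)

print(np.trapz(x * post, x))                      # conditional mean ~ 0
print(x[np.argsort(post)[-2:]])                   # two symmetric modes near +/-2
```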
We present a simple algorithm for achieving unsupervised spatially distributed object fusion using spatial voting. We
achieve spatial fusion of uncertain position estimates of disparate objects. These objects are portions of larger assemblies
that cannot be directly observed by available sensors. Only the individual objects are discernible. The question arises
of how to fuse estimates of the position uncertainty of these objects (potential assembly pieces) into an "assembly" whose
location and extent can only be inferred. Our algorithm uses spatial correlation and stacking with voting of positional
uncertainty ellipses. We present the positional algorithm, compare it to other methods of grouping uncertainties, and
find encouraging performance improvements.
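To illustrate the mechanism (not the authors' exact algorithm), here is a minimal sketch of ellipse stacking with voting under our own assumptions of a fixed grid and a 2-sigma vote gate.

```python
# Rasterize each object's positional-uncertainty ellipse onto a common grid
# and accumulate one vote per ellipse; peaks suggest an inferred assembly.
import numpy as np

def vote_map(estimates, xs, ys, gate=2.0):
    """estimates: list of (mean (2,), covariance (2, 2)) pairs."""
    gx, gy = np.meshgrid(xs, ys)
    acc = np.zeros_like(gx)
    for m, P in estimates:
        d = np.stack([gx - m[0], gy - m[1]], axis=-1)
        md2 = np.einsum('...i,ij,...j', d, np.linalg.inv(P), d)  # Mahalanobis^2
        acc += md2 <= gate ** 2               # one vote inside the gate ellipse
    return acc

xs = ys = np.linspace(0.0, 100.0, 201)
ests = [(np.array([50., 52.]), np.diag([9., 4.])),
        (np.array([53., 49.]), np.diag([4., 9.]))]
print(vote_map(ests, xs, ys).max())           # 2 where the ellipses overlap
```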
In the battlefield surveillance domain, ground target tracking is used to evaluate the threat. Data used for
tracking is given by a Ground Moving Target Indicator (GMTI) sensor which only detects moving targets.
Multiple target tracking has been widely studied but most of the algorithms have weaknesses when targets are
close together, as they are in a convoy. In this work, we propose a filtering approach for convoys in the midst of
civilian traffic. Our specific algorithm is inspired by particle filtering, but its complexity prevents applying it to
every target. That is why well-discriminated targets are tracked using an Interacting Multiple Model-Multiple
Hypothesis Tracker (IMM-MHT), whereas the convoy targets are tracked with a specific particle filter. We
make the assumption that the convoy is detected (position and number of targets). Our approach is based on an
Independent Partition Particle Filter (IPPF) incorporating constraint-regions. The originality of our approach
is to consider a velocity constraint (all the vehicles belonging to the convoy have the same velocity) and a group
constraint. Consequently, the multitarget state vector contains all the positions of the individual targets and
a single convoy velocity vector. When another target is detected crossing or overtaking the convoy, a specific
algorithm is used and the non-cooperative target is tracked with an adapted particle filter. As demonstrated
by our simulations, our approach yields a substantial increase in convoy tracking performance.
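A sketch of how the constrained multitarget state can be represented (our construction, with placeholder noise levels): each particle carries the individual positions plus a single shared convoy velocity, so the velocity constraint is enforced by the representation itself.

```python
# One multitarget particle: N member positions plus one shared velocity.
import numpy as np

def propagate_convoy(particle, dt, q_pos=0.5, q_vel=0.1, rng=np.random):
    vel = particle['vel'] + q_vel * rng.randn(2) * dt          # common velocity
    pos = particle['pos'] + vel * dt + q_pos * rng.randn(*particle['pos'].shape) * dt
    return {'pos': pos, 'vel': vel}

p = {'pos': np.array([[0., 0.], [15., 0.], [30., 0.]]),        # three convoy members
     'vel': np.array([10., 0.])}                               # one shared velocity
print(propagate_convoy(p, dt=1.0)['pos'])
```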
A method is described for predicting the long-term movement of people on the ground, either on foot or
driving vehicles, as a function of the terrain, weather, behavior, and situation (context). It uses the results of
statistical simulations to estimate location probability distributions of where a vehicle or person may go in a
given amount of time. Several applications are discussed including detecting possible gaps in sensor
coverage, route planning, and mobile communications routing.
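A toy Monte Carlo version of the idea, with a made-up terrain speed map: sample many terrain-dependent random walks over a fixed time budget and histogram the end points into a location probability distribution.

```python
# Slower terrain yields fewer successful moves per unit time; end points
# accumulate into a location probability map.
import numpy as np

rng = np.random.default_rng(0)
speed = np.ones((100, 100))
speed[40:60, :] = 0.3                      # assumed slow band (e.g., forest)
counts = np.zeros_like(speed)
for _ in range(5000):                      # 5000 simulated travelers
    r, c = 50, 50
    for _ in range(60):                    # fixed time budget
        if rng.random() < speed[r, c]:     # moves succeed more often on fast terrain
            dr, dc = rng.integers(-1, 2, size=2)
            r = int(np.clip(r + dr, 0, 99)); c = int(np.clip(c + dc, 0, 99))
    counts[r, c] += 1
prob_map = counts / counts.sum()           # P(location | elapsed time)
```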
Passive sonar is widely used in practice to covertly detect maritime vessels. However, the detection of stealthy
vessels often requires active sonar. The risk of the overt nature of active sonar operation can be reduced by
using multistatic sonar techniques. Cheap sonar sensors that do not require any beamforming technique can be
exploited in a multistatic system for spatial diversity. In this paper, the Gaussian mixture probability hypothesis
density (GMPHD) filter, which is a computationally cheap multitarget tracking algorithm, is used to track multiple
targets using a multistatic sonar system that provides only bistatic range and Doppler measurements.
The filtering results are further improved by extending the recently developed PHD smoothing algorithm for
GMPHD. This new backward smoothing algorithm provides delayed, but better, estimates for the target state.
Simulations are performed with the proposed method on a 2-D scenario. Simulation results present the benefits
of the proposed algorithm.
This paper presents a simple method of target maneuver indication (TMI) from high range resolution (HRR) radar
measurements. The HRR TMI (HTMI) relates the slope of a target's range-Doppler image to the underlying turn rate
when the target undergoes a turn maneuver. Because the range-Doppler image is an intermediate product of the range
profile formation process of an HRR radar, this approach provides an easy and quick indication of target maneuvering
and, under favorable conditions, an estimate of the maneuver (its turn rate and turn radius). The target maneuver indication can be incorporated into a
target tracker to determine whether the target is decelerating or accelerating and to estimate the curvature of a turn so as
to improve tracking accuracy. In this paper, we first formulate the target maneuver indicator from HRR radar
measurements. Various methods for slope and slope rate estimation are then presented. The simulation environment is
described together with the software tools used to generate target RF signatures. Simulation results are presented to show
the operation and performance of the simple maneuver indicator for various encounter scenarios. Finally, the use of the
HTMI method for maneuvering target tracking is discussed with performance metrics of timeliness, tracking sensitivity,
and track accuracy as well as target identity confidence when HRR range profiles are also used for identification.
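One simple way to realize the slope estimate (our sketch, not necessarily the authors' estimator): take the peak Doppler bin in each range bin and fit a line by least squares.

```python
# Peak Doppler per range bin, then a least-squares line fit; the fitted
# slope maps to a turn rate through the radar geometry.
import numpy as np

def rd_slope(rd_image, range_bins, doppler_bins, min_power=0.1):
    peaks = doppler_bins[np.argmax(rd_image, axis=1)]   # peak Doppler per range bin
    keep = rd_image.max(axis=1) > min_power * rd_image.max()   # skip empty bins
    A = np.vstack([range_bins[keep], np.ones(keep.sum())]).T
    slope, _ = np.linalg.lstsq(A, peaks[keep], rcond=None)[0]
    return slope

img = np.zeros((64, 64))                                # synthetic image, slope 0.5
img[np.arange(64), (32 + 0.5 * np.arange(64)).astype(int)] = 1.0
print(rd_slope(img, np.arange(64.), np.arange(-32., 32.)))
```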
This paper examines the effect of sensor bias error on the tracking quality of a space-based infrared (IR) tracking system
that utilizes a Linearized Kalman Filter (LKF) for the highly non-linear problem of tracking a ballistic missile. The
tracking system consists of two satellites flying in a lead-follower formation tracking a ballistic target. Each satellite is
equipped with an IR sensor that provides azimuth and elevation to the target. The tracking problem is made more difficult
by a constant or slowly varying bias error present in each sensor's line-of-sight measurements. The
effect of this error on the state vector estimation is explored using different values for sensor accuracy and various
degrees of uncertainty in the target and platform dynamics. Scenarios are created using Satellite Toolkit for trajectories
with associated sensor observations. Mean Square Error results are given for tracking during the period when the target
is in view of the satellite IR sensors. The results of this research provide insight into the accuracy requirements of the
sensors and the suitability of the LKF estimator.
Multisensor Fusion, Multitarget Tracking, and Resource Management II
A fuzzy logic resource manager (RM) that enables a collection of unmanned aerial vehicles (UAVs) to automatically
cooperate to make meteorological measurements will be discussed. The RM renders the UAVs autonomous allowing
them to change paths and cooperate without human intervention. Innovations related to the "priority for helping" (PH)
fuzzy decision tree (FDT) used by the RM will be discussed. The PH FDT permits three types of automatic cooperation
between the UAVs. A subroutine of the communications routing algorithm (CRA) used by the RM is also examined.
The CRA allows the UAVs to reestablish communications if needed by changing their behavior. A genetic program
(GP) based procedure for automatically creating FDTs is briefly described. A GP is an algorithm based on the theory of
evolution that automatically evolves mathematical expressions or computer algorithms. The GP data mines a scenario
database to automatically create the FDTs. A recently invented co-evolutionary process that allows improvement of the
initially data mined FDT will be discussed. Co-evolution uses a genetic algorithm (GA) to evolve scenarios to augment
the GP's scenario database. The GP data mines the augmented database to discover an improved FDT. The process is
iterated, ultimately evolving a very robust FDT. Improvements to the PH FDT offered through co-evolution are
discussed. UAV simulations using the improved PH FDT and CRA are provided.
As sensors become more specialized, more powerful, and ubiquitous, previous fixed scheduling methods and
even adaptive algorithms become less effective, particularly in a stressing environment in which decisions must be
made as to how to most effectively allocate sensing resources. Managing sensors based on maximizing the expected
information value rate (EIVR) is implemented in our multi-target, distributed, information based sensor management
simulation of a 6-DOF forward air defense (FAD) environment. This generalized approach maximizes the flow of
valued information into our model of the world, which the mission manager uses to make decisions. It solves the
sensor management problem better than rule-based systems, which cease to perform optimally in a non-stationary
environment and whose performance does not degrade gracefully. This paper discusses simulation results demonstrating
the benefits and limitations of our information-based approach to sensor management. It also details the scenarios,
performance metrics, and results of the comparison.
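In its simplest form, EIVR-based tasking reduces to a greedy rule: pick the action with the largest value-weighted expected entropy reduction per unit time. The scalar-variance track model below is our illustrative assumption.

```python
# Greedy EIVR tasking over scalar Gaussian tracks (illustrative only).
import numpy as np

def expected_entropy_reduction(prior_var, meas_var):
    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)   # scalar Kalman update
    return 0.5 * np.log(prior_var / post_var)             # Gaussian entropy drop

def best_action(tracks, actions):
    """tracks: {id: (prior_var, value)}; actions: (track_id, meas_var, duration)."""
    def eivr(a):
        tid, meas_var, dt = a
        var, value = tracks[tid]
        return value * expected_entropy_reduction(var, meas_var) / dt
    return max(actions, key=eivr)

tracks = {1: (25.0, 1.0), 2: (4.0, 3.0)}
print(best_action(tracks, [(1, 1.0, 1.0), (2, 1.0, 1.0)]))
```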
In this paper, we consider the problem of collaborative management of uninhabited aerial vehicles (UAVs) for multitarget tracking. In addition to providing a solution to the problem of controlling individual UAVs, we present a method for controlling the information flow among them. The latter provides a solution to one of the main
problems in decentralized tracking, namely, distributed information transfer and fusion among the participating platforms. The problem of decentralized cooperative control considered in this paper is an optimization of the information obtained by a number of UAVs, carrying out surveillance over a region, which includes a number
of confirmed and suspected moving targets, with the goal of tracking confirmed targets and detecting new targets in the area. Each UAV has to decide on an optimal path with the objective of tracking as many targets as possible, maximizing the information obtained during its operation with the maximum possible accuracy at the
lowest possible cost. Limited communication between UAVs and uncertainty in the information obtained by each UAV regarding the location of the ground targets are addressed in the problem formulation. In order to handle these issues, the problem is presented as an operation of a group of decision makers. Markov Decision Processes (MDPs) are incorporated into the solution. A decision mechanism for collaborative distributed data fusion
provides each UAV with the required data for the fusion process while substantially reducing redundancy in the information flow in the overall system. We consider a distributed data fusion system consisting of UAVs that are decentralized, heterogeneous, and potentially unreliable. Simulation results are presented on a representative multisensor-multitarget tracking problem.
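The MDP ingredient can be illustrated with a toy value iteration (our sketch): a UAV on a small grid chooses moves to maximize discounted surveillance reward. The grid, rewards, and discount are placeholders, and np.roll wraps the grid edges for brevity.

```python
# Toy value iteration: V(s) = max_a [R(s) + gamma * V(next(s, a))].
import numpy as np

R = np.zeros((5, 5)); R[1, 3] = 1.0; R[4, 0] = 0.5   # reward near suspected targets
gamma, V = 0.9, np.zeros((5, 5))
moves = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]   # E, W, S, N, loiter
for _ in range(200):
    candidates = [R + gamma * np.roll(V, (-dr, -dc), axis=(0, 1))
                  for dr, dc in moves]
    V = np.maximum.reduce(candidates)                # Bellman backup over actions
print(V.round(2))                                    # value peaks near rewarded cells
```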
Automatic target recognition (ATR) system performance over various operating conditions is of great interest in military
applications. The performance of an ATR system depends on many factors, such as the characteristics of the input data, feature
extraction methods, and classification algorithms. Generally speaking, ATR performance evaluation can be performed
either theoretically or empirically. The theoretical evaluation method requires reasonably accurate underlying models for
characterizing target/clutter data, which in many cases are unavailable. The empirical (experimental) evaluation method,
on the other hand, needs a fairly large data set in order to conduct meaningful experimental tests. In this paper, we
present experimental performance evaluation of ATR algorithms using the Moving and Stationary Target Acquisition
and Recognition (MSTAR) data set. We conduct a comprehensive analysis of the ATR performance under different
operating conditions. In the experimental tests, different feature extraction techniques, Principal Component Analysis
(PCA), Linear Discriminant Analysis (LDA), and kernel PCA, are employed on target SAR imagery to reduce the feature
dimension. A number of classification approaches (Nearest Neighbor, Naive Bayes, and Support Vector Machine) are tested
and compared for their classification accuracy under different conditions such as various feature dimensions, target
classes, feature selection methods, and input data quality. Our experimental results provide a guideline for selecting
features and classifiers in ATR systems using synthetic aperture radar (SAR) imagery.
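The evaluation pipeline maps naturally onto scikit-learn, as in the hedged sketch below; random arrays stand in for MSTAR SAR chips, since loading the real data set is outside the snippet's scope.

```python
# Feature reduction (PCA / kernel PCA) followed by an SVM, cross-validated.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 64))        # 200 flattened 64x64 chips (placeholder)
y = rng.integers(0, 3, size=200)           # 3 target classes (placeholder)

for reducer in (PCA(n_components=30), KernelPCA(n_components=30, kernel='rbf')):
    clf = make_pipeline(reducer, SVC(kernel='linear'))
    print(type(reducer).__name__, cross_val_score(clf, X, y, cv=3).mean())
```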
Automatic target detection (ATD) systems using imaging sensors have played a critical role in site monitoring,
surveillance, and object tracking. Although numerous research efforts and systems have been designed to quickly detect
and recognize missile-like flying targets in cluttered environments, detection of flying targets from a long distance and
large format imagery data is still a challenge. The accuracy of target detection and recognition will greatly affect the
performance of the target tracking system. In this paper, we propose a novel framework to quickly detect missile-like
flying targets in a time-efficient manner. The framework is based on a coarse-to-fine strategy and consists of five
components executed in a sequential order: (1) A rapid clustering operation performs fast image segmentation; (2) based
on the segmentation results of three neighboring image frames, motion analysis identifies the regions of interest which
contain the flying targets; (3) a specially-designed double-thresholding operator precisely segments the moving targets
from the regions of interest; (4) a binary connectivity filter enhances the detected targets and removes the target noise;
and (5) a contour method analyzes the boundary of the detected targets for verification. To test the proposed approach,
a state-of-the-art 3D modeling and animation software tool was used to simulate target flight and attack. Experimental
results, obtained from the electro-optical (EO) images generated from the 3D simulations, illustrate a wide variety of
target and clutter variability, and demonstrate the effectiveness and robustness of the proposed approach.
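A compressed sketch of the five-stage chain, with each stage replaced by a simple stand-in (quantile segmentation, frame-to-frame occupancy change, hysteresis-style double-thresholding, size-based connectivity filtering); all thresholds are assumptions, not the paper's operators.

```python
# Coarse-to-fine detection over three neighboring frames f0, f1, f2.
import numpy as np
from scipy import ndimage

def detect(f0, f1, f2, t_low=0.3, t_high=0.6, min_pix=4):
    seg = [f > np.quantile(f, 0.99) for f in (f0, f1, f2)]   # (1) fast segmentation
    roi = seg[1] & ~seg[0] & ~seg[2]          # (2) occupied now, not in neighbors
    strong, weak = f1 > t_high, f1 > t_low    # (3) double-thresholding
    labels, n = ndimage.label(weak)
    keep = np.zeros_like(weak)
    for k in range(1, n + 1):                 # (4) connectivity filter: drop noise
        blob = labels == k
        if blob.sum() >= min_pix and (blob & strong).any() and (blob & roi).any():
            keep |= blob
    return keep      # (5) boundary/contour verification would run on `keep`
```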
The general demand for the prevention of collateral damage in military operations requires methods for robust automatic
identification of target objects such as vehicles, especially during target approach. This requires the development of
sophisticated techniques for automatic and semi-automatic interpretation of sensor data. In particular, the automatic pre-analysis
of reconnaissance data is important for the human observer as well as for autonomous systems. In the phase of
target approach, fully automatic methods are needed for the recognition of predefined objects. For this purpose,
appropriate sensors are used, such as imaging IR sensors suitable for day/night operation and laser radar supplying 3D
information about the scene. Classical methods for target recognition based on comparison with synthetic IR object
models have certain shortcomings, e.g., sensitivity to unknown weather conditions and the engine status of vehicles.
We propose a concept of generating efficient 2D templates for IR target signatures based on the evaluation of a precise
3D model of the target generated from real multisensor data. This model is created from near-term laser range and IR
data gathered by reconnaissance in advance to gain realistic and up-to-date target signatures. It consists of the visible part
of the object surface textured with measured infrared values. This enables recognition from slightly differing viewing
angles. Our test bed is realized by a helicopter equipped with a multisensor suite (laser radar, imaging IR, GPS, and
IMU). Results are demonstrated by the analysis of a complex scenario with different vehicles.
The literature is replete with assisted target recognition (ATR) techniques, including methods for ATR evaluation. Yet,
relatively few methods find their way to use in practice. Part of the problem is that the evaluation of an ATR may not go
far enough in characterizing its optimal use in practice. For example, a thorough understanding of a method's operating
conditions is crucial, e.g., performance across different sensor capabilities, scene context, target occlusions, etc. This
paper describes a process for a rigorous evaluation of ATR performance, including a sensitivity analysis. Ultimately, an
ATR algorithm is deemed valuable if it is actually utilized in practice by users. Thus, quantitative analysis alone is not
necessarily sufficient. Qualitative user assessment derived from user testing, surveys, and questionnaires is often needed
to provide a more complete interpretation of an evaluation for a particular method. We demonstrate our ATR evaluation
process using methods that perform target detection of civilian vehicles.
The importance of Networked Automatic Target Recognition systems for surveillance applications is continuously
increasing. Because of low-cost and limited-payload requirements, these networks are traditionally equipped
with lightweight, low-cost sensors such as Electro Optical or Infrared sensors. The quality of imagery acquired by
these sensors critically depends on the environmental conditions, type and characteristics of sensors, and absence
of occluding or concealing objects. In the past, a large number of efficient detection, tracking, and recognition
algorithms have been designed to operate on imagery of good quality. However, detection and recognition limits
under non-ideal environmental and/or sensor based distortions have not been carefully evaluated.
This work describes a real image dataset formed by imaging 10 die cast models of military vehicles at different
elevation and orientation angles. The dataset contains imagery acquired both indoors and outdoors. The indoor
dataset is composed of clear and distorted images. The distortions include defocus blur, side illumination, low
contrast, shadows, and occlusions. All images in this dataset, however, have a uniform blue background. The
indoor dataset is used to evaluate the degradation of recognition performance due to camera and illumination
effects. The recognition method is based on Bessel K forms. The dataset collected outdoors includes real
background and is much more complex to process. This dataset is used to evaluate performance of a fully
automatic target recognition system that involves a Haar-based detector to select potential regions of interest
within images; performs adjustment and fusion of detected regions; segments potential targets using a region
based approach; identifies targets using Bessel K form-based encoding; and performs clutter rejection. The
numerical results demonstrate that the complexity of the background and the presence of occlusions lead to
substantial detection and recognition performance degradations.
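As a pointer to the encoding step, here is a hedged sketch of Bessel K form parameter estimation from one filtered image, using the moment estimators p = 3/(excess kurtosis) and c = variance/p of Grenander and Srivastava; the choice of filter is ours.

```python
# Estimate the Bessel K form (shape p, scale c) of a filter response.
import numpy as np
from scipy import ndimage, stats

def bessel_k_params(image, sigma=2.0):
    resp = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # derivative filter
    excess = stats.kurtosis(resp, axis=None, fisher=True)       # kurtosis - 3
    p = 3.0 / max(excess, 1e-6)
    c = resp.var() / p
    return p, c     # one (p, c) pair per filter forms the target's signature

print(bessel_k_params(np.random.default_rng(0).normal(size=(128, 128))))
```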
In this paper, we propose a multi-agent system which uses swarming techniques to perform high
accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can
co-operatively share the information from low-resolution images of different looks and use this information
to perform high accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV)
systems-based approach is proposed which integrates processing capabilities, combines detection
reporting with live video exchange, and employs swarm behavior modalities that dramatically surpass individual
sensor system performance levels. We employ a real-time block-based motion analysis and compensation
scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the
effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only
bitshifts and additions, yet achieves a 16X pixel resolution enhancement and is moreover
parallelizable. We develop advanced, adaptive particle-filtering based algorithms to robustly track multiple
mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR
system utilizes the homographies between the sensors induced by the ground plane to overlap the local
observation with the received images from other UAVs. The motion of the UAVs distorts the estimated
homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address
this, using homography decomposition and ground plane surface estimation.
Significant multipath propagation and the presence of heavy clutter in indoor environments impose severe limitations on
through-the-wall radar imaging. It is highly desirable to properly interpret the radar images and determine the contents of
the indoor scene with a high level of confidence. Data collected from multiple viewpoints around a structure can be used
to improve imaging visibility into the indoor scene, which, in turn, enhances indoor target detection and localization for
urban sensing applications. In this paper, we consider multi-viewpoint radar imaging and present image registration and
fusion techniques for combining synthetic aperture radar images acquired from multiple locations along one or more
sides of an enclosed structure. Supporting results, based on real-data collected behind concrete walls in a semi-controlled
laboratory environment, are provided for demonstrating the improved performance of the multiple viewpoint scheme
compared to operation from a single location.
To perform multi-sensors simulations, the French DGA/DET (Directorate for Technical Evaluation of the French
Ministry of Defense) uses CHORALE (simulated Optronic Acoustic Radar battlefield). CHORALE enables the user to
create virtual and realistic multi spectral 3D scenes, and generates the physical signal received by one or several sensors,
typically an IR sensor or an acoustic sensor. This article presents different kinds of scenes, such as desert, urban, and countryside settings, used to evaluate intelligent artillery ammunition. The ammunition described has to detect thermal contrast above a target area to support its detection capability and firing decision. The scene is as realistic as possible so as to provide the French Army with good parameters, according to their request, in a
typical operational scenario. That includes backgrounds with trees, houses, roads, fields, targets, and dunes, with different
materials such as grass, sand, rock, wood, and concrete. Every object in the 3D scene is characterized by optronic parameters
used by the CHORALE workbench. The signal provided by this workbench is adapted by the AMOCO workbench, which allows the sensor technology to be defined. Then a processing unit provides the firing decision and the lethality data for the target that has been reached. Each tool is explained so that the physical phenomena in the scene can be understood, taking into account atmospheric transmission, the radiative parameters of objects, and counter-measure devices. Finally, this paper shows results obtained by coupling the operational scenario with the sensor model in a global simulation in
order to determine the overall performance of the artillery ammunition.
Multisensor Fusion Methodologies and Applications I
The probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters were introduced in 2000
and 2006, respectively, as approximations of the full multitarget Bayes detection and tracking filter. Both filters
are based on the "standard" multitarget measurement model that underlies most multitarget tracking theory -
namely, that sensor measurements are detections. Other sensors, however, collect measurements that are not
detections. This paper describes the extensions of the CPHD filter concept to a nonstandard bearing-only sensor
model proposed by Vihola.
Joint search and sensor management for space situational awareness presents daunting scientific and practical
challenges, as it requires simultaneously searching for new space objects and updating the catalog of current ones. We
demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of
Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability
Hypothesis Density Particle Filter (PHD-PF) tracker. Simulations and results using actual geosynchronous satellites
are presented.
Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting
scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects
(RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a
previously developed and tested sensor management objective function, the Posterior Expected Number of Targets
(PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor
platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling
this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible
(SBV) sensor and ground based radars.
Multisensor Fusion Methodologies and Applications II
Road-constrained tracking of multiple targets poses a challenge for standard tracking algorithms due to possible
target/road ambiguities. The random set approach accepts the existence of ambiguity and tracks the probability density
associated with each target/road hypothesis. Measurements from multiple sensors are used to update these densities via
random set analogues of the Bayesian filtering equations. Reports from humans have the potential to complement and
augment data provided by sensors. A challenge with incorporating human reports is that the reports' vagueness and
ambiguity lead to many possible interpretations. We propose a method for incorporating human reports into a road-constrained
random set tracker (RST). Our proposed approach involves mapping a human report into multiple plausible
precise measurements. These precise measurements are used to update the global density in a manner similar to the
sensor measurement case. We validated our approach using a simulated road network scenario, consisting of multiple
sensors and targets and a simple human observer model. The human observer's reports contained coarse information
about the number and relative location of the targets within a field of view. These human reports are mapped to multiple
groups of plausible measurements consisting of ranges and bearing angles with large errors. The performance of the
RST with and without the human reports is compared. A quantitative metric indicates that the inclusion of the human
reports increases the belief of the RST in the correct target/road hypothesis.
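The report-to-measurement mapping can be sketched as follows (our illustration; all numbers are assumptions): a vague report is expanded into several plausible range/bearing hypotheses with deliberately inflated errors, each of which then updates the density like a sensor measurement.

```python
# Expand one vague report ("a few vehicles roughly 2 km to the NE") into
# several plausible precise range/bearing measurements with large errors.
import numpy as np

def interpret_report(rng, n_interp=10, range_hint=2000.0, bearing_hint=np.pi / 4,
                     range_sigma=400.0, bearing_sigma=np.deg2rad(15.0)):
    """Returns an (n_interp, 2) array of [range, bearing] hypotheses."""
    r = rng.normal(range_hint, range_sigma, n_interp)
    b = rng.normal(bearing_hint, bearing_sigma, n_interp)
    return np.column_stack([r, b])   # each row updates the density like a sensor hit

hyps = interpret_report(np.random.default_rng(1))
print(hyps)
```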
Extensions to a previously developed service-based fusion process model are presented. The model
accommodates (1) traditional sensor data and human-generated input, (2) streaming and non-streaming data feeds, and
(3) the fusion of both physical and non-physical entities. More than a dozen base-level fusion services are identified.
These services provide the foundational functional decomposition of Levels 0-2 in the JDL fusion model. Concepts, such as
clustering, link analysis and database mining, that have traditionally been only loosely associated with the fusion
process, are shown to play key roles within this fusion framework. Additionally, the proposed formulation extends the
concepts of tracking and cross-entity association to non-physical entities, as well as supports effective exploitation of a
priori and derived context knowledge. Finally, the proposed framework is shown to support set theoretic properties, such
as equivalence and transitivity, as well as the development of a pedigree summary metric that characterizes the
informational distance between individual fused products and source data.
Target obscuration, including foliage or building obscuration of ground targets and landscape or horizon obscuration
of airborne targets, plagues many real-world filtering problems. In particular, ground moving target
identification Doppler radar, mounted on a surveillance aircraft or unattended airborne vehicle, is used to detect
motion consistent with targets of interest. However, these targets try to obscure themselves (at least partially)
by, for example, traveling along the edge of a forest or around buildings. This has the effect of creating random
blockages in the Doppler radar image that move dynamically and somewhat randomly through this image.
Herein, we address tracking problems with target obscuration by building memory into the observations,
eschewing the usual corrupted, distorted partial measurement assumptions of filtering in favor of dynamic Markov
chain assumptions. In particular, we assume the observations are a Markov chain whose transition probabilities
depend upon the signal. The state of the observation Markov chain attempts to depict the current obscuration and
the Markov chain dynamics are used to handle the evolution of the partially obscured radar image. Modifications
of the classical filtering equations that allow observation memory (in the form of a Markov chain) are given. We
use particle filters to estimate the position of the moving targets. Moreover, positive proof-of-concept simulations
are included.
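A minimal particle-filter sketch of the idea (our construction): the observation is a two-state visible/obscured Markov chain whose transition probabilities depend on the particle state, so particles are weighted by P(o_k | o_{k-1}, x_k) rather than by a conventional measurement likelihood.

```python
# Particles weighted by the observation chain's transition probability.
import numpy as np

rng = np.random.default_rng(0)

def p_obscured(x):                       # blockage likelier near the "forest" at x=0
    return 0.8 * np.exp(-0.5 * (x / 5.0) ** 2)

def trans_prob(o_prev, o, x):            # 0 = visible, 1 = obscured
    p = p_obscured(x)
    row = np.array([[1 - p, p],                        # from visible
                    [0.4 * (1 - p), 0.6 + 0.4 * p]])   # from obscured (sticky)
    return row[o_prev, o]

N = 500
parts = rng.normal(0.0, 10.0, N)
w = np.full(N, 1.0 / N)
parts += rng.normal(0.0, 1.0, N)                       # signal dynamics
w *= trans_prob(0, 1, parts)                           # observed: visible -> obscured
w /= w.sum()
print(np.sum(w * parts))                               # position estimate
```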
Multisensor Fusion Methodologies and Applications III
In this paper, we investigate sensor fusion along three avenues: statistical, biological, and categorical. The first two
approaches are analyzed simultaneously to provide a precise and rigorous sensor fusion methodology. The statistical
model currently enhances Bayesian methods for tracking, and suggests further application to target identification and
fusion - involving both low level feature extraction and higher level sensor output combination. The biological model is
also applied to multiple levels of the fusion problem. On the lowest level, it utilizes biologically-inspired results for
improved feature extraction. On the higher levels, it develops biologically-inspired evolutionary and agency algorithms
for sensor output combination and sensor network analysis. Ultimately, we model the entire fusion process with category
theory. Category theory allows for the application of advanced mathematical theory to fusion analysis. In addition to
using category theory as a modeling tool, in this paper we adapt categorical logic via topos theory to provide an
advanced framework for decision fusion - initially using the topos of graphs. Graphs are a simpler representation. We
suggest formulations which will be richer - toward the goal of a theoretically robust and computationally practical sensor
fusion system for assisted/automatic target recognition.
There is no universally accepted methodology to determine how much confidence one should have in a classifier
output. This research proposes a framework to determine the level of confidence in an indication from a classifier
system where the output is a measurement value. There are two types of confidence developed in this paper. The
first is confidence in a classification system or classifier and is denoted classifier confidence. The second is the
confidence in the output of a classification system or classifier. In this paradigm, we posit that the confidence in
the output of a classifier should be, on average, equal to the confidence in the classifier as a whole (i.e., classifier
confidence). The amount of confidence in a given classifier is estimated using multiattribute preference theory
and forms the foundation for a quadratic confidence function that is applied to posterior probability estimates.
Classifier confidence is currently determined based upon individual measurable value functions for classification
accuracy, average entropy, and sample size, and the form of the overall measurable value function is multilinear
based upon the assumption of weak difference independence. Using classifier confidence, a quadratic function is
trained to be the confidence function which inputs a posterior probability and outputs the confidence in a given
indication. In this paradigm, confidence is not equal to the posterior probability estimate but is related to it.
This confidence measure is a direct link between traditional decision analysis techniques and traditional pattern
recognition techniques. This methodology is applied to two real world data sets, and results show the sort of
behavior that would be expected from a rational confidence measure.
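One hedged construction consistent with the posited property (ours, not necessarily the paper's exact form): fit a quadratic f(q) = a q^2 + b q with f(1) = 1 and with the mean of f over the posterior estimates equal to the classifier confidence C.

```python
# Quadratic confidence function whose average output equals C by construction.
import numpy as np

def fit_confidence(posteriors, classifier_conf):
    m1, m2 = posteriors.mean(), (posteriors ** 2).mean()
    a = (classifier_conf - m1) / (m2 - m1)   # enforce mean f(q_i) = C
    b = 1.0 - a                              # enforce f(1) = 1
    return lambda q: a * q ** 2 + b * q

q = np.array([0.55, 0.7, 0.8, 0.9, 0.95])    # posterior estimates on a test set
f = fit_confidence(q, classifier_conf=0.75)  # C from the multiattribute value model
print(f(q).mean())                           # ~0.75 by construction
```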
Multisensor Fusion Methodologies and Applications IV
One of the most critical challenges in distributed data fusion is the avoidance of information double counting (also called
"data incest" or "rumor propagation"). This occurs when a node in a network incorporates information into an
estimate - e.g. the position of an object - and the estimate is injected into the network. Other nodes fuse this estimate
with their own estimates, and continue to propagate estimates through the network. When the first node receives a fused
estimate from the network, it does not know if it already contains its own contributions or not. Since the correlation
between its own estimate and the estimate received from the network is not known, the node cannot fuse the estimates
in an optimal way. If it assumes that both estimates are independent of each other, it unknowingly double counts the
information that has already been used to obtain the two estimates. This leads to overoptimistic error covariance
matrices. If the double-counting is not kept under control, it may lead to serious performance degradation. Double
counting can be avoided by propagating uniquely tagged raw measurements; however, that forces each node to process
all the measurements and precludes the propagation of derived information. Another approach is to fuse the information
using the Covariance Intersection (CI) equations, which maintain consistent estimates irrespective of the cross-correlation
among estimates. However, CI does not exploit pedigree information of any kind. In this paper we present an
approach that propagates multiple covariance matrices, one for each uncorrelated source in the network. This is a way to
compress the pedigree information and avoids the need to propagate raw measurements. The approach uses a generalized
version of the Split CI to fuse different estimates with appropriate weights to guarantee the consistency of the estimates.
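For orientation, the standard CI equations that the paper generalizes are sketched below; Split CI additionally carries separate correlated and uncorrelated covariance parts, which this minimal version omits. Choosing the weight w to minimize the fused determinant is one common convention.

```python
# Covariance intersection: consistent fusion under unknown cross-correlation.
import numpy as np
from scipy.optimize import minimize_scalar

def ci_fuse(x1, P1, x2, P2):
    def fused(w):
        Pi = np.linalg.inv(w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2))
        x = Pi @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
        return x, Pi
    w = minimize_scalar(lambda w: np.linalg.det(fused(w)[1]),
                        bounds=(0.0, 1.0), method='bounded').x
    return fused(w)

x, P = ci_fuse(np.array([1., 0.]), np.diag([4., 1.]),
               np.array([0., 1.]), np.diag([1., 4.]))
print(x, P, sep='\n')
```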
A distributed data fusion system consists of a network of sensors, each capable of local processing and fusion of
sensor data. There has been a great deal of work in developing distributed fusion algorithms applicable to a network
centric architecture. Currently there are at least a few approaches including naive fusion, cross-correlation fusion,
information graph fusion, maximum a posteriori (MAP) fusion, channel filter fusion, and covariance intersection
fusion.
However, in general, in a distributed system such as an ad hoc sensor network, the communication architecture is
not fixed. Each node has knowledge of only its local connectivity but not the global network topology. In those
cases, the distributed fusion algorithm based on information graph type of approach may not scale due to its
requirements to carry long pedigree information for decorrelation.
In this paper, we focus on scalable fusion algorithms and conduct analytical performance evaluation to compare
their performance. The goal is to understand the performance of those algorithms under different operating
conditions. Specifically, we evaluate the performance of the channel filter fusion, Chernoff fusion, Shannon fusion,
and Bhattacharyya fusion algorithms. We also compare their results to naive fusion and "optimal" centralized fusion
algorithms under a specific communication pattern.
Pearl's traditional message passing algorithm, developed in the 1980s, was the first exact inference
algorithm for Bayesian networks (BNs). Although it was originally developed for discrete
polytree networks only, it has been widely used in networks with loops to provide approximate
solutions. In such cases, the messages propagated around the loops are not exact, and the
method is called loopy belief propagation. Loopy propagation usually converges, and when
it converges, it provides good approximate solutions. However, when dealing with arbitrary
continuous Bayesian networks, message representations and manipulations need special care
because continuous variables may have arbitrary distributions and their dependency relationships
could be nonlinear. In this paper, we propose a loopy message passing mechanism
for arbitrary continuous Bayesian networks called Unscented Message Passing (UMP-BN).
UMP-BN combines Pearl's algorithm and an efficient nonlinear transformation technique
called Unscented Transformation to provide estimates of the first two moments of the posterior
distributions for any hidden continuous variable. We study its convergence properties by
investigating various typical situations with different networks. The numerical experiments
show promising results.
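The unscented transformation at the core of UMP-BN fits in a few lines: propagate 2n+1 sigma points through the nonlinearity and recover the first two moments. The kappa value below is a conventional default, not necessarily the paper's setting.

```python
# Unscented transformation: sigma points give the first two output moments.
import numpy as np

def unscented_moments(mean, cov, f, kappa=0.0):
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = ([mean] + [mean + L[:, i] for i in range(n)]
                  + [mean - L[:, i] for i in range(n)])
    w = np.r_[kappa / (n + kappa), np.full(2 * n, 0.5 / (n + kappa))]
    ys = np.array([f(p) for p in pts])
    m = w @ ys
    c = (ys - m).T @ np.diag(w) @ (ys - m)
    return m, c

m, c = unscented_moments(np.array([0.0]), np.array([[1.0]]),
                         lambda x: np.array([np.sin(x[0])]))
print(m, c)
```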
In this work we focus on the relationship between the Dempster-Shafer (DS) and Bayesian evidence accumulation.
While it is accepted that the DS theory is, in a certain sense, a generalization of the probability theory, the approaches
vary in several important respects, including the treatment of uncertain information and the way the evidence is
combined, making direct comparison of results of the two analyses difficult. In this work we ameliorate these
difficulties by proposing a mathematical framework within which the relationship between the two methods can be made
precise. The findings of the investigation elucidate the role uncertainty plays in the DS theory and enable evaluation of
relative fitness of the two techniques for practical data fusion scenarios.
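As a concrete anchor for the DS side of the comparison, here is Dempster's rule of combination over a toy two-hypothesis frame (the masses are made-up values).

```python
# Dempster's rule: intersect focal elements, renormalize by 1 - conflict.
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass."""
    out, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                out[C] = out.get(C, 0.0) + a * b
            else:
                conflict += a * b
    return {A: v / (1.0 - conflict) for A, v in out.items()}

m1 = {frozenset({'tank'}): 0.6, frozenset({'tank', 'truck'}): 0.4}
m2 = {frozenset({'truck'}): 0.5, frozenset({'tank', 'truck'}): 0.5}
print(dempster_combine(m1, m2))
```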
Multisensor Fusion Methodologies and Applications V
In many fusion problems, such as Level 2 (situational assessment) or Level 3 (impact assessment), observations
frequently provide indirect, rather than direct, evidence. In such cases, the measurements affect the evidence level of
interest through a functional relationship, such as speed being measured through the functional relationship between it
and position observations over time. A general evidence accrual system that incorporates indirect observations into the
evidence generation is developed. The technique, based on the concepts of first-order and reduced-order observer
theory, can incorporate both observation quality and level of doctrine understanding in the uncertainty measure of the
evidence. The technique does use a network structure with links and propagation of evidence, but, unlike a Bayesian
taxonomy, it does not rely upon the strict probabilistic underpinnings. In this work, to demonstrate its proof of
capability, the technique is applied to a force-on-force Level 2 fusion problem. The technique, based upon a Level 1
fusion target classification evidence accrual algorithm, uses a fuzzy Kalman filter to inject new evidence into the nodes
of interest to modify the level of evidence. The fuzzy Kalman allows for the level of evidence to incorporate an
uncertainty or quality measure into the report.
Even though the definitions of the Joint Directors of Laboratories (JDL) "fusion levels" were established in 1987,
published in 1991, and revised in 1999 and 2004, the meaning, effects, control, and optimization of interactions among the
fusion levels have not as yet been fully explored and understood. Specifically, this is apparent from the abstract JDL
definitions of "Levels 2/3 Fusion" - situation and threat assessment (SA/TA), which involve deriving relations among
entities, e.g., the aggregation of object states (i.e., classification and location) in SA, while TA uses SA products to
estimate/predict the impact on situations of the actions/interactions taken by the participating entities. Given
all the existing knowledge in the information fusion and human factors literature (both prior to and after the introduction
of "fusion levels" in 1987), there are still open questions remaining in regard to the implementation of knowledge
representation and reasoning methods under uncertainty to afford SA/TA. Therefore, to promote exchange of ideas and
to illuminate the historical, current and future issues associated with Levels 2/3 implementations, leading experts were
invited to present their respective views on various facets of this complex problem. This paper is a retrospective
annotated view of the invited panel discussion organized by Ivan Kadar (first author), supported by John Salerno, in
order to provide both a historical perspective of the evolution of the state-of-the-art (SOA) in higher-level "Levels 2/3"
information fusion implementations by looking back over the past ten or more years (before JDL), and based upon the
lessons learned to forecast where focus should be placed to further enhance and advance the SOA by addressing key
issues and challenges. In order to convey the panel discussion to audiences not present at the panel, annotated position
papers summarizing the panel presentation are included.
We present a novel approach to predictive situation awareness that leverages human
insight to enhance the forecasting abilities of Fusion Levels 2 and 3. Existing
technologies fail to support predictive and impact modeling under realistic conditions,
particularly when there exist few historic exemplars on which to base inferences or when
full awareness of the situation includes unobservable elements. We report on our
ongoing efforts to develop FutureFusion, a collaborative system that builds predictive
awareness and enables futurists to visualize paths to possible futures and formulate
predictions on the ultimate outcome of scenarios of interest. FutureFusion's human
interpretable knowledge representation is unique in its ability to capture qualitative
descriptions of possible futures and quantify them to build computational models.
Further, FutureFusion captures both popular consensus and high-risk outliers,
thereby reducing the potential for surprise. Finally, by efficiently diversifying the
modeling process across a heterogeneous and distributed community of experts, this
approach avoids the common pitfalls of more traditional modeling approaches.
FutureFusion helps to cast light on blind spots, mitigate human biases, and maintain a
holistic, up-to-date predictive and impact awareness.
The concept surrounding super-resolution image reconstruction is to recover a highly-resolved
image from a series of low-resolution images via between-frame subpixel image
registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then
apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System
(UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles
and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This
algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is
first built from the original video data by image registration and bi-cubic interpolation between a
fixed reference frame and every additional frame. It is well known that the median filter is robust to
outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can
restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm,
unlike traditional approaches based on highly-computational iterative algorithms. Experimental
results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very
efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution
algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back-projection algorithm, our proposed algorithm offers strong efficiency and robustness, as well as good visual performance. This is particularly useful for
the application of super-resolution to UAS surveillance video, where real-time processing is highly
desired.
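As a rough sketch of the pixel-wise median step described above (assuming the subpixel offsets have already been estimated by registration; the function below is our illustration, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import zoom, shift

def coarse_to_fine_sr(frames, offsets, scale=2):
    # frames: low-resolution 2-D arrays; offsets: per-frame (dy, dx) subpixel
    # shifts relative to the reference frame, assumed estimated by registration.
    stack = []
    for frame, (dy, dx) in zip(frames, offsets):
        up = zoom(frame, scale, order=3)                 # cubic upsampling to the fine grid
        stack.append(shift(up, (dy * scale, dx * scale), order=3))
    # The pixel-wise median over the coarsely super-resolved stack is robust
    # to registration outliers and yields the refined super-resolved image.
    return np.median(np.stack(stack), axis=0)
```

Because the refinement is a single median over the stack rather than an iterative optimization, the cost is essentially one pass over the data, which is what makes the approach attractive for real-time UAS video.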
In traditional super-resolution methods, researchers generally assume that accurate
subpixel image registration parameters are given a priori. In reality, accurate image registration on
a subpixel grid is the single most critically important step for the accuracy of super-resolution
image reconstruction. In this paper, we introduce affine invariant features to improve subpixel
image registration, which considerably reduces the number of mismatched points and hence makes
traditional image registration more efficient and more accurate for super-resolution video
enhancement. Affine invariant features are invariant to affine transformations, including scale,
rotation, and translation. They are extracted from the second moment matrix through the
integration and differentiation covariance matrices. The experimental results show that affine
invariant interest points are more robust to perspective distortion and present more accurate
matching than traditional Harris/SIFT corners. In our experiments, all matching affine invariant
interest points are found correctly. In addition, for the same super-resolution problem, we can use far fewer affine invariant points than Harris/SIFT corners and still obtain good super-resolution results.
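The following sketch illustrates the general feature-matching registration pipeline the paper builds on, using OpenCV's SIFT detector as a stand-in for the affine-invariant (second-moment-based) interest points; RANSAC plays the role of rejecting mismatched points:

```python
import cv2
import numpy as np

def feature_register(ref, sensed):
    # Generic illustration only: SIFT stands in for the paper's
    # affine-invariant interest points.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref, None)
    k2, d2 = sift.detectAndCompute(sensed, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC discards mismatched points; the inlier affine model supplies the
    # subpixel registration parameters that super-resolution depends on.
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return A
```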
Realtime multisensor image registration algorithms must be computationally efficient. Often, simplifying assumptions
are made to reduce computational time. However, these simplifications usually trade registration convergence
performance for reduced runtime. For non-realtime applications where computational resources are not severely limited,
this tradeoff may be reversed to improve convergence performance at the expense of increased computational cost. To
this end we introduce a smart iterative approach to minimize mis-registrations and thus optimize registration
convergence probability. The approach involves performing a registration sweep over a smart sampling of parameters
governing feature generation. The approach uses two components: a feature sensitivity measure (FSM) and a
registration verification metric (VM). The FSM measures the effect of parameter values on feature set variability. This
measure enables choice of a suitable parameter sampling density to use for performing iterative registration solution
search. The VM provides feedback on the registration solution verity in the absence of ground truth and is used to
identify a converged solution. First, we provide an overview of the registration framework used to generate convergence
results. Next, we introduce the FSM and present its mathematical properties. We then describe the VM and present the
iterative algorithm. We present numerical results illustrating FSM convergence with increasing parameter sampling
density for Canny edge features in SAR imagery. We illustrate use of FSM convergence behavior to select a suitable
parameter sampling density for use in the iterative algorithm. Finally, SAR-to-EO registration performance results are
presented showing improved convergence probability.
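A hypothetical skeleton of the iterative search is sketched below (all names are ours; the abstract does not publish the paper's interfaces):

```python
def iterative_registration(fixed, moving, param_grid, extract, register, vm, vm_thresh):
    # param_grid: feature-generation parameters (e.g., Canny thresholds for SAR
    # edges) sampled at a density chosen offline via the FSM; extract: feature
    # generator; vm: verification metric scoring a solution without ground truth.
    best_score, best_solution = float("-inf"), None
    for params in param_grid:
        solution = register(extract(fixed, params), extract(moving, params))
        score = vm(fixed, moving, solution)
        if score > best_score:
            best_score, best_solution = score, solution
        if score >= vm_thresh:        # VM flags a converged registration
            break
    return best_solution, best_score
```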
Working with New York data as a representative and instructive example, we fuse aerial ladar imagery with satellite
pictures and Geographic Information System (GIS) layers to form a comprehensive 3D urban map. Digital photographs
are then mathematically inserted into this detailed world space. Reconstruction of the photos' view frusta yields their
cameras' locations and pointing directions which may have been a priori unknown. It also enables knowledge to be
projected from the urban map onto georegistered image planes. For instance, absolute geolocations can be assigned to
individual pixels, and GIS annotations can be transferred from 3D to 2D. Moreover, such information propagates among
all images whose view frusta intercept the same urban map location. We demonstrate how many imagery exploitation challenges (e.g., identifying objects in cluttered scenes, or selecting all photos containing a particular stationary ground target) become mathematically tractable once a 3D framework for analyzing 2D images is adopted. Finally, we close by briefly
discussing future applications of this work to photo-based querying of urban knowledge databases.
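In its simplest form, the step that transfers knowledge between the 3D urban map and a georegistered image plane reduces to the pinhole camera model (a generic sketch, not the authors' specific formulation):

```python
import numpy as np

def project(world_pt, K, R, t):
    # Once a photo's view frustum is reconstructed (intrinsics K, pose R, t),
    # any 3-D urban-map point maps to a pixel, and GIS annotations with it.
    p = K @ (R @ world_pt + t)      # world -> camera -> image plane
    return p[:2] / p[2]             # pixel coordinates (u, v)
```

Roughly speaking, intersecting a pixel's back-projected ray with the ladar surface inverts the same relation, which is how absolute geolocations can be assigned to individual pixels.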
Spatial data sharpening techniques that fuse images and maps are described. The statistical basis of these techniques is reviewed and extended to sharpen other kinds of spatial data that can be difficult to collect in denied areas; one example is demographic data. We demonstrate the ability to derive, from county- or district-level census data and Landsat imagery, high-resolution population maps that are accurate to within 5% of the true population within a test area.
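One common statistical scheme for this kind of sharpening is dasymetric mapping: distribute each district's census count over its pixels in proportion to a weight derived from the imagery (e.g., a land-cover class). The sketch below assumes the per-pixel weights are given; the paper's exact statistical model may differ.

```python
import numpy as np

def sharpen_population(district_pop, district_id, weight):
    # district_pop: {id: census count}; district_id: per-pixel district labels;
    # weight: per-pixel weights from Landsat land cover (assumed precomputed).
    pop = np.zeros_like(weight, dtype=float)
    for did, total in district_pop.items():
        w = np.where(district_id == did, weight, 0.0)
        if w.sum() > 0:
            pop += total * w / w.sum()   # pixels share the district total
    return pop                            # high-resolution population map
```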
It is believed that the fusion of multiple different images into a single image should be of great benefit to Warfighters engaged in a search task. Accordingly, much research has focused on the improvement of algorithms designed for image fusion. Many different fusion algorithms have already been developed; however, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research is to apply a visual performance-based assessment methodology to four algorithms that are specifically designed for fusion of multispectral digital images. The image fusion algorithms used in this study included a Principal Component Analysis (PCA) based algorithm, a shift-invariant wavelet transform algorithm, a contrast-based algorithm, and the standard method of fusion, pixel averaging. The methodology used has been developed to
acquire objective human visual performance data as a means of evaluating the image fusion algorithms. Standard
objective performance metrics, such as response time and error rate, were used to compare the fused images versus two
baseline conditions comprising each individual image used in the fused test images (an image from a visible sensor and
a thermal sensor). Observers completed a visual search task using a spatial-forced-choice paradigm. Observers
searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the
screen the target was located. Response time and percent correct were measured for each observer. Results of this
study and future directions are discussed.
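For concreteness, the baseline "standard method" named above is simply a pixel average of the co-registered source images (the other three algorithms are substantially more involved):

```python
import numpy as np

def average_fuse(visible, thermal):
    # Baseline fusion used in the study: a plain pixel average of the
    # co-registered visible and thermal images.
    return (visible.astype(np.float64) + thermal.astype(np.float64)) / 2.0
```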
Compressed sensing (CS) has recently attracted much interest because of its important offerings and versatility.
High-resolution radar imaging applications such as through-the-wall radar (TWR) imaging or inverse synthetic
aperture radar (ISAR) imaging are two key application areas that can greatly benefit from CS. Both applications require probing targets with large-bandwidth radar signals and collecting, then processing, a large number of data samples to achieve high-resolution imaging. These applications are also characterized by sparse scenes, in which targets of interest are few and have larger cross-sections than clutter objects. Reducing the number of samples without compromising image quality shortens the acquisition time and saves signal bandwidth. This reduction is important when surveillance must be performed within a small time window and when targets are required to remain stationary, without translational or rotational motion, to avoid blurring and smearing of the images. In this paper, we discuss the applicability of compressed sensing to indoor radar imaging, using synthesized TWR data.
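As a generic illustration of CS recovery from a reduced set of samples y = Ax of a sparse scene x (this is a standard iterative soft-thresholding solver, not necessarily the one used in the paper):

```python
import numpy as np

def ista(A, y, lam=0.1, iters=200):
    # Iterative soft-thresholding (ISTA) for min ||A x - y||^2 + lam * ||x||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = x - step * (A.T @ (A @ x - y))        # gradient step on the data fit
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrinkage
    return x                                      # sparse reflectivity estimate
```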
This paper addresses two problems commonly associated with video target tracking systems. First, video target detection and tracking usually require extensive searching in a large space to find the best matches for pre-registered templates. Existing fast search methods cannot guarantee a globally optimal match, which results in substandard performance. To obtain a true global match, a full search at the pixel or sub-pixel level is required. Obviously, this introduces significant computational overhead, which limits the implementation of these algorithms in real-time applications. In this paper, we propose a fast method for computing two-dimensional normalized cross-correlations to efficiently find the globally optimal match over a large image area. Comparisons and a complexity analysis are provided to show the efficiency of the proposed algorithm. Second, another challenge commonly faced by detection and tracking systems is the accurate detection of target orientation in a two-dimensional image. This problem is motivated by applications in which people walking in and out must be detected and a fast image registration method is needed to compensate for changes in rotation, translation, and size, which arise naturally as the target's distance from the camera changes dramatically. To address this issue, we propose a novel and efficient eigenvector-based method to detect target orientation and apply it to an automatic human recognition system. Experimental and real-world test results verify that the proposed fast algorithm achieves accuracy similar to that of the computationally expensive recursive registration method.
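For reference, a standard way to obtain a fast, global, two-dimensional normalized cross-correlation is to compute both the correlation and the local image statistics in the frequency domain (Lewis-style running sums); whether this matches the paper's specific method is an assumption on our part:

```python
import numpy as np
from scipy.signal import fftconvolve

def fast_ncc(image, tmpl):
    t = tmpl - tmpl.mean()                                # zero-mean template
    ones = np.ones_like(t)
    num = fftconvolve(image, t[::-1, ::-1], mode="valid")
    s1 = fftconvolve(image, ones, mode="valid")           # local sums
    s2 = fftconvolve(image ** 2, ones, mode="valid")      # local sums of squares
    var = np.maximum(s2 - s1 ** 2 / t.size, 0.0)          # local image energy
    denom = np.sqrt(var * (t ** 2).sum())
    ncc = np.where(denom > 0, num / denom, 0.0)
    return np.unravel_index(np.argmax(ncc), ncc.shape)    # global best match
```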
This paper focuses on applying an interval recursive least-squares (RLS) filter to a video target tracking problem. An RLS filter can be sensitive to variations in filter parameters and to disturbances in the state observations, which can make its solutions impractical in real problems. In particular, when an RLS filter is applied to video target tracking, inaccurate parameters in the affine model may cause noticeable deviations from the true target positions and thus loss of the target. To make the results robust, each filter parameter and state observation is allowed to vary within an interval. Motivated by this idea, an interval RLS filter is proposed that produces state estimates and predictions as narrow intervals. Simulations show that the interval RLS filter is robust to state and observation noise and to variations in filter parameters and state observations, and that it outperforms an interval Kalman filter. Using the interval RLS filter, a video target tracking algorithm is developed to estimate the target position in each frame. The proposed tracking algorithm is robust to noise in the video sequences and to errors in the affine models, and outperforms the corresponding algorithm based on a standard RLS filter. Performance evaluations using real-world video sequences are provided to demonstrate the effectiveness of the proposed algorithm.
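For background, one update of a standard (point-valued) RLS filter is shown below; the interval filter lets the forgetting factor, the regressors, and the observations range over intervals in these same equations (the code is a textbook sketch, not the paper's interval arithmetic):

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    # w: weights; P: inverse correlation matrix; x: regressor; d: observation;
    # lam: forgetting factor. Returns the updated (w, P).
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    e = d - w @ x                        # a-priori estimation error
    w = w + k * e                        # weight update
    P = (P - np.outer(k, x @ P)) / lam   # inverse-correlation update
    return w, P
```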
In an optical fiber Sagnac interferometer, a fiber coupler splits the light into two paths that are connected to the
opposite ends of a fiber loop. Because the clockwise path and the anticlockwise path see the same environment,
the interferometer is always balanced. For measuring rotation and displacement, we place a small loop of
a calculated length in the fiber that causes the polarization to be different for the clockwise and anticlockwise
paths, thus frustrating the interferometer and reducing the output signal by an amount dependent on the rotation
of the small coil. The top of the coil is displaced sideways on rotation.
Sea clutter, the radar backscatter from the ocean surface, has been observed to be highly non-Gaussian. The K distribution is among the best distributions proposed to fit non-Gaussian sea clutter data. Using diffusive models, K-distributed sea clutter can be cast as Gaussian speckle, with a de-correlation time of 0.1 s, modulated by a Gamma-distributed component, with a de-correlation time of about 1 s, characterizing the large-scale structures of the sea surface. Our analyses of large amounts of real sea clutter data suggest that, between the time scales at which the Gaussian speckle and the large-scale structures on the sea surface de-correlate, sea clutter can be characterized as a multifractal 1/f process. This feature is not captured by diffusive models and underlies why the K distribution cannot fit real sea clutter data sufficiently well. We surmise that by combining the K distribution and its associated diffusive models with the multifractal formalism, the many different physical processes underlying sea clutter can be more comprehensively characterized.
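The compound (diffusive) picture described above can be simulated directly: a slowly varying gamma texture modulating fast complex-Gaussian speckle gives a K-distributed amplitude. The sketch below ignores the two de-correlation times for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def k_clutter(n, nu=1.0):
    # Gamma texture (unit mean, shape nu) modulating complex Gaussian speckle;
    # the amplitude |z| of the product is K distributed. A full model would
    # low-pass filter texture (~1 s) and speckle (~0.1 s) to their time scales.
    texture = rng.gamma(nu, 1.0 / nu, n)
    speckle = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
    return np.sqrt(texture) * speckle
```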
Modeling sea clutter by chaotic dynamics has been an exciting yet heatedly debated topic. To resolve controversies
associated with this approach, we use the scale-dependent Lyapunov exponent (SDLE) to study sea clutter. The SDLE
has been shown to be able to unambiguously distinguish chaos from noise. Our analyses of almost 400 sea clutter datasets measured by Professor Simon Haykin suggest that on very short time scales, sea clutter may be classified as noisy chaos, described by a parameter γ that characterizes the speed of information loss. We show that γ can be used very effectively to detect low-observable targets within sea clutter.
An envelope of two approximations forms a plane-wave coherent laser vibrometry calculation set that indicates mechanisms of measured and simulated spectral "reduction." Path-length differences modulate the return of a large-spot-size continuous-wave laser, leading to structural mode sensing (not a random-signal issue). Calculations for sine-swept and multi-modal approximations show that vibrating rectangular plates constrained on all edges have a return that varies substantially with the low-frequency vibration modes, providing modal recognition and identification not available with 1-D modes (strips or bars).
A true correlation receiver is now available. Referred to as the interferoceiver, it can precisely measure the Doppler and range of fast-moving, remote targets. With an interferoceiver, the Doppler-range ambiguity disappears. Doppler and range inaccuracies, intersystem interference, and unreliable passive identification were problems of the Patriot missile system that caused tragic cases of fratricide during Operation Iraqi Freedom. The interferoceiver will be able to remove these problems and significantly reduce fratricide.
Effective missile warning and countermeasures remain an unfulfilled goal for the Air Force, as well as the wider military and civilian aerospace communities. To meet the detection and jamming timeframes dictated by today's proliferated missiles and near-term upgraded threats, sensors with the required sensitivity, field of regard, and spatial resolution are being pursued in conjunction with advanced processing techniques that allow detection and discrimination beyond 10 km. The greatest driver of any missile warning system is detection and correct declaration: all targets must be detected with high confidence and with very few false alarms. Generally, imaging sensors are limited in their detection capability by the presence of heavy background clutter, sun glints, and inherent sensor noise. Many threat environments include false alarm sources such as burning fuels, flares, exploding ordnance, and industrial emitters. Spectral discrimination has been shown to be one of the most effective methods of improving the performance of typical missile warning sensors, particularly in heavy-clutter situations, and its utility has been demonstrated in the field and on board multiple aircraft. Utilization of the background and clutter spectral content, coupled with additional spatial and temporal filtering techniques, has yielded robust adaptive real-time algorithms that increase signal-to-clutter ratios against point targets and thereby increase detection range. The algorithm outlined here is the result of continued work, with reported results against visible missile tactical data. The results are summarized and compared in terms of the computational cost expected for implementation on a real-time field-programmable gate array (FPGA) processor.
This paper presents a comparison study of the theoretical and empirical Cramer-Rao lower bounds (CRLBs) of wind-parameter estimates from VALIDAR, a 2-μm wavelength coherent Doppler lidar system located at the NASA Langley Research Center in Virginia. The statistical behavior of the Doppler shift (DS) estimates is of particular interest. The estimates are commonly modeled as unimodal Gaussian random variables, and this study follows that convention. The empirical statistics of the DS estimates are computed from a large amount of sample data in order to obtain meaningful statistical moments. The impact of the new nonlinear adaptive Doppler-shift estimation technique known as NADSET is also briefly presented in terms of the statistics of the wind-parameter estimates.
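The comparison rests on the textbook form of the bound (the VALIDAR-specific likelihood is in the paper and not reproduced here): the variance of any unbiased estimator is bounded by the inverse Fisher information,

```latex
\operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln p(\mathbf{x};\theta)}{\partial \theta^{2}}\right],
```

with θ the Doppler shift and p(x; θ) the data likelihood; the empirical counterpart is then taken as the sample variance of the DS estimates over many realizations.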
Although personal navigation systems based on PDAs are convenient for individuals, they do not satisfy the needs of groups with special purposes. Therefore, a real-time geographical information exchange system based on PDAs is presented in this article. The structure and elements of the system are described. Finally, an experimental example is given that demonstrates the effectiveness of the system.
In this paper, we propose automated target recognition using the scale-invariant feature transform (SIFT) in a PowerPC-based infrared (IR) imaging system. An IR image yields more feature values at night than in the daytime, whereas a visible image yields more feature values in the daytime. IR-based object recognition therefore lends itself to digital surveillance applications, because more feature values are available at night; in the daytime, an IR image yields few feature values and cannot match the effective feature content of a visible image. The proposed method consists of two stages. First, we localize the interest points of moving objects in position and scale. Second, we build a description of each interest point and recognize the moving objects. The method uses SIFT for effective feature extraction in the PowerPC-based IR imaging system; the SIFT pipeline consists of scale-space extrema detection, orientation assignment, keypoint description, and feature matching. Because field tests indicate that objects in IR imagery appear more spread out than in visible imagery, and because SIFT yields fewer feature values in IR images than in visible images, the proposed SIFT descriptor covers a range about 1.5 times wider than that used for visible images, enabling more precise matching of objects. Experimental results show that the proposed method successfully extracts object feature values in the PowerPC-based IR imaging system.
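A plain OpenCV version of the matching stage is sketched below as a generic stand-in for the PowerPC pipeline (the paper's widened descriptor range for IR imagery is not reproduced):

```python
import cv2

def sift_match(ir_frame, template, ratio=0.8):
    # Scale-space extrema detection, orientation assignment, and keypoint
    # description all happen inside SIFT; matching uses Lowe's ratio test.
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(template, None)
    _, d2 = sift.detectAndCompute(ir_frame, None)
    knn = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]
```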
In this paper, we propose image fusion for open and unknown environments using normalized mutual information (NMI) in an infrared (IR) and visible vision system. Image fusion, a branch of image processing, creates a new image that combines information extracted from several different sensors, yielding effective information about an object of interest: object types and characteristics that cannot be obtained from any single sensor. Multi-sensor image fusion has two advantages. First, a multi-sensor image has inherent redundancy for each sensor, because images from various spectral bands are fused together. Second, unlike a single sensor, a multi-sensor system combines the information of each sensor, so that object information can be separated more easily in real environments. The proposed method consists of feature-point extraction and comparison, image registration, and pseudo-color display. Feature-point extraction is the stage that looks for similar feature points across the sensors; candidate points are extracted with a corner detector, and the detected correspondence points are then compared using NMI. Images acquired from the different sensors require registration, because each image has its own independent coordinate system and the reference image must be transformed into the coordinate system of the sensed image; the registration uses a homography (H) matrix transformation. The two images are then overlaid using HSV-based blending. Experimental results show that the proposed method produces high-precision pseudo-color fused imagery from the multi-sensor data and achieves image registration using the probability-based method.
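One common form of the NMI score used to compare corresponding feature points is NMI = (H(A) + H(B)) / H(A, B); whether the paper uses exactly this normalization is our assumption:

```python
import numpy as np

def nmi(a, b, bins=64):
    # Normalized mutual information from the joint histogram of two patches.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))   # Shannon entropy
    return (h(px) + h(py)) / h(p)
```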
In an empirical study, observers gave ratings of their ability to detect a military target in filtered images of natural
scenes. The purpose of the study was twofold. First, the absolute values of the convolution images generated with oriented Gabor filters of different scales and orientations, and with pairs of filters (corner filters), provided brightness images that were evaluated as saliency maps of potential target locations. The generation of the saliency maps with oriented
Gabor filters was modeled after the second-order processing of texture in the visual system. Second, two methods of
presentation of the saliency maps were compared. With the flicker presentation method, a saliency map was flickered on
and off at a 2-Hz rate and superimposed upon the image of the original scene. The flicker presentation method was
designed to take advantage of the known properties of the magnocellular pathway of the visual system. A second method (toggle presentation), used simply for comparison, required observers to switch back and forth between the saliency image and the image of the original scene. Primary results were that (1) saliency images produced with corner
filters were rated higher than those produced with simple Gabor filters, and (2) ratings obtained with the flicker method
were higher than those obtained with the toggle method, with the greatest advantage for filters tuned to lower spatial
frequencies. The second result suggests that the flicker presentation method holds considerable promise as a new
technique for combining information (dynamic image fusion) from two or more independently obtained (e.g., multi-spectral)
or processed images.
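The saliency maps themselves follow the standard rectified Gabor-energy recipe; a minimal sketch (the paired corner filters are omitted):

```python
import cv2
import numpy as np

def gabor_saliency(gray, ksizes=(9, 17, 33), n_orient=4):
    # Sum of absolute (rectified) responses of oriented Gabor filters over
    # several scales and orientations, normalized to [0, 1].
    img = gray.astype(np.float64)
    sal = np.zeros_like(img)
    for ksize in ksizes:
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            # args: kernel size, sigma, orientation, wavelength, aspect, phase
            kern = cv2.getGaborKernel((ksize, ksize), ksize / 6.0, theta,
                                      ksize / 2.0, 0.5, 0.0)
            sal += np.abs(cv2.filter2D(img, -1, kern))
    return sal / sal.max()
```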
Multi-spectral image portrayal using several sensors is a revolutionary way to increase the amount of useful visual information available to the end user. However, for maximum usability, the information from multiple sensors must be fused into
a single image that can be understood. The decisions about which sensors are delivering the most important information
for a given viewing situation and what manipulations should be done to the acquired data are complex. To better
examine this complexity, information was obtained from aviators about which visual tasks are deemed to be most
important. This information was gathered from discussions with pilots and other aircrew members as well as from
relevant publications. The important visual task information was then used to develop a matrix that included specific
visual aspects of the task (e.g., detection or identification). The matrix also included other parameters that could affect
or alter the ability to "see" the desired target or perform the task. These other parameters include ambient lighting,
environmental conditions (e.g., clear or hazy atmospheres), man-made impediments to vision (camouflage or smoke),
and which image enhancing algorithms should be applied (e.g., contrast enhancement or noise reduction). This top-down
evaluation was then used to determine which image enhancement algorithms are most important and which will be
employed most often for the identified visual tasks.
Multisensor data usually present complementary information, such as visual-band imagery and infrared imagery. There
is strong evidence that the fused multisensor imagery increases the reliability of interpretation, and the colorized
multisensor imagery improves observer performance and reaction times. In this paper, we propose an optimized joint
approach of image fusion and colorization in order to synthesize and enhance multisensor imagery such that the resulting
imagery can be automatically analyzed by computers (for target recognition) and easily interpreted by human users (for
visual analysis). The proposed joint approach provides two sets of synthesized images, a fused image in grayscale and a
colorized image in color using a fusion procedure and a colorization procedure, respectively. The proposed image fusion
procedure is based on the advanced discrete wavelet (aDWT) transform. The fused image quality (IQ) can be further
optimized with respect to an IQ metric by implementing an iterative aDWT procedure. On the other hand, the daylight coloring technique renders the multisensor imagery with natural colors, which human users are used to seeing in everyday life. We hereby propose to colorize the multisensor imagery locally, segment by segment, by mapping the color statistics of the multisensor imagery to those of daylight images, so that the colorized images resemble daylight pictures. This
local coloring procedure also involves histogram analysis, image segmentation, and pattern recognition. The joint fusion
and colorization approach can be performed automatically and adaptively regardless of the image contents. Experimental
results with multisensor imagery showed that the fused image is informative and clear, and the colored image appears
realistic and natural. We anticipate that this optimized joint approach for multisensor imagery will help improve target
recognition and visual analysis.
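The color-statistics mapping at the heart of the daylight coloring can be sketched as a per-channel mean/variance transfer (shown globally here; the paper applies it locally, per segment, after histogram analysis and segmentation):

```python
import numpy as np

def color_transfer(src_rgb, daylight_rgb):
    # Shift/scale each channel of the source so its statistics match those of
    # a daylight reference image; a global, simplified stand-in for the
    # paper's local, segment-wise coloring procedure.
    out = np.empty_like(src_rgb, dtype=np.float64)
    for c in range(3):
        s = src_rgb[..., c].astype(np.float64)
        r = daylight_rgb[..., c].astype(np.float64)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-9) * r.std() + r.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```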
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode
sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively
Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides
streamlined inter-processor connections with deterministically high performance. Software programmability, scalability,
ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant
advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods
of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming
model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They
exchange data and control through a network of self-synchronizing channels. A common application design pattern on
this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically
configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and
decompression, network processing, and graphics applications.
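The work-farm pattern maps naturally onto a conventional process pool; the sketch below approximates on a multicore host what the MPPA realizes with hundreds of processor objects and self-synchronizing channels (the worker logic is hypothetical):

```python
from multiprocessing import Pool

def worker(item):
    # Stand-in for a worker object; on the MPPA each worker is a software
    # object on its own 32-bit RISC processor, fed by hardware channels.
    return item * item

if __name__ == "__main__":
    with Pool(processes=8) as pool:                    # parallel set of workers
        for result in pool.imap(worker, range(16)):    # one input stream in, one output stream out
            print(result)
```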