Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation–maximization learning process. Human versus animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust feature (SURF) classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to a SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has a slightly degraded performance but almost an order of magnitude improvement in computation time, enabling real-time implementation.
In this paper, we propose a real-time human versus animal classification technique using a pyroelectric sensor array and hidden Markov models (HMMs). The technique starts with a variational energy functional level set segmentation technique to separate the object from the background. After segmentation, we convert the segmented object to a signal by considering column-wise pixel values and then finding the wavelet coefficients of the signal. HMMs are trained to statistically model the wavelet features of individuals through an expectation–maximization learning process. Human versus animal classifications are made by evaluating a set of new wavelet feature data against the trained HMMs using the maximum-likelihood criterion. Human and animal data acquired using a pyroelectric sensor in different terrains are used for performance evaluation of the algorithms. Failures of the computationally efficient SURF-feature-based approach that we developed in our previous research are due to distorted images produced when the object moves very fast or when the temperature difference between target and background is insufficient to accurately profile the object. We show that wavelet-based HMMs handle some of the distorted profiles in the data set well. Further, the HMM achieves an improved classification rate over the SURF algorithm with almost the same computation time.
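The maximum-likelihood HMM decision described above can be sketched with the scaled forward algorithm. Everything below is illustrative: the two 2-state models, their parameters, and the quantized wavelet symbols are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

# Two hypothetical 2-state models standing in for the trained human/animal HMMs.
pi = np.array([0.6, 0.4])                      # initial state probabilities
A_human = np.array([[0.9, 0.1], [0.2, 0.8]])   # "sticky" transitions
A_animal = np.array([[0.5, 0.5], [0.5, 0.5]])  # diffuse transitions
B = np.array([[0.7, 0.2, 0.1],                 # emission probabilities over
              [0.1, 0.3, 0.6]])                # three quantized wavelet symbols

obs = [0, 0, 1, 0, 0]                          # quantized wavelet-coefficient sequence
ll_h = forward_log_likelihood(obs, pi, A_human, B)
ll_a = forward_log_likelihood(obs, pi, A_animal, B)
label = "human" if ll_h > ll_a else "animal"   # maximum-likelihood decision
```

In practice each class model would be trained by expectation–maximization on wavelet features of many profiles; here the decision rule alone is shown.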
This paper presents a proof of concept sensor system based on a linear array of pyroelectric detectors for recognition of moving objects. The utility of this prototype sensor is demonstrated by its use in trail monitoring and perimeter protection applications for classifying humans against animals with object motion transverse to the field of view of the sensor array. Data acquisition using the system was performed under varied terrains and using a wide variety of animals and humans. With the objective of eventually porting the algorithms onto a low resource computational platform, simple signal processing, feature extraction, and classification techniques are used. The object recognition algorithm uses a combination of geometrical and texture features to provide limited insensitivity to range and speed. Analysis of system performance shows its effectiveness in discriminating humans and animals with high classification accuracy.
Classification of human and animal targets imaged by a linear pyroelectric array sensor presents some unique challenges, especially in target segmentation and feature extraction. In this paper, we apply two approaches to address this problem. Both techniques start with a variational energy functional level set segmentation technique to separate the object from the background. After segmentation, in the first technique, we extract features such as texture, invariant moments, edge, shape information, and spectral contents of the segmented object. These features are fed to classifiers including Naïve Bayesian (NB) and Support Vector Machine (SVM) for human versus animal classification. In the second technique, the speeded-up robust feature (SURF) extraction algorithm is applied to the segmented objects. A code book technique is used to classify objects based on SURF features. Human and animal data acquired using the pyroelectric sensor in different terrains are used for performance evaluation of the algorithms. The evaluation indicates that the features extracted in the first technique in conjunction with the NB classifier provide the highest classification rates. While the SURF feature plus code book approach provides a slightly lower classification rate, it provides better computational efficiency, lending itself to real-time implementation.
Profiling sensor systems have been shown to be effective for detecting and classifying humans against animals. A
profiling sensor with a 360° horizontal field of view was used to generate profiles of humans and animals for
classification. The sensor system contains a long wave infrared camera focused on a smooth conical mirror to
provide a 360 degree field of view. Human and animal targets were detected at 30 meters and an approximate height
to width ratio was extracted for each target. Targets were tracked for multiple frames in order to segment targets
from background. The average height to width ratio was used as a single feature for classification. The Mahalanobis
distance was calculated for each target in the single feature space to provide classification results.
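The single-feature Mahalanobis classification above can be sketched as follows. In one dimension the Mahalanobis distance reduces to |x − μ|/σ; the height-to-width training ratios below are made up for illustration.

```python
import numpy as np

# Hypothetical training data: average height-to-width ratios per class.
human_ratios = np.array([2.8, 3.1, 2.9, 3.3, 3.0])
animal_ratios = np.array([0.9, 1.2, 1.0, 1.1, 0.8])

def mahalanobis_1d(x, samples):
    # In one dimension the Mahalanobis distance is |x - mean| / std.
    return abs(x - samples.mean()) / samples.std(ddof=1)

def classify(ratio):
    d_h = mahalanobis_1d(ratio, human_ratios)
    d_a = mahalanobis_1d(ratio, animal_ratios)
    return "human" if d_h < d_a else "animal"

print(classify(3.0))  # a tall, narrow target
print(classify(1.0))  # a short, wide target
```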
A profiling sensor has been realized using a vertical column of sparse detectors with the sensor's optical axis configured
perpendicular to the plane of the vertical column of detectors. Traditionally, detectors of the profiling sensor are placed
in a sparse vertical column configuration. A subset of the detectors may be removed from the vertical column and placed
at arbitrary locations along the anticipated path of the objects of interest, forming a custom detector array configuration.
Objects passing through the profiling sensor's field of view have traditionally been classified via algorithms processed
off-line. However, reconstruction of the object profile is impossible unless the detectors are placed at a known location
relative to each other. Measuring these detector locations relative to each other can be particularly time-consuming,
making this process impractical for custom detector configuration in the field. This paper describes a method that can be
used to determine a detector's relative location to other detectors by passing a known profile through the sensor's field of
view as part of the configuration process. Real-time classification results produced by the embedded controller for a
variety of objects of interest are also described in the paper.
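The detector-localization step described above can be sketched as a timing calculation: a calibration profile of known speed crosses the field of view, and each detector's relative position follows from distance = speed × time offset. The speed and trigger times below are hypothetical.

```python
import numpy as np

# Hypothetical calibration pass: a reference object of known speed moves past
# the custom detector array; each detector reports the time its channel triggers.
speed_mps = 1.5                                    # known speed of the calibration profile
trigger_times = np.array([0.0, 0.8, 1.2, 2.0])     # seconds, one entry per detector

# Relative detector positions follow from distance = speed * time offset.
positions = speed_mps * (trigger_times - trigger_times[0])
print(positions)  # metres from the first detector
```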
KEYWORDS: Signal to noise ratio, Sensors, Signal detection, Interference (communication), Clouds, Particles, Long wavelength infrared, Terbium, Signal attenuation, Target detection
The physical model for long wave infrared (LWIR) thermal imaging through a dust obscurant incorporates
transmission loss as well as an additive path radiance term, both of which are dependent on an obscurant
density along the imaging path. When the obscurant density varies in time and space, the desired signal
is degraded by two anti-correlated atmospheric noise components, the transmission (multiplicative) and the
path radiance (additive), which are not accounted for by a single transmission parameter. This research
introduces an approach to modeling the performance impact of dust obscurant variations. Effective noise
terms are derived for obscurant variations detected by a sensor via a forward radiometric analysis of the
imaging context. The noise parameters derived here provide a straightforward approach to predicting imager
performance with existing NVESD models such as NVThermIP.
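A minimal numerical sketch of the physical model described above, with illustrative radiance values: the sensor sees τ·L_target plus a path radiance term, and as the obscurant density fluctuates the two noise components move in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative radiometric model: sensor radiance = tau * L_target + L_path,
# with a fluctuating obscurant density rho driving both terms (values hypothetical).
L_target = 10.0                                 # target radiance (arbitrary units)
beta, L_amb = 0.5, 9.0                          # extinction coefficient, ambient radiance
rho = 1.0 + 0.1 * rng.standard_normal(10000)    # time-varying obscurant density

tau = np.exp(-beta * rho)                       # multiplicative transmission term
L_path = (1.0 - tau) * L_amb                    # additive path radiance term
L_sensor = tau * L_target + L_path

# The two components are anti-correlated: as tau drops, L_path rises.
corr = np.corrcoef(tau, L_path)[0, 1]
```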
Pyroelectric linear arrays can be used to generate profiles of targets. Simulations have shown that generated profiles can
be used to classify human and animal targets. A pyroelectric array system was used to collect data and classify targets as
either human or non-human in real time. The pyroelectric array system consists of a 128-element Dias 128LTI
pyroelectric linear array, an F/0.86 germanium lens, and a PIC18F4550 microcontroller for A/D conversion and
communication. The classifier used for object recognition was trained using data collected in petting zoos and tested
using data collected at the US-Mexico border in Arizona.
This paper describes the development of linear pyroelectric array systems for classification of human, animal, and
vehicle targets. The pyroelectric array is simulated to produce binary profiles of targets. The profiles are classified based
on height to width ratio using Naïve Bayesian classifiers. Profile widths of targets can vary due to the speed of the target.
Target speeds were calculated using two techniques: two array columns and a tilted array. The profile width was
modified by the calculated speeds to show an improvement in classification results.
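The two-column speed estimate and width correction above can be sketched as follows; the column spacing, frame period, and timing values are hypothetical.

```python
# Sketch of speed-compensated profile width (all numbers hypothetical).
column_spacing_m = 0.05      # separation between the two array columns
frame_period_s = 0.01        # time per profile sample

# A target triggers the second column 0.025 s after the first -> speed estimate.
dt_between_columns = 0.025
speed_mps = column_spacing_m / dt_between_columns   # 2.0 m/s

# Raw profile width in samples depends on target speed; convert to metres
# so that fast and slow crossings yield comparable width features.
width_samples = 40
width_m = width_samples * frame_period_s * speed_mps
print(width_m)
```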
This paper presents object profile classification results using range and speed independent features from an infrared
profiling sensor. The passive infrared profiling sensor was simulated using a LWIR camera. Field data collected near the
US-Mexico border to yield profiles of humans and animals is reported. Range and speed independent features based on
height and width of the objects were extracted from profiles. The profile features were then used to train and test three
classification algorithms to classify objects as humans or animals. The performance of Naïve Bayesian (NB), K-Nearest
Neighbors (K-NN), and Support Vector Machines (SVM) is compared based on their classification accuracy. Results
indicate that for our data set all three algorithms achieve classification rates of over 98%. The field data is also used to
validate our prior data collections from more controlled environments.
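A minimal Gaussian Naïve Bayes sketch over height/width-style profile features, assuming equal class priors; the training points are made up rather than drawn from the field data.

```python
import numpy as np

# Hypothetical (height, width) training features in metres.
humans = np.array([[1.8, 0.5], [1.7, 0.6], [1.9, 0.5], [1.75, 0.55]])
animals = np.array([[0.6, 1.1], [0.5, 1.3], [0.7, 1.2], [0.55, 1.25]])

def log_gaussian(x, mean, var):
    # Sum of per-dimension Gaussian log-densities (the "naive" independence step).
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def nb_classify(x):
    scores = {}
    for name, data in (("human", humans), ("animal", animals)):
        mean, var = data.mean(axis=0), data.var(axis=0, ddof=1)
        scores[name] = log_gaussian(x, mean, var)   # equal priors assumed
    return max(scores, key=scores.get)

print(nb_classify(np.array([1.85, 0.52])))  # tall, narrow target
print(nb_classify(np.array([0.6, 1.2])))    # short, wide target
```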
This paper provides a feasibility analysis and details of implementing a classification algorithm on an embedded
controller for use with a profiling sensor. Such a profiling sensor has been shown to be a feasible approach to a low-cost
persistent surveillance sensor for classifying moving objects such as humans, animals, or vehicles. The sensor produces
data that can be used to generate object profiles as crude images or silhouettes, and/or the data can be subsequently
automatically classified. This paper provides a feasibility analysis of a classification algorithm implemented on an
embedded controller, which is packaged with a prototype version of a profiling sensor. Implementation of the embedded
controller is a necessary extension of previous work for fielded profiling sensors and their appropriate applications.
Field data is used to confirm accurate automated classification.
This paper presents initial object profile classification results using range and elevation independent features from a
simulated infrared profiling sensor. The passive infrared profiling sensor was simulated using a LWIR camera. A field
data collection effort to yield profiles of humans and animals is reported. Range and elevation independent features
based on height and width of the objects were extracted from profiles. The profile features were then used to train and
test four classification algorithms to classify objects as humans or animals. The performance of Naïve Bayesian (NB),
Naïve Bayesian with Linear Discriminant Analysis (LDA+NB), K-Nearest Neighbors (K-NN), and Support Vector
Machines (SVM) is compared based on their classification accuracy. Results indicate that for our data set SVM and
(LDA+NB) are capable of providing classification rates as high as 98.5%. For perimeter security applications where
misclassification of humans as animals (true negatives) needs to be avoided, SVM and NB provide true negative rates of
0% while maintaining overall classification rates of over 95%.
This paper presents progress in image fusion modeling. One fusion quality metric based on the Targeting Task
performance (TTP) metric and another based on entropy are presented. A human perception test was performed with
fused imagery to determine effectiveness of the metrics in predicting image fusion quality. Both fusion metrics first
establish which of two source images is ideal in a particular spatial frequency pass band. The fused output of a given
algorithm is then measured against this ideal in each pass band. The entropy based fusion quality metric (E-FQM) uses
statistical information (entropy) from the images while the Targeting Task Performance fusion quality metric (TTP-FQM)
utilizes the TTP metric value in each spatial frequency band. This TTP metric value is the measure of available
excess contrast determined by the Contrast Threshold Function (CTF) of the source system and the target contrast. The
paper also proposes an image fusion algorithm that chooses source image contributions using a quality measure similar
to the TTP-FQM. To test the effectiveness of TTP-FQM and E-FQM in predicting human image quality preferences,
SWIR and LWIR imagery of tanks were fused using four different algorithms. A paired comparison test was performed
with both source and fused imagery as stimuli. Eleven observers were asked to select which image enabled them to
better identify the target. Over the ensemble of test images, the experiment showed that both TTP-FQM and E-FQM
were capable of identifying the fusion algorithms most and least preferred by human observers. Analysis also showed
that the performance of the TTP-FQM and E-FQM in identifying human image preferences is better than existing
fusion quality metrics such as the Weighted Fusion Quality Index and Mutual Information.
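The entropy ingredient of the E-FQM can be illustrated with a simple gray-level histogram entropy; this is a generic sketch, not the paper's exact metric.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Sanity check: a flat image carries no information, while a near-uniform
# random image approaches the 8-bit maximum of 8 bits per pixel.
flat = np.full((64, 64), 128, dtype=np.uint8)
rng = np.random.default_rng(1)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

e_flat, e_noisy = image_entropy(flat), image_entropy(noisy)
```

A fusion metric built on this idea compares the entropy of each band-passed fused output against the better of the two source images in that band.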
Human target identification performance based on target silhouettes is measured and compared to that of complete targets. The target silhouette identification performance of automated region based and contour based shape identification algorithms are also compared. The region based algorithms of interest are Zernike Moment Descriptor (ZMD), Geometric Moment Descriptor (GMD), and Grid Descriptor (GD) while the contour based algorithms considered are Fourier Descriptor (FD), Multiscale Fourier Descriptor (MFD), and Curvature Scale Space Descriptor (CS). The results from the human perception experiments indicate that at high levels of degradation, human identification of target based on silhouettes is better than that of complete targets. The shape recognition algorithm comparison shows that GD performs best, very closely followed by ZMD. In general region based shape algorithms perform better that contour based shape algorithms.
Spectral-spatial independent component analysis (ICA) basis functions of visible color images are similar to some processing elements in the human visual system in that they resemble Gabor filters and show color opponencies. In this research we studied combined spectral-spatial ICA basis functions of multispectral midwave infrared (MWIR) images. These ICA spectral-spatial basis functions were then used as filters to extract features from multispectral MWIR images for classification. The images were captured in the 3.0–5.0 µm, 3.7–4.2 µm, and 4.0–4.5 µm bands using a multispectral MWIR camera. In the proposed algorithm, phase relationships between the basis functions indicate how the extracted features from the different spectral band images can be combined. We used classification performance to compare features obtained by filtering using multispectral ICA basis functions, multispectral principal component analysis basis functions, and Gabor filters.
It is known that spectral-spatial ICA basis functions of visible color images are similar to some processing elements in
the human visual system in that they resemble Gabor filters and show color opponencies. In this research we study
combined spectral-spatial ICA basis functions of multispectral MWIR images. These ICA spectral-spatial basis
functions are then used as filters to extract features from multispectral MWIR images. It is hypothesized that learning
the added dimension of spectral information along with spatial characteristics of basis functions using ICA improves
classification performance for multispectral MWIR images. The images are captured in the 3.0–5.0 µm, 3.7–4.2 µm,
and 4.0–4.5 µm bands using a multispectral MWIR camera. The phase relationships between the basis functions indicate
how the extracted features from the different spectral band images can be combined. We use classification performance
to compare features obtained by filtering using multispectral ICA basis functions, multispectral PCA basis functions and
opponent Gabor filters.
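The Gabor filters that the ICA basis functions resemble are sinusoidal carriers under Gaussian envelopes. A minimal single-channel (non-opponent) kernel sketch, with hypothetical size, wavelength, and bandwidth parameters:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D even Gabor filter: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

g = gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0)
```

An opponent variant would apply such kernels with opposite signs to two spectral bands before summing the responses.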
This study determines the effectiveness of a number of image fusion algorithms through the use of the following image metrics: mutual information, fusion quality index, weighted fusion quality index, edge-dependent fusion quality index and Mannos-Sakrison’s filter. The results obtained from this study provide objective comparisons between the algorithms. It is postulated that multi-spectral sensors enhance the probability of target discrimination through the additional information available from the multiple bands. The results indicate that more information is present in the fused image than in either single-band image. The image quality metrics quantify the benefits of fusion of MWIR and LWIR imagery.
This paper describes the use of a rotating test pattern or reticle to measure the Modulation Transfer Function (MTF) of a staring array sensor. The method finds the Edge Spread Function (ESF) from which the MTF can be calculated. The rotating reticle method of finding the ESF of a sensor has several advantages over the static tilted edge method. The need for precise edge alignment is removed. Motion blur is used to simultaneously average out the effect of undersampling and to oversample the edge. The improved oversampling allows reduction of the noise in the generated ESF while keeping a high resolution. A unique data readout technique reads edge data perpendicular to the edge. Perpendicular readout eliminates the need to know or estimate the slope of the tilted edge. This MTF measurement method is validated using simulation and actual data captured by a digital camera. The resulting ESF plots agree well with expected results.
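The ESF-to-MTF chain described above can be sketched on a synthetic Gaussian-blurred edge: differentiate the oversampled ESF to get the line spread function (LSF), then take the normalized FFT magnitude. The blur width and sampling are hypothetical.

```python
import numpy as np

# Synthetic edge blurred by a Gaussian PSF (sigma in samples).
n, sigma = 1024, 8.0
x = np.arange(n) - n / 2
psf = np.exp(-0.5 * (x / sigma) ** 2)   # Gaussian line spread profile
esf = np.cumsum(psf)
esf /= esf[-1]                          # oversampled edge spread function, 0 -> 1

lsf = np.gradient(esf)                  # differentiate ESF to recover the LSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                           # normalize so MTF at zero frequency is 1
```

For a Gaussian blur the recovered MTF falls off monotonically with spatial frequency, which provides a quick sanity check of the pipeline.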
IEEE 802.11g WLANs operating at 2.4 GHz face interference from Bluetooth, which also uses the same frequency band. IEEE 802.11g is an orthogonal frequency division multiplexing (OFDM) based WLAN standard. Adaptive subcarrier selection (ASuS) involves using feedback from the receiver to dynamically allocate subcarriers for OFDM transmission. This paper proposes a method to avoid Bluetooth interference using ASuS. By adaptively choosing subcarriers for OFDM transmission, the frequencies used by Bluetooth can be avoided. Power level deviations in small groups of contiguous subcarriers with respect to other subcarriers in an OFDM symbol can be used as an indication of Bluetooth interference. Simulations show that as compared to the conventional OFDM technique, adaptive subcarrier selection results in significant reduction in the packet error rate.
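The interference-detection step can be sketched as follows, with hypothetical per-subcarrier powers and a made-up 6 dB deviation threshold: a narrowband Bluetooth hop raises a small contiguous group of subcarriers, which the transmitter then excludes from allocation.

```python
import numpy as np

rng = np.random.default_rng(2)

# 52 data subcarriers of an 802.11g OFDM symbol (power in dB, hypothetical).
power_db = rng.normal(0.0, 0.5, size=52)
power_db[20:24] += 12.0        # a narrowband Bluetooth hop raises a contiguous group

# Flag subcarriers whose power deviates strongly from the symbol median.
median = np.median(power_db)
interfered = np.flatnonzero(power_db - median > 6.0)

# An ASuS transmitter would then exclude these subcarriers from allocation.
usable = np.setdiff1d(np.arange(52), interfered)
```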