SUBSTANCE IDENTIFICATION TECHNOLOGIES | 3-7 OCTOBER 1993
Substance Identification Analytics
Editor(s): James L. Flanagan, Richard J. Mammone, Albert E. Brandenstein, Edward Roy Pike M.D., Stelios C. A. Thomopoulos, Marie-Paule Boyer, H. K. Huang, Osman M. Ratib
A novel method of segmenting un-transcribed speech into sub-word acoustic units using neural tree networks (NTN) is presented. An NTN is a hierarchical classifier that combines the properties of decision trees and feed-forward neural networks. Segmentation of speech into sub-word units (of acoustic significance) is vital for large-vocabulary, continuous speech recognition on un-transcribed speech databases.
Speaker identification and word spotting will shortly play a key role in many different fields. This paper presents an approach, based on the wavelet transform, to extract features from a speech signal. These features are based on the 'modulation model'. An adequate choice of the extracted features dramatically increases the efficiency of the classification performed on the different speakers or on the different words.
An evaluation of various classifiers for text-independent speaker recognition is presented. In addition, a new classifier is examined for this application. The new classifier is called the modified neural tree network.
This paper explores the application of new algorithms to the adaptive language acquisition model formulated by Gorin. The new method consists of incremental approaches for the algebraic learning of statistical associations proposed by Tishby. The incremental methods are evaluated on a text-based natural language experiment, namely the inward call manager task. Performance is evaluated with respect to the alternative methods, namely the smooth mutual information method and the pseudo-inverse solution.
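The statistical associations at the heart of such a language-acquisition model can be illustrated with a small sketch. The snippet below is only a plausible stand-in, assuming a pointwise mutual-information score between words and call-routing actions, updated incrementally from co-occurrence counts; the vocabulary, actions, and counts are made up, and the paper's smooth mutual information and pseudo-inverse methods are not reproduced.

```python
# Hypothetical sketch: incremental word-action association via pointwise
# mutual information from co-occurrence counts (illustrative data only).
import numpy as np
from collections import defaultdict

counts = defaultdict(float)        # (word, action) co-occurrence counts
word_counts = defaultdict(float)
action_counts = defaultdict(float)
total = 0.0

def update(words, action):
    """Incremental update from one (sentence, action) training example."""
    global total
    for w in words:
        counts[(w, action)] += 1
        word_counts[w] += 1
    action_counts[action] += len(words)
    total += len(words)

def association(word, action):
    """Pointwise mutual information estimate between a word and an action."""
    p_wa = counts[(word, action)] / total
    p_w = word_counts[word] / total
    p_a = action_counts[action] / total
    return np.log(p_wa / (p_w * p_a)) if p_wa > 0 else float("-inf")

update("please connect me to the operator".split(), "call_manager")
update("what is my balance".split(), "account_info")
print(association("operator", "call_manager"))
```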
Speech recognition by machine has finally come of age in a practical sense. A major problem in speech recognition, however, stems from the large variance of different utterances for the same word. This paper proposes an efficient method of achieving high-accuracy speaker-independent isolated-word recognition through the implementation of associative memories and neural networks. The basic architecture of such a process involves two stages: speech analysis and recognition.
A face recognition system has been developed and demonstrated at the Rutgers University Center for Computer Aids for Industrial Productivity. The system uses a preliminary data reduction step, gray scale projections, and a fast transform technique to greatly reduce the computational complexity of the problem and, consequently, the cost of high-speed implementation. The decision function is a new, extremely cost-effective neural network, the Mammone/Sankar Neural Tree Network. This network can be trained and re-trained rapidly on face image data, and the system has built-in facilities for acquiring and editing a large database of face images. Recognition rates higher than 90% were achieved on data sets containing up to 269 subjects. More importantly, it performed well on subjects with and without their glasses, under a wide range of changes in facial expression, and under a variety of small tilts, translations and rotations.
We present an extension of the neural tree network (NTN) architecture that lets it solve multi-class classification problems with only binary fan-out. We then demonstrate its effectiveness by applying it in a method for image segmentation. Each node of the NTN is a multi-layer perceptron and has one output for each segment class. These outputs are treated as probabilities to compute a confidence value for the segmentation of that pixel. Segmentation results with high confidence values are deemed to be correct and not processed further, while those with moderate and low confidence values are deemed to be outliers by this node and passed down the tree to child nodes. These tend to be pixels on the boundaries between different regions. We have used a realistic case study of segmenting the pole, coil and painted coil regions of light bulb filaments (LBF). The input to the network is a set of maximum, minimum and average intensities in radial slices of a circular window around a pixel, taken from a front-lit and a back-lit image of an LBF. Training is done with a composite image drawn from images of many LBFs. The results compare favorably with a traditional segmentation technique applied to the LBF test case.
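The confidence-based routing described above can be sketched as follows. This is only a minimal illustration under assumed details: a small single-hidden-layer perceptron at the node, a softmax over per-class outputs, and an arbitrary confidence threshold; the feature dimensions and weights are placeholders, not the authors' trained network.

```python
# Hypothetical sketch of confidence-based routing at one NTN node.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def route(node_weights, feature_vec, threshold=0.9):
    """Return (label, confidence) if the node is confident, else None so the
    pixel is passed down the tree to a child node."""
    hidden = np.tanh(node_weights["W1"] @ feature_vec + node_weights["b1"])
    probs = softmax(node_weights["W2"] @ hidden + node_weights["b2"])
    confidence = probs.max()
    if confidence >= threshold:
        return int(probs.argmax()), confidence   # accept at this node
    return None                                  # outlier: send to a child

# Toy usage with random weights for a 6-feature input and 3 segment classes.
rng = np.random.default_rng(0)
w = {"W1": rng.normal(size=(8, 6)), "b1": np.zeros(8),
     "W2": rng.normal(size=(3, 8)), "b2": np.zeros(3)}
print(route(w, rng.normal(size=6)))
```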
Automation is the driving force behind many technological advances. One of the major areas of automation research is machine vision. Machine vision centers on object recognition as a means of perceiving a real world environment. Addressing machine vision issues through the application of neural networks is the focus of much contemporary research. Neural networks model the computational units of the human brain, process information in parallel, and as such are extremely well suited for emulation of the human perception process. Thus a neural network approach is presented as a solution to the 3D object recognition problem. Specifically, a hybrid Hopfield network (HHN) previously used to solve 2D occluded object recognition problems is adapted to the 3D object recognition problem. Local and relational features are proposed for use in a HHN graph matching algorithm. Finally, 3D single and multiple input object recognition is realized.
A novel algorithm is proposed in this paper, which builds and then shrinks a three-layer feed-forward neural network to achieve arbitrary classification in the n-dimensional Euclidean space. The algorithm offers guaranteed convergence and a 100% correct classification rate on training patterns, as well as an explicit generalization rule for predicting how a trained network generalizes to patterns that did not appear in training. Moreover, this generalization rule is continuously adjustable from an equal-angle measure to an equal-distance measure via a single reference number to allow adaptation of performance for different requirements.
Neural networks and image pyramids share many similarities, as we have shown in previous papers. In this paper we explore the use of neural network learning algorithms for image pyramids. In particular, learning algorithms for principal component extraction have some interesting properties for pyramids. These algorithms are consistent with Linsker's principle of maximum information preservation. We review several algorithms for principal component extraction and show how they can be used in regular, gray-level pyramids. The use of constrained autoassociative back-propagation networks yields a new type of pyramid, where not all cells perform the same reduction function. Several applications for this new type of pyramid are outlined.
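One well-known neural learning rule for principal component extraction is Oja's rule; the sketch below shows it extracting the first principal component of synthetic "patch" data. The data model, learning rate, and dimensionality are illustrative assumptions and not taken from the paper, which discusses several such algorithms.

```python
# Minimal sketch of Oja's learning rule for first-principal-component
# extraction on synthetic data with one dominant direction.
import numpy as np

rng = np.random.default_rng(1)
direction = rng.normal(size=16)
direction /= np.linalg.norm(direction)
# Synthetic "patches": one dominant direction plus isotropic noise.
patches = (3.0 * rng.normal(size=(5000, 1))) * direction \
          + 0.5 * rng.normal(size=(5000, 16))
patches -= patches.mean(axis=0)

w = rng.normal(size=16)
w /= np.linalg.norm(w)
eta = 0.01
for _ in range(3):                      # a few passes over the data
    for x in patches:
        y = w @ x
        w += eta * y * (x - y * w)      # Oja's update keeps ||w|| near 1

# Compare with the leading eigenvector of the sample covariance.
cov = patches.T @ patches / len(patches)
pc1 = np.linalg.eigh(cov)[1][:, -1]
print("alignment with first PC:", abs(w @ pc1) / np.linalg.norm(w))
```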
The application of security technology ranges from production control through the supervision of special areas or objects to pattern recognition. In many cases the security system acts as a preprocessor, and its output should help the human visual system to detect important information. The output of hardcopy devices such as printers or fax machines is often restricted to quantized levels, so that a quantization process has to be executed. We present several attempts to perform this with neural structures. The capabilities of layered networks and their learning algorithms lead to feedback networks. Our examination analyses the relationship between the theory of feedback networks (especially the Hopfield net and the bidirectional associative memory net) and the iterative algorithms used in digital halftoning. This analysis allows a better understanding of the methods for digital halftoning and shows how they can benefit from each other.
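The connection between feedback networks and iterative halftoning can be illustrated with an energy-minimization sketch: binary pixels are flipped whenever the flip lowers a perceptual error energy, much like asynchronous Hopfield updates. The Gaussian "eye model" filter, image size, and greedy sweep schedule below are assumptions for illustration, not the paper's algorithm.

```python
# Sketch of iterative (Hopfield-style) halftoning: greedily flip binary
# pixels whenever the flip lowers the energy ||G*(x - b)||^2, where G is a
# small Gaussian low-pass filter acting as a crude eye model.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
x = gaussian_filter(rng.random((32, 32)), 2)      # smooth gray-level test image
b = (x > 0.5).astype(float)                       # initial binary estimate

def energy(binary):
    return np.sum(gaussian_filter(x - binary, 1.0) ** 2)

e = energy(b)
for _ in range(3):                                # a few greedy sweeps
    for i in range(b.shape[0]):
        for j in range(b.shape[1]):
            b[i, j] = 1.0 - b[i, j]               # trial flip
            e_new = energy(b)
            if e_new < e:
                e = e_new                         # keep the flip
            else:
                b[i, j] = 1.0 - b[i, j]           # undo
print("final energy:", e)
```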
We present new procedures for preprocessing video images for automatic lipreading applications.
Explosive Detection and Other Applications of Neural Networks
Detection of explosive materials from X-ray diffraction spectra makes use of the fact that different crystalline materials exhibit characteristic diffraction patterns composed of peaks at different energy locations. The positions of the peaks in the spectra are (ideally) invariant for a given material, as are, to a lesser degree, the relative heights of the peaks. However, the presence of absorbing materials may alter the measured heights of the peaks, or even eliminate certain peaks altogether. Furthermore, lower signal-to-noise ratios in the spectra, due to short exposure/scanning times, lead to further distortion of the spectra. In this paper we present a feature set which offers some degree of robustness in the presence of such distortions.
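One plausible construction of such a feature vector combines peak energy locations with relative (rather than absolute) peak heights, which is roughly what the reasoning above calls for. The sketch below is not the paper's feature set; the toy spectrum, prominence threshold, and normalization are assumptions.

```python
# Illustrative sketch: turn an X-ray diffraction spectrum into a feature
# vector of peak energy locations and relative peak heights.
import numpy as np
from scipy.signal import find_peaks

def peak_features(spectrum, energies, n_peaks=5):
    idx, props = find_peaks(spectrum, prominence=0.05 * spectrum.max())
    order = np.argsort(props["prominences"])[::-1][:n_peaks]  # most prominent
    idx = idx[order]
    locs = energies[idx]
    heights = spectrum[idx]
    heights = heights / heights.max()          # relative heights only
    return np.concatenate([np.sort(locs), np.sort(heights)])

# Toy spectrum: three Gaussian peaks plus noise (arbitrary units).
e = np.linspace(20, 100, 800)
spec = (np.exp(-(e - 35) ** 2 / 2) + 0.6 * np.exp(-(e - 60) ** 2 / 2)
        + 0.3 * np.exp(-(e - 80) ** 2 / 2))
spec += 0.02 * np.random.default_rng(2).normal(size=e.size)
print(peak_features(spec, e, n_peaks=3))
```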
A technique is presented for the automatic generation of analog behavioral circuit models using feed-forward neural networks in static and dynamic configurations. These models are generated by using the data output from an accurate SPICE simulation to train a neural network to model a particular circuit function. Results are given using two types of neural networks: a static neural network to model an analog multiplier, and a recurrent neural network for modeling the dynamics of a bandlimited circuit. Simulations show that neural networks are able to learn the essential nonlinear and dynamic properties found in these circuits using the training technique described.
The 3D active vision system described in this paper has been developed for a geometrical study of object surfaces. Indeed, all our work has been directed towards the geometrical exploitation of the calculated 3D data, and hence the proposed methods are based on the concept of curves.
The concept of n-dimensional attributed parallel array systems is introduced and shown to be a useful tool for the formal description of the static as well as the dynamic characteristics of neural networks. Because of the underlying grid structure, Kohonen's model of self-organizing feature maps is especially well suited to being represented by n-dimensional attributed parallel array systems. Using our formal description model we prove that Kohonen's global algorithm for the adaptation of the weights of the neurons in a fully connected network can be simulated in a network with locally bounded connections, which can be represented by an n-dimensional attributed parallel array system containing only parallel array productions with a bounded neighborhood. These results show that our model of n-dimensional attributed parallel array systems can be used as a specification language for various models of neural networks and as a formal tool for proving specific characteristic features of these networks.
This paper reports upon improvements and extensions of rule-based expert systems and related technologies in the context of their application to the cargo container screening problem. These innovations have been incorporated into a system built for and deployed by U.S. Customs with funding provided by the DCI's Counter Narcotics Committee. Given the serious nature of the drug smuggling threat and the low probability of intercept, the ability to target the extremely limited inspectional resources available to U.S. Customs is a prerequisite for success in fighting the `Drug War.'
Neural network calculations are compared to conventional probability calculations for decision making in an explosives detection system. The explosives detection system, which has been tested in the laboratory, is a pulsed fast neutron spectrometer that measures the attenuation of neutrons for a particular suitcase loading. The attenuation curves along with the measured total cross sections are used to determine the amount of hydrogen, carbon, nitrogen and oxygen in volume increments through the suitcase. This information is used to determine a probability of detection (of explosives) versus a probability of false alarm curve. The same information is used in a neural network program to determine its effectiveness in predicting the presence of explosives. The neural network was trained from computer generated data and tested on extrapolated laboratory data.
This paper is intended to describe the approach to data processing used on a prototype portable explosive detector/analyzer based on gas chromatography. The emphasis is on achieving more reliable analysis without intervention from an expert user. Neural networks have been used to overcome some of the problems encountered using conventional data processing algorithms. The aim has been to produce a system which embodies expert-like knowledge so that sophisticated judgments may be made when the system is operated by staff with minimal training.
A fast neutron attenuation system that uses a white neutron source and pulsed fast neutron spectroscopy has been used along with neutron tomography to obtain the hydrogen, carbon, nitrogen, and oxygen (H, C, N and O) concentrations at specified cuts through sealed containers. A 3D plot of the H, C, N and O number density can be obtained throughout the volume by combining a number of cuts. Three independent ratios C/O, N/O and H/C are obtained from the four number densities and used in a neural network program to predict the presence of explosives and drugs in sealed containers.
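The mapping from reconstructed number densities to the three ratio features used as network inputs is simple, and a sketch is given below. The voxel values, and especially the stand-in logistic classifier with arbitrary weights, are illustrative assumptions rather than the trained network described in the paper.

```python
# Sketch: form the three elemental ratios (C/O, N/O, H/C) from reconstructed
# H, C, N, O number densities and feed them to a stand-in classifier.
import numpy as np

def ratio_features(h, c, n, o):
    return np.array([c / o, n / o, h / c])

# Hypothetical voxel measurements (number densities in arbitrary units).
voxel = dict(h=2.1, c=1.5, n=1.2, o=1.8)
x = ratio_features(**voxel)

# Stand-in "trained" classifier: logistic regression with arbitrary weights.
w, b = np.array([0.8, 1.4, -0.5]), -1.0
p_explosive = 1.0 / (1.0 + np.exp(-(w @ x + b)))
print(f"features={x}, P(explosive)={p_explosive:.2f}")
```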
In this paper the recurrent back-propagation and Newton algorithms for an important class of recurrent networks and their convergence properties are discussed. To ensure proper convergence behavior, recurrent connections must be suitably constrained during the learning process. Simulation results demonstrate that the algorithms with the suggested constraint have superior performance.
This paper presents a new model for focusing attention in hierarchically structured neural networks. Emphasis is placed on determining the location of the focus of attention. The main idea is that attention is closely coupled with predictions about the environment. Whenever there is a mismatch between prediction and reality, a shift of attention is performed. This mismatch can also be used to change (learn) the prediction and processing mechanism, so that the prediction will be better next time. In this sense attention and learning are closely coupled. We present a first application of this mechanism to the classification of satellite image (Landsat TM) data. The use of the attentional mechanism can reduce the processing time by 50% while maintaining the classification accuracy.
Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.
The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when 'optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
The back-propagation network (BPN) has a minimum of one hidden layer of processing elements between the input and output layers. The addition of a hidden layer, or layers, along with the generalized delta rule, is responsible for the BPN exceeding the linear restrictiveness of the earlier Perceptron. Although the major significance of the hidden layer has been well established, there is no general agreement on a method for determining the number of hidden elements to use for a given data set. One avenue toward choosing the optimal number of elements in the hidden layer is to better understand how the number of hidden elements contributes to decision accuracy. In the present research, a single-hidden-layer BPN was trained using Anderson's classic IRIS data set and tested with a 10-fold validation method across separate studies. While holding all BPN parameters constant, 19 separate tests were conducted, beginning with two hidden elements and increasing to 20 hidden elements.
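The kind of sweep described, varying hidden-layer size from 2 to 20 units with 10-fold cross-validation on the Iris data, can be reproduced in spirit with standard tools. The sketch below uses scikit-learn's multilayer perceptron; the solver, iteration budget, and library choice are assumptions, since the original study used a plain back-propagation network with its own fixed parameters.

```python
# Sketch: 10-fold cross-validated accuracy on Fisher's Iris data as the
# number of hidden units is varied from 2 to 20.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
for n_hidden in range(2, 21):
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                        random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{n_hidden:2d} hidden units: mean accuracy {scores.mean():.3f}")
```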
Improving evaluation, especially in small-sample (small-n) applications, may be highly dependent on incorporating expanding knowledge about methodological pitfalls to avoid. The intent of the current paper is to provide an informational guide to key evaluation issues with small n. Although the present paper is focused on supervised learning classification paradigms typified by the back-propagation network, the principles hold true to varying degrees for other artificial neural networks.
Effective non-intrusive substance detection techniques require two basic qualities: the ability to detect and recognize as many signatures of the relevant chemical elements as possible, and the ability to do so on as small a volume of the interrogated object as possible. Some nuclear-based inspection techniques possess these unique qualities. These abilities open new possibilities in the rapid and automated inspection of small and large objects, from airline passenger luggage to shipping containers, for explosives, illicit drugs, other hazardous materials (including nuclear materials), and most dutiable items. Various imaging techniques and their signatures are reviewed. Techniques using emission tomography of induced gamma rays, such as thermal neutron analysis and fast neutron analysis, as well as those using the time-of-flight technique to provide direct substance images, such as pulsed fast neutron analysis, are described. Some images employing these techniques are shown and briefly discussed.
We have developed and demonstrated a microwave technique for detecting high explosives, illegal drugs, and other chemical contraband in checked airline baggage. Our technique isolates suspicious materials using microwave tomography and identifies chemical contraband using microwave spectroscopy. Measurements in the frequency range 2 - 18 GHz indicate that microwave energy will penetrate nonmetallic suitcases and that contraband materials feature distinct dielectric spectra at these wavelengths. We have also formed microwave images of a soft-sided suitcase and its contents. After manually segmenting the microwave imagery, we successfully identified chemical simulants for both high explosives and illegal drugs.
We developed a signal processing algorithm to analyze the signals obtained by an OMA system for laser-produced plasmas. This signal processing program is applied to the multi-component analysis of trace elements in particulate materials (e.g., soils and industrial wastes) and is designed to overcome signal fluctuations due to instability of the plasma characteristics and due to some of the matrix effects. The program involves a constrained normalization algorithm, an automatic peak assignment, a functional fit of all peaks of interest and their surroundings, and a principal components regression calibration model. These algorithms, together with experimental optimizations, are shown to solve most of the problems present in laser plasma analysis of particulate material and to produce detection limits in the ppm range.
The Mobile Spectroscopic Analysis System is a suite of three compact analytical instruments currently under development. It is designed to provide rapid, full-spectrum analysis of samples in the field with little or no sample preparation. The instruments will be linked to a common sample-handling system and controlled by a single workstation, which will combine information from all three instruments to improve the detection performance of the system and resolve ambiguities. The instruments include: (1) a laser time-of-flight atomic mass spectrometer (elemental and isotopic analysis), (2) a laser time-of-flight molecular mass spectrometer (molecular analysis), and (3) a laser-heated cavity atomic emission spectrometer (elemental analysis). The instruments provide overlapping detection capability for many substances of interest. It is expected that the use of data from multiple instruments will improve the reliability of the instruments and reduce the false-positive rate significantly.
Methods for recognizing partially occluded objects are increasingly needed. They can be used in airport security applications such as baggage inspection. An algorithm for the airport security problem should be fast and exact; that is, it should reach the global optimum as quickly as possible. This is why we investigate the Annealed Hopfield Network (AHN). Although the AHN is slower than the Hybrid Hopfield Network (HHN), it provides nearly global solutions without initial restrictions and produces fewer false matches than the HHN. We conclude that the AHN robustly identifies occluded target objects with a large tolerance in their features.
This paper presents a new semi-coherent quadratic eigenimage based technique for detecting stationary targets in SAR data. The new detector is different from previous work because it models the SAR signal as a multi-pixel, multi-band complex random signal with unknown spatial position and orientation. The new proposed detector handles the unknown orientation by modeling each target with a set of angle subclasses. The proposed detector has reduced complexity by using reduced rank techniques.
This paper presents the results of the intermediate stage of a Ph.D. program exploring an alternative vision system based on an image sensor which would ideally match the characteristics of visual data by the use of spatially adaptive homogeneous low-level processing techniques. The work is based on the empirical analysis of simulations of simple analogue resistive networks driven by a linear array of photoreceptors. The feasibility of a homogeneous spatially adaptive space-modulated filter is explored and the results of investigations into a dynamic closed-loop system are presented.
Pure nuclear quadrupole resonance (NQR) of 14N nuclei is quite promising as a method for detecting explosives such as RDX and contraband narcotics such as cocaine and heroin in quantities of interest. Pure NQR is conducted without an external applied magnetic field, so potential concerns about damage to magnetically encoded data or exposure of personnel to large magnetic fields are not relevant. Because NQR frequencies of different compounds are quite distinct, we do not encounter false alarms from the NQR signals of other benign materials. We have constructed a proof-of-concept NQR explosives detector which interrogates a volume of 300 liters (10 ft3). With minimal modification to the existing explosives detector, we can detect operationally relevant quantities of (free base) cocaine within the 300-liter inspection volume in 6 seconds. We are presently extending this approach to the detection of heroin base and also examining 14N and 35,37Cl pure NQR for detection of the hydrochloride forms of both materials. An adaptation of this NQR approach may be suitable for scanning personnel for externally carried contraband and explosives. We first outline the basics of the NQR approach, highlighting strengths and weaknesses, and then present representative results for RDX and cocaine detection. We also present a partial compendium of relevant NQR parameters measured for some materials of interest.
Multidimensional Processing for Wide-area Surveillance
One-dimensional (1D) and 2D sensor signals returned from the mass spectrometers, x-ray spectral analyzers, CT scanners, or vapor detectors present a major challenge for the detection and identification of illegal substances. Prompt and accurate identification and classification of detected signatures demands extremely high computation power and requires sophisticated signal processing. The paper presents the development of a real-time multispectral analysis system that performs high-speed neural operations and sensor fusion for feature extraction and trace identification. The system utilizes the technology of large-scale holographic optical neural networks being developed and demonstrated by Physical Optics Corporation (POC). This technology is based on fully parallel optical processing of spectral information to produce parallel spectral pattern recognition. In addition, POC's processing algorithm has demonstrated the ability to extract spectral information from extremely noisy backgrounds. This translates into very high instrument sensitivity.
In order to investigate the influence of modern technology on the world climate it is important to have automatic detection methods for man-induced parameters. In this case the influence of jet contrails on the greenhouse effect shall be investigated by means of images from polar orbiting satellites. Current methods of line recognition and amplification cannot distinguish between contrails and rather sharp edges of natural cirrus or noise. They still rely on human control. Through the combination of different methods from cloud physics, image comparison, pattern recognition, and artificial intelligence we try to overcome this handicap. Here we will present the basic methods applied to each image frame, and list preliminary results derived this way.
The processing of large quantities of synthetic aperture radar data in real time is a complex problem. Even the image formation process taxes today's most advanced computers. The use of complex algorithms with multiple channels adds another dimension to the computational problem. Advanced Research Projects Agency (ARPA) is currently planning on using the Paragon parallel processor for this task. The Paragon is small enough to allow its use in a sensor aircraft. Candidate algorithms will be implemented on the Paragon for evaluation for real time processing. In this paper ARPA technology developments for detecting targets hidden in foliage are reviewed and examples of signal processing techniques on field collected data are presented.
Multi-spectral IR, coupled with advanced image processing procedures, offers the possibility of wide area surveillance and the detection of low concentrations of chemical vapors. A field experiment was performed using the ARPA Multi-Spectral IR Camera sensor mounted on an aircraft. The sensor aircraft flew over a controlled diethyl ether release in a tropical rain forest acquiring image data from both 10000 ft and 22000 ft altitude. The data was processed using multi-spectral algorithms and the vapor was detected over both an open area and the rain forest canopy. This detection was made possible by the removal of most of the background scene by multi-spectral processing.
Surface reconstruction is a technique that is used for the interpolation of object information between contours. The majority of work done in the area of surface reconstruction has dealt primarily with medical image contours. Surface reconstruction has also been used to reduce the memory requirements in automobile and ship designs. However, this technology could be used for other types of applications. For instance, it could be used in airport security. An x-ray machine could be used to sample a suitcase along a particular axis and rotation. 2D objects inside the x-ray images could be extracted and a 3D object reconstructed from the extracted objects. This type of application requires a fast solution that takes shape information into account. In addition, the application must not require human interaction and must produce recognizable objects. This paper presents a surface reconstruction method that meets the above requirements for single parallel contours extracted from x-ray luggage images.
In this contribution we investigate the performance of steerable functions for characterizing keypoints. Steerable functions were introduced recently by Perona as an efficient method to calculate the response of a filter in a continuum of orientations, scales, and other parameters. For the analysis of points with events at multiple orientations, functions with a high orientational resolution are needed. We discuss criteria for judging the quality of a function intended for orientation analysis. To handle line as well as edge junctions, we use a complex function with the real and imaginary parts approximately in quadrature. An associated one-sided function makes it possible to distinguish between terminating and nonterminating edges and lines. To analyze thick lines and blurred edges the function is also steered in scale.
It is often desirable to draw a detailed and realistic representation of surface data on a computer graphics display. One such representation is a 3D shaded surface. Conventional techniques for rendering shaded surfaces are slow, however, and require substantial computational power. Furthermore, many techniques suffer from aliasing effects, which appear as jagged lines and edges. This paper describes an algorithm for the fast rendering of shaded surfaces without aliasing effects. It is much faster than conventional ray tracing and polygon-based rendering techniques and is suitable for interactive use. On an IBM RISC System/6000™ workstation it renders a 1000 × 1000 surface in about 7 seconds.
The use of modern, low-dose X-ray imaging systems for threat and contraband detection dates back more than twenty years. These X-ray systems generally fall into two classes: those that use a fan beam of X-rays and a linear array of detectors to create an image via a line-by-line sampling technique; and those that use a mechanically or electronically generated sweeping pencil beam (a "flying spot") of X-rays intercepted by a single, elongated detector to create an image as a time vs. position raster scan. Both techniques have found application in systems operating at a beam energy of 160 keV or less, designed for the inspection of small parcels or baggage, hand-carried luggage, palletized cargo, and the like. Other systems use linear accelerators to produce more penetrating beams in the 2-MeV range, for the inspection of large cargo containers and trucks; these sources are suitable only for the fan beam/detector array method. The flying-spot technique makes possible the generation of images from both transmitted and scattered X-rays. For the latter, broad-area detectors are positioned to intercept radiation that has scattered from the inspected object, and an image is generated from this detected signal on the basis of the same time vs. position raster scan that is used for the transmission image. This ability to generate images from scattered radiation has several unique advantages: (1) it is "one-sided" - i.e., an image can be obtained even if the object is not accessible from its far side or is too thick to penetrate; (2) because the scatter signal falls off quite rapidly with increasing depth into the object, backscatter images effectively represent a "slice" of the object characteristic of its side nearest the source; this image may be useful even when a transmission image representing the same scanned area is hopelessly confused by image clutter; (3) the underlying physical phenomenon that leads to scattered radiation is the Compton effect, which is a function not only of the density of the scattering material, but also of its atomic number; the scatter image can therefore yield information about the atomic number of the object. The current work describes the development and early results from a program that extends the flying-spot technique to over-the-road vehicles and large cargo containers, and which includes algorithms designed to facilitate the automatic detection of anomalies in vehicles or containers for which previously obtained baseline data are already on file. The system employs a pair of 450-kilovolt X-ray sources and their corresponding detectors to provide two transmission images and two scatter images of the inspected vehicle during a single scan. The results of laboratory testing will be presented, including the ability to penetrate a minimum of 4 inches of steel; field test results will be presented as available. A possible future upgrade to include a forward-scatter imaging modality will be discussed.
A program of work has been carried out over a number of years to develop X-ray equipment having a full three-dimensional (i.e., binocular stereoscopic) capability. Early equipment produced for H.M. Customs and Excise was based on basic line-scan technology. More recently a system has been developed in collaboration with the PSDB, Home Office, which uses folded array line-scan sensors and has a materials identification capability. Since the information contained in these images can be assumed to exist on a number of identifiable depth planes it can be manipulated in the same way as the slice data available from computed tomography (CT) type equipment. A considerable amount of software is available for use with such CT data which enables image models to be built. The current program of work involves interfacing the new 3-D X-ray technology to the existing software routines in an attempt to automatically produce 2 1/2-D image models from the full stereoscopic (3-D) information.
The optimal (feedback-free) fusion rule for binary hypothesis testing consists of (binary) Likelihood Ratio Quantizers (LRQs) at the peripheral sensors and an LRQ at the fusion center, when the observations are statistically independent from sensor to sensor. Feedback introduces correlation between local and global decisions that complicates the optimal fusion design. In this paper we consider the optimal fusion design in the presence of non-hierarchical feedback. Optimal fusion design requires explicit knowledge of the underlying statistics. The design of feedback-free fusion is considered under reduced statistical knowledge using projection. The constrained optimal linear centralized and distributed fusion rules are derived. Fusion design with projection requires knowledge of only the first two moments of the data and is applicable in cases of limited statistical information about the operational statistics.
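The feedback-free baseline mentioned above can be sketched quite compactly: each sensor quantizes its likelihood ratio to a bit, and the fusion center applies a likelihood-ratio test to the received decision vector under independence. The per-sensor detection and false-alarm probabilities below are arbitrary, and the paper's feedback and projection-based designs are not reproduced here.

```python
# Minimal sketch of feedback-free fusion: binary likelihood-ratio quantizers
# at the sensors, likelihood-ratio test on the decision vector at the center.
import numpy as np

pd = np.array([0.9, 0.8, 0.7])    # per-sensor detection probabilities (assumed)
pf = np.array([0.1, 0.05, 0.2])   # per-sensor false-alarm probabilities (assumed)

def fuse(u, prior_odds=1.0):
    """u: vector of local binary decisions (1 = target declared).
    prior_odds: assumed prior ratio P(target)/P(no target)."""
    # Log-likelihood ratio of the decision vector, assuming independence.
    llr = np.sum(np.where(u == 1,
                          np.log(pd / pf),
                          np.log((1 - pd) / (1 - pf))))
    return int(llr > -np.log(prior_odds))

print(fuse(np.array([1, 0, 1])))
```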
The objective of this paper is to discuss the issues that are involved in the design of a multisensor data fusion system for surveillance. The system in mind consists primarily of three multifrequency radar sensors. However, the fusion design must be flexible to accommodate additional dissimilar sensors such as IR, EO, ESM, and Ladar. The motivation for the system design is the proof of the fusion concept for enhancing the detectability of small targets in clutter. In the context of down-selecting the proper configuration for multisensor data fusion, the issues of data modeling, fusion approaches, and fusion architectures need to be addressed for the particular application being considered.
The effect of intersensor correlation (ISC) on data fusion is discussed. ISC may have a detrimental effect on the performance of data fusion, in particular if the sign of the correlation coefficient is unknown and unaccounted for. Numerical studies in the literature verify the ISC effects. Using the Gaussian distribution, an analytical framework is developed to explain how ISC affects the fusion performance. Means for reducing the ISC in the fusion are also discussed.
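The qualitative effect is easy to reproduce with a Gaussian calculation: if N sensors share positively correlated unit-variance noise and the fusion center averages them, the averaged noise variance is (1 + (N-1)ρ)/N, so detection probability at a fixed false-alarm rate drops as ρ grows. The sketch below uses arbitrary parameter values and mirrors only this textbook effect, not the paper's specific analysis.

```python
# Gaussian illustration of intersensor correlation degrading fused detection.
import numpy as np
from scipy.stats import norm

N, mu, pfa = 4, 1.0, 1e-2          # sensors, signal mean, target false-alarm rate
for rho in [0.0, 0.3, 0.6, 0.9]:
    var_avg = (1 + (N - 1) * rho) / N      # variance of the averaged noise
    sigma = np.sqrt(var_avg)
    thresh = sigma * norm.isf(pfa)         # threshold meeting the target PFA
    pdet = norm.sf((thresh - mu) / sigma)  # detection probability under H1
    print(f"rho={rho:.1f}: Pd={pdet:.3f}")
```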
A robust Constant False Alarm Rate (CFAR) distributed detection system that operates in heavy clutter with unknown distribution is presented. The system is designed to provide CFARness under clutter power fluctuations and robustness under unknown clutter and noise distributions. The system is also designed to operate successfully under different-power sensors and exhibit fault-tolerance in the presence of sensor power fluctuations. The test statistic at each sensor is a robust CFAR t-statistic. In addition to the primary binary decisions, confidence levels are generated with each decision and used in the fusion logic to robustify the fusion performance and eliminate weaknesses of the Boolean fusion logic. The test statistic and the fusion logic are analyzed theoretically for Weibull and lognormal clutter. The theoretical performance is compared against Monte-Carlo simulations that verify that the system exhibits the desired characteristics of CFARness, robustness, insensitivity to power fluctuations and fault-tolerance. The system is tested with experimental target-in-clear and target-in-clutter data and its experimental performance agrees with the theoretically predicted behavior.
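A generic example of a t-type CFAR test at a single sensor is sketched below: the test cell is compared against reference (clutter-only) cells with a statistic whose null distribution does not depend on the unknown clutter power. This is a textbook prediction-interval construction offered only to illustrate the CFAR idea; it is not necessarily the paper's robust t-statistic, and the data are synthetic.

```python
# Generic t-type CFAR test: the threshold depends only on the number of
# reference cells and the desired false-alarm probability, not on the
# (unknown) clutter power.
import numpy as np
from scipy import stats

def t_cfar(test_cell, reference_cells, pfa=1e-3):
    n = len(reference_cells)
    m, s = reference_cells.mean(), reference_cells.std(ddof=1)
    t = (test_cell - m) / (s * np.sqrt(1 + 1.0 / n))
    threshold = stats.t.isf(pfa, df=n - 1)
    return t > threshold

rng = np.random.default_rng(5)
clutter = rng.normal(0, 3.0, size=32)      # reference cells, unknown power
print(t_cfar(test_cell=15.0, reference_cells=clutter))
```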
Data Fusion for Substance Identification and Security
In the post-Cold War era, Naval surface ship operations will be largely conducted in littoral waters to support regional military missions of all types, including humanitarian and evacuation activities, and amphibious mission execution. Under these conditions, surface ships will be much more isolated and vulnerable to a variety of threats, including maneuvering antiship missiles. To deal with these threats, the optimal employment of multiple shipborne sensors for maximum vigilance is paramount. This paper characterizes the sensor management problem as one of intelligent control, identifies some of the key issues in controller design, and presents one approach to controller design which is soon to be implemented and evaluated. It is argued that the complexity and hierarchical nature of problem formulation demands a hybrid combination of knowledge-based methods and scheduling techniques from 'hard' real-time systems theory for its solution.
Gregory A. Clark, Sailes K. Sengupta, Paul C. Schaich, Robert J. Sherwood, Michael R. Buhl, Jose D. Hernandez, Ronald J. Kane, Marvin J. Barth, David J. Fields, et al.
We have conducted experiments to demonstrate the enhanced detectability of buried land mines using sensor fusion techniques. Multiple sensors, including visible imagery, IR imagery, and ground penetrating radar, have been used to acquire data on a number of buried mines and mine surrogates. We present these data along with a discussion of our application of sensor fusion techniques to this particular detection problem. We describe our data fusion architecture and discuss some relevant results of these classification methods.
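A minimal sketch of feature-level fusion for this kind of detection problem, assuming co-registered per-region features from each sensor are concatenated and passed to a nearest-class-mean classifier. It is a generic illustration, not the architecture used in the paper; the feature values and class means are invented.

```python
import numpy as np

def fuse_features(visible, ir, gpr):
    """Feature-level fusion: concatenate co-registered features from the
    visible, IR, and ground-penetrating-radar channels (assumed layout)."""
    return np.concatenate([visible, ir, gpr])

def nearest_mean_classify(x, class_means):
    """Assign the fused feature vector to the closest class mean."""
    return min(class_means, key=lambda c: np.linalg.norm(x - class_means[c]))

# Hypothetical class means learned from labeled patches ("mine" vs "clutter").
class_means = {
    "mine":    fuse_features(np.array([0.8, 0.2]), np.array([0.9]), np.array([0.7, 0.6])),
    "clutter": fuse_features(np.array([0.3, 0.4]), np.array([0.2]), np.array([0.1, 0.2])),
}
sample = fuse_features(np.array([0.7, 0.3]), np.array([0.8]), np.array([0.6, 0.5]))
print(nearest_mean_classify(sample, class_means))
```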
When human lives are involved, it is vital that the man-machine interface be as informative and accurate as possible. Multiple but non-integrated sensors, such as two-dimensional displays that attempt to depict the contents of packages, simply do not transfer sufficient data to security personnel. In questionable circumstances officials are forced to rely on their sixth sense to determine whether a particular package or individual should be detained for further investigation. This intuitive process is fortunately adequate in most situations, but it depends on a particular individual's level of training, experience, and emotional and physical state. The key to airport security success is to understand how an experienced official charged with safeguarding a particular area fuses the data presented to him or her. Once this is done, the cognitive process could be significantly automated and the shortcomings associated with the human element could be substantially eliminated. Simply fusing the output of multiple sensors into a central system and then applying an algorithm does not solve the problem. The speed and accuracy of current sensor fusion and AI techniques lag significantly behind what is available in the human mind. That is not to say the technology is not available; it is the appropriate application of it that has not yet been determined. An approach that would first identify which cues (visual and audible) are most important and useful to security personnel is essential. One answer incorporates a head-mounted display (HMD), preferably capable of displaying three-dimensional graphics, worn by personnel charged with protecting a particular port of entry. Depicted in the wide-angle HMD would be all available information from standard as well as test sensors. Data could be displayed in multiple formats, using a wide array of presentations, to provide the maximum amount of information for both the officer and the researcher. To limit clutter, the unit would incorporate two features. The first is scaling, so that the data could be dynamically modified by a particular user: television cameras that pan an area may be considered less important than the X-ray images and so could be made smaller, while a user who wants to zoom in on a camera image for a closer look at a suspect could enlarge that window. Secondly, a head tracker could be incorporated so that the display appears continuous; as users turn their heads, the image continues. This feature would be useful where supporting sensor information is desired because of a cue shown in a previous sensor window. Configuration tests must first be conducted using this system; preliminary studies would show what information to include and what else would be desirable. Central to this concept is the notion that the end user, the security official, is intimately involved in the development loop. Once it is determined what information is used, as well as how it is used, the automation of the cognitive process could commence, yielding an efficient, automated, and fully integrated sensor system.
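The window-scaling behavior described above can be sketched as a tiny display model in which each sensor feed has a user-adjustable scale and the scales are renormalized so the layout still fills the available display area. The feed names and the normalization rule are assumptions for illustration only.

```python
def rescale_windows(windows, focus, zoom=2.0):
    """Grow the focused sensor window by `zoom`, then renormalize all window
    scales so they again sum to 1.0 of the display area (assumed rule)."""
    scales = dict(windows)
    scales[focus] *= zoom
    total = sum(scales.values())
    return {name: s / total for name, s in scales.items()}

# Hypothetical HMD layout: X-ray imagery kept large, pan cameras small.
layout = {"xray": 0.5, "camera_1": 0.25, "camera_2": 0.25}
print(rescale_windows(layout, focus="camera_1"))
```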
Automated Fingerprint Identification Systems (AFIS) provide a means for non-manual fingerprint database searches. Future AFIS applications demand greater fingerprint match-request throughput and larger fingerprint databases. Barriers to the implementation of a high-volume AFIS are analyzed. Data-fusion methods are proposed as a way to maximize the integration of fingerprint feature information with limited computational resources. Preliminary results from a prototype AFIS are presented.
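A hedged sketch of score-level fusion for fingerprint matching: outputs of several feature matchers are combined with fixed weights, and candidates whose cheapest score is very low are pruned before the remaining matchers are consulted, as one way to respect limited computational resources. The matcher names, weights, and pruning threshold are illustrative assumptions, not the prototype's actual design.

```python
def fuse_match_scores(scores, weights, prune_below=0.2):
    """Weighted score-level fusion of per-feature matcher outputs in [0, 1].
    Candidates whose first (cheapest) matcher score is very low are pruned
    before the remaining matchers contribute (assumed resource-saving rule)."""
    first_matcher = next(iter(weights))
    if scores[first_matcher] < prune_below:
        return 0.0
    total_w = sum(weights.values())
    return sum(weights[m] * scores[m] for m in weights) / total_w

weights = {"minutiae": 0.6, "ridge_flow": 0.3, "core_delta": 0.1}   # assumed
candidate = {"minutiae": 0.82, "ridge_flow": 0.65, "core_delta": 0.40}
print(f"fused score: {fuse_match_scores(candidate, weights):.2f}")
```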
In a multicontext scene, where several objects may be occluded or scenes may change rapidly, a single computer vision paradigm may not be sufficient. The need to adapt to and learn a new environment is therefore a challenging modeling problem in computer vision research. In response to this challenge we have developed a hybrid architecture that combines classical pattern recognition algorithms with a fuzzy knowledge base and a Hopfield neural network. We also present preliminary results obtained from this effort.
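To illustrate just the Hopfield component of such a hybrid, the sketch below stores binary patterns with a Hebbian outer-product rule and recalls a corrupted pattern by repeated sign updates. It is a textbook Hopfield network, not the paper's full architecture; the stored patterns are arbitrary examples.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: sum of outer products with zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=10):
    """Synchronous sign updates until the state settles (or steps run out)."""
    s = state.copy()
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with the last bit flipped
print(recall(W, noisy))                  # should settle back to the first pattern
```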
The general features of the ladar and FLIR data measured by the ground-based Tri-Service Laser Radar (ladar) are introduced. Methods documented in the literature for fusing ladar and FLIR data are reviewed. The automatic target recognition (ATR) system developed in our laboratory, which uses fusion of ladar and FLIR data, is presented, and its parallel implementation is discussed. The results of target segmentation obtained by this data fusion are compared with those obtained by other methods.
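A minimal pixel-level illustration of fusing co-registered range (ladar) and thermal (FLIR) imagery for target segmentation: a pixel is kept only if it falls inside a range gate and is warmer than the scene average. The thresholds and the AND rule are assumptions for illustration, not the laboratory ATR system itself.

```python
import numpy as np

def segment_target(range_img, ir_img, range_gate, ir_margin=1.0):
    """Joint segmentation mask from registered ladar range and FLIR images.
    A pixel is 'target' if it is nearer than `range_gate` AND its IR value
    exceeds the scene mean by `ir_margin` standard deviations (assumed)."""
    hot = ir_img > ir_img.mean() + ir_margin * ir_img.std()
    near = range_img < range_gate
    return hot & near

rng = np.random.default_rng(1)
range_img = rng.uniform(80.0, 120.0, size=(8, 8))
ir_img = rng.normal(0.0, 1.0, size=(8, 8))
range_img[3:5, 3:5] = 60.0   # synthetic target: near...
ir_img[3:5, 3:5] = 4.0       # ...and hot
print(segment_target(range_img, ir_img, range_gate=70.0).astype(int))
```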
Despite human familiarity with the natural environment, the interplay of vision, cognition, and perception of objects is still a dominant variable in human performance assessment, especially in real-time, safety-oriented domains such as airport security. In this paper, we present experimental results that seek to uncover how people interpret computer-generated image data at different threshold modes. The results obtained are useful for training airport inspectors or pilots to recognize objects in different environments.
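For concreteness, the kind of "threshold modes" referred to above can be reproduced by binarizing the same image at several levels and comparing how much structure survives at each level; the specific fractions below are illustrative.

```python
import numpy as np

def threshold_modes(image, levels=(0.25, 0.5, 0.75)):
    """Return one binary rendering of the image per threshold level, with
    thresholds expressed as fractions of the image's dynamic range."""
    lo, hi = image.min(), image.max()
    return {t: (image >= lo + t * (hi - lo)) for t in levels}

image = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # simple gradient test image
for t, mask in threshold_modes(image).items():
    print(f"threshold {t:.2f}: {int(mask.sum())} of {mask.size} pixels on")
```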
In this paper, an unsupervised-learning artificial neural network, Dignet, is used to design a data fusion system for moving target indication (MTI). Dignet is a self-organizing neural network model with a simple architecture. Its system parameters are determined analytically from the self-organization that occurs during learning, and it exhibits fast and stable learning performance. Based on its excellent clustering performance in statistical pattern recognition, Dignet is used in the design of a multi-sensor data fusion system. The data fusion is designed to supplement the decision making of an MTI radar system for multi-target detection. The radar system consists of three different sensors, which receive signals with carrier frequencies located in different bands. The received signals are processed using digital signal processing techniques, the fast Fourier transform, and pulse compression. Features of the received data are extracted at the signal processing stage and then presented to Dignet for data clustering. The well centers and well depths generated by Dignet are propagated to the fusion center, where another Dignet performs second-stage clustering. The clusters of patterns created in the second-stage clustering are fused by decision-making algorithms to make an integrated decision. It is shown that the data fusion for MTI successfully detects and keeps track of multiple moving targets embedded in clutter or noisy environments.
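The clustering stage can be approximated by a simple online "leader" style clusterer in which each cluster is an attraction well with a center and a depth (hit count): a new feature vector either deepens the nearest well or spawns a new one. This is a hedged stand-in for the flavor of Dignet's behavior, not its actual update equations; the radius and sample data are invented.

```python
import numpy as np

def online_cluster(features, radius):
    """Leader-style online clustering: wells are (center, depth) pairs.
    A sample within `radius` of the nearest well moves that center toward
    it (running mean) and increments its depth; otherwise a new well forms."""
    wells = []   # list of [center (np.array), depth (int)]
    for x in features:
        x = np.asarray(x, dtype=float)
        if wells:
            dists = [np.linalg.norm(x - c) for c, _ in wells]
            i = int(np.argmin(dists))
            if dists[i] <= radius:
                c, d = wells[i]
                wells[i] = [(c * d + x) / (d + 1), d + 1]
                continue
        wells.append([x, 1])
    return wells

# Toy feature vectors from two targets plus one clutter outlier.
samples = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9], [9.0, 0.0]]
for center, depth in online_cluster(samples, radius=1.0):
    print(f"well center {np.round(center, 2)}  depth {depth}")
```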
There are several promising methods and technologies for substance detection. The oldest of these methods is the trained detector or `sniffer' dog. We summarize what is known about the capabilities of dogs in substance detection and recommend comparative testing of the canine-human team against current technology to identify the optimum combination of methods to maximize the detection of explosives and contraband.
This paper gives an overview of Bayesian network technology and a description of its potential application to data fusion. Bayesian network technology comprises representation techniques for encoding uncertain beliefs using probability theory and reasoning techniques for drawing inferences from such representations. The technology has been successfully applied both to tasks of assessment under uncertainty and to tasks of decision-making under uncertainty.
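A minimal two-sensor example of the kind of inference such a network supports: two conditionally independent detector readings update the belief that a substance is present via Bayes' rule. The sensor names, prior, and likelihoods are invented for illustration.

```python
def posterior_present(prior, likelihoods, readings):
    """P(present | readings) for conditionally independent sensors.
    `likelihoods[s]` is (P(alarm | present), P(alarm | absent)) for sensor s."""
    p_present, p_absent = prior, 1.0 - prior
    for sensor, alarmed in readings.items():
        p_hit, p_false = likelihoods[sensor]
        p_present *= p_hit if alarmed else (1.0 - p_hit)
        p_absent *= p_false if alarmed else (1.0 - p_false)
    return p_present / (p_present + p_absent)

likelihoods = {"xray": (0.90, 0.10), "vapor_trace": (0.70, 0.05)}   # assumed values
print(posterior_present(prior=0.01,
                        likelihoods=likelihoods,
                        readings={"xray": True, "vapor_trace": True}))
```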
Described is a family of high-performance optical library units using 5.25-inch optical media and drives. There are three members in the family, covering the range from 75 gigabytes to 1.3 terabytes. The largest unit, which stores over 1,000 cartridges in a footprint of 0.91 x 1.4 meters, can carry out 800 exchanges per hour. Applications are inexpensive, near-line, high-reliability storage of computer information.
We produced a product design concept for an economical, automatic, shipping container inspection system to be used for detection of contraband, including illicit drugs, and for trade enforcement via shipping manifest confirmation. Using nondestructive, 3D-imaging nuclear techniques, the system can see deeply into the cargo by generating a spatial image of an entire container's contents automatically and in real time. Its cost is lower than that of present inspection methods. The approach divides a container into numerous small volume elements that are individually interrogated using pulsed fast neutron analysis. The acquired information is analyzed against a neutron-gamma physics database and is enhanced using imaging and discrimination techniques. We have designed, built, and operated a laboratory apparatus that has demonstrated the attractiveness of this approach, and the experimental data were found to agree with design expectations derived from computer modeling. By combining selected element signatures and phenomenological measures with discrimination algorithms, we demonstrated that a full-scale inspection system needs only 10 - 15 minutes to process an 8 foot x 8 foot x 40 foot container in order to detect hidden contraband.
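As a hedged sketch of the voxel-wise discrimination idea (not the system's actual signature library or thresholds), each volume element carries estimated elemental densities derived from the neutron-gamma response, and a simple rule flags voxels whose nitrogen and oxygen content is high relative to carbon, a pattern broadly associated with explosive compounds. The ratios and threshold values below are illustrative assumptions.

```python
import numpy as np

def flag_suspect_voxels(nitrogen, oxygen, carbon, n_to_c=1.0, o_to_c=1.5):
    """Flag voxels whose N/C and O/C density ratios both exceed
    illustrative thresholds (assumed values, not the system's)."""
    eps = 1e-6   # avoid division by zero in empty voxels
    return (nitrogen / (carbon + eps) > n_to_c) & (oxygen / (carbon + eps) > o_to_c)

# Hypothetical 2 x 2 x 2 voxel grid of relative elemental densities.
shape = (2, 2, 2)
nitrogen, oxygen, carbon = np.full(shape, 0.1), np.full(shape, 0.2), np.full(shape, 1.0)
nitrogen[0, 0, 0], oxygen[0, 0, 0], carbon[0, 0, 0] = 1.5, 2.0, 0.8   # suspect voxel
print(flag_suspect_voxels(nitrogen, oxygen, carbon).astype(int))
```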
In this paper, a simple angle-matched image formation algorithm is implemented in which an amplitude weighting corresponding roughly to the dominant feature of a man-made target is used to match its angular response. As such, it represents the first step toward a complete frequency-angle matched filter image formation method now under development. Using a common set of measured data, comparisons with practical implementations of two conventional imaging/detection techniques studied previously and a single-pixel detection baseline indicate that only the current approach offers significant performance improvement over the baseline.
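A very simplified illustration of where an angular amplitude weighting enters image formation: phase-history samples on a frequency-angle grid are multiplied by an angular window matched to an assumed dominant-scatterer response, then transformed to the image domain with a 2-D FFT. This ignores polar reformatting and is only meant to show the weighting step, not the algorithm of the paper; the data and taper are synthetic.

```python
import numpy as np

def form_image(phase_history, angle_weight):
    """Apply a per-angle amplitude weighting to the (freq x angle) phase
    history, then form a crude image with a 2-D inverse FFT."""
    weighted = phase_history * angle_weight[np.newaxis, :]
    return np.abs(np.fft.fftshift(np.fft.ifft2(weighted)))

n_freq, n_angle = 64, 64
rng = np.random.default_rng(2)
data = rng.normal(size=(n_freq, n_angle)) + 1j * rng.normal(size=(n_freq, n_angle))
# Assumed dominant-feature angular response: a Gaussian taper about broadside.
angles = np.linspace(-1.0, 1.0, n_angle)
weight = np.exp(-(angles / 0.3) ** 2)
image = form_image(data, weight)
print(image.shape, f"peak = {image.max():.2f}")
```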