Mental processes have fascinated human beings for a long time. Neural dynamics, which studies the dynamics of neurons and cell assemblies, should play a key role in these processes. Only recently has the relationship between neural dynamics and mental processes begun to be established, and this relies on the interdisciplinary collaboration of the behavioral, computational, and neurobiological sciences. This paper studies the relationship for the visual process, and for perceptual processes in general.
The problem of vision-based robot positioning and tracking is addressed. A general learning algorithm is presented for determining the mapping between robot position and object appearance. The robot is first moved through several displacements with respect to its desired position, and a large set of object images is acquired. This image set is compressed using principal component analysis to obtain a low-dimensional subspace. Variations in object images due to robot displacements are represented as a compact parametrized manifold in the subspace. While positioning or tracking, errors in end-effector coordinates are efficiently computed from a single brightness image using the parametric manifold representation. The learning component enables accurate visual control without any prior hand-eye calibration. Several experiments have been conducted to demonstrate the practical feasibility of the proposed positioning/tracking approach and its relevance to industrial applications.
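The learning step can be sketched in a few lines. The toy image set, its dimensions, and the one-dimensional displacement parameter below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def learn_subspace(images, k):
    """Compress a set of vectorized images into a k-dimensional
    eigenspace via principal component analysis (SVD)."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # rows of Vt are the principal axes

def project(image, mean, basis):
    """Map a single brightness image to its subspace coordinates."""
    return basis @ (np.asarray(image, dtype=float) - mean)

# Hypothetical training images acquired at known displacements d.
displacements = np.linspace(-1.0, 1.0, 21)
images = [np.cos(d + np.arange(16) / 4.0) for d in displacements]
mean, basis = learn_subspace(images, k=3)

# The projected training images trace a manifold parametrized by d; at
# run time a new image is projected and matched to the closest point.
manifold = np.array([project(im, mean, basis) for im in images])
query = project(np.cos(0.31 + np.arange(16) / 4.0), mean, basis)
nearest = displacements[np.argmin(np.linalg.norm(manifold - query, axis=1))]
```

A denser manifold, interpolated between the training projections, would give displacement estimates finer than the training grid.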
Variational and Markov random field (MRF) methods have been proposed for a number of tasks in image processing and early vision. Continuous (variational) formulations have the advantages of being more amenable to analysis and of more easily incorporating geometric constraints and invariants. However, discrete (MRF) formulations have computational advantages and are typically used in implementing such methods. Certain commonly used MRF models for image segmentation do not properly approximate a standard continuous formulation, in the sense that the discrete solutions may not converge to a solution of the continuous problem as the lattice spacing tends to zero. We propose several modifications of the MRF formulations for which we prove convergence in the continuum limit. Although these MRF models require complex neighborhood structures, we discuss results indicating that for MRF models with a bounded number of states the difficulties are inherent and cannot be avoided in any scheme with the desired convergence properties.
EEG signal analysis is a key to the understanding of brain activities. Traditionally, this process involves quantifying the signal in terms of frequency and amplitude, on which basis a number of waveforms have been identified. The complexity of EEG signals warrants the construction of a computer program for automatic interpretation. Symbolic knowledge is being built up for correlating the quantity of certain waveforms with brain behavior, and this knowledge can be readily programmed into a knowledge-based system (expert system) for various purposes such as cognitive research, neurological evaluation, and clinical diagnosis. The presented approach employs a knowledge-based neural network in conjunction with a recurrent neural network model as a memory device which conducts context processing. This research emphasizes the need to exploit "knowledge" and "context" in signal analysis.
Genetic algorithms (GAs) are becoming increasingly popular for signal detection, often in conjunction with neural networks. The time-intensive nature of these techniques has fostered an interest in parallel implementations. Genitor is a widely used algorithm belonging to the class of steady-state GAs which are generally believed to contain little exploitable parallelism. Parallel versions have involved fundamental changes to the algorithm by introducing islands. This paper describes how Genitor can be parallelized virtually as is, with nearly linear speedup, by rearranging the order of some of the genetic operations. An analytical method is derived which can be used for determining the amount of parallelism that can be achieved. An implementation for a shared-memory machine is described, and the resulting execution is shown to support the analysis.
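The rearranged parallel version is not reproduced here, but the sequential steady-state step that Genitor is built around can be sketched on a toy one-max problem. The rank weights, operators, and replace-only-if-better policy below are simplifying assumptions.

```python
import random

def steady_state_step(pop, fitness, crossover, mutate):
    """One Genitor-style step: select two parents by linear rank,
    produce a single offspring, and replace the current worst."""
    pop.sort(key=fitness, reverse=True)            # best first
    weights = list(range(len(pop), 0, -1))         # linear rank weights
    p1, p2 = random.choices(pop, weights=weights, k=2)
    child = mutate(crossover(p1, p2))
    if fitness(child) > fitness(pop[-1]):
        pop[-1] = child                            # replace the worst
    return pop

# Toy one-max problem: maximize the number of 1 bits in a string.
random.seed(1)
fitness = lambda s: sum(s)
crossover = lambda a, b: [random.choice(bits) for bits in zip(a, b)]
def mutate(s):
    s = list(s)
    s[random.randrange(len(s))] ^= 1               # flip one random bit
    return s

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(2000):
    pop = steady_state_step(pop, fitness, crossover, mutate)
best = max(map(fitness, pop))
```

Because each step changes only one population slot, successive steps touch mostly disjoint state, which is the kind of structure a parallel rearrangement can exploit.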
A general class of stochastic search algorithms, random heuristic search, is reviewed. A general convergence theorem for this class is proved. Since the simple genetic algorithm is an instance of random heuristic search, a corollary is a result on logarithmic time to convergence for GAs.
Genetic algorithms are becoming increasingly popular as a tool for optimization in signal processing environments due to their tolerance for noise. Several types of genetic algorithms are compared against a mutation-driven stochastic hill-climbing algorithm on a standard set of benchmark functions to which Gaussian noise has been added. The genetic algorithms used in these comparisons include an elitist simple genetic algorithm, the CHC adaptive search algorithm, and delta coding. Finally, several hybrid genetic algorithms are described and compared on a very large and noisy seismic data imaging problem.
Genetic algorithms have been used for many diverse applications. In these applications, possible solutions are represented by linear strings. In many other applications, however, strings cannot adequately model the solutions. A very large class of such problems is in the area of computer vision and image processing. Here the images are 2D and the objects present in them are either 2D or 3D. Hence the strings and the associated genetic operators are not directly applicable. It is necessary to allow the genetic algorithm to operate directly on images or 2D arrays, since the underlying processes responsible for the formation of the solutions, e.g., rotation and translation, are inherently 2D in nature. We have extended the concepts defined for linear strings to 2D chromosomes, and several new concepts have been developed to describe genetic operators on 2D chromosomes. The traditional genetic operators are extended, and some new geometric operators are introduced for the 2D chromosomes. A typical computer vision problem is used to demonstrate the use of the operators. A prototype parallel version has been implemented on a CM-2 (SIMD) machine, and a CM-5 (MIMD) version is currently being explored.
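As one illustration of what a 2D genetic operator might look like, the following hypothetical crossover swaps a rectangular region between two 2D chromosomes; it is a sketch of the idea, not an operator taken from the paper.

```python
import random

def block_crossover(a, b):
    """Hypothetical 2D crossover: swap a random rectangular region
    between two 2D chromosomes, a 2D analogue of one-point crossover."""
    h, w = len(a), len(a[0])
    r0, r1 = sorted(random.sample(range(h + 1), 2))
    c0, c1 = sorted(random.sample(range(w + 1), 2))
    child = [row[:] for row in a]          # copy parent a
    for r in range(r0, r1):
        child[r][c0:c1] = b[r][c0:c1]      # graft a block of parent b
    return child

random.seed(3)
A = [[0] * 6 for _ in range(6)]
B = [[1] * 6 for _ in range(6)]
child = block_crossover(A, B)
ones = sum(map(sum, child))                # area of the swapped block
```

Geometric operators such as rotation or translation of a sub-block could be defined in the same style, operating on index ranges of the 2D array.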
This paper details the application of a parallel genetic algorithm to the air-ground-air frequency assignment problem. Preliminary results indicate that the technique is successful in finding acceptable assignments, satisfying over 90% of constraints, for realistically sized air-ground-air frequency assignment scenarios. Comparisons are made with a classical backtracking and forward-checking heuristic algorithm, which is shown to be inferior to the genetic algorithm in terms of the execution time required to find reasonable frequency assignments.
We describe segmentation based on textures using the label and image model of D. Geman et al. We replace their maximum a posteriori estimation criterion with a Bayesian estimator that minimizes the sum of the pixel misclassification probabilities. The new estimation goal allows the use of a different computational algorithm, based on approximating lattices by trees. An example demonstrating an accurate segmentation of a collage of Brodatz textures is included.
A challenging problem in oceanography is the dense estimation of the surface of the ocean in a statistically meaningful manner, given sparse and irregularly sampled measurements of the surface. A previously developed highly efficient multiscale estimation framework is shown to be an appropriate tool for this task, and we demonstrate the manner of application and present experimental results. The algorithm is capable of computing 250,000 surface estimates of the ocean with error statistics in five seconds on a Sparc workstation.
Markov random field techniques for region labeling have become prevalent in image processing research since the seminal work of Geman and Geman in the early 1980s. Their use in actual working systems, however, has been hampered by a number of difficult problems. Perhaps the most intractable of these has been the convergence rate of the algorithm. In this paper, we present a technique that introduces stable points into the labeling array of the random field. The stable points are determined by using a simple statistical pixel classifier together with a confidence measure at each pixel. The most confident (top 1%) pixel labels are selected, and these labels are used to initiate the evolution of the random field. The stable points introduce pockets of "certainty" into the evolution of the process. The labeling is locally stable, and even small numbers of stable points vastly improve the convergence rate of the algorithm.
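The stable-point selection step can be sketched as follows; the confidence values, label set, and the handling of the 1% threshold are illustrative assumptions.

```python
import random

def stable_points(confidences, labels, top=0.01):
    """Clamp the most confidently classified pixels as fixed labels
    before MRF relaxation; all other pixels start unlabeled (None)."""
    n = len(confidences)
    order = sorted(range(n), key=lambda i: confidences[i], reverse=True)
    keep = set(order[:max(1, int(top * n))])
    return [labels[i] if i in keep else None for i in range(n)]

# Toy per-pixel classifier output: a confidence and a tentative label.
random.seed(0)
conf = [random.random() for _ in range(1000)]
labs = [i % 3 for i in range(1000)]
init = stable_points(conf, labs)
fixed = sum(l is not None for l in init)   # pixels clamped as stable
```

During relaxation, the clamped entries would simply be skipped by the update step, so their "certainty" propagates outward to their neighbors.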
Motion vector (MV) estimation plays an important role in motion-compensated video coding. In this research, we propose a new fast MV estimation algorithm based on a statistical approach. Our algorithm consists of two components. First, instead of using a fixed search window centered at the origin, we allow the center and the size of the search window to vary according to the motion vectors obtained in coding previous frames. Second, instead of performing an exhaustive block matching within the search window, we introduce a probability density function which peaks at the window center and decays as the distance grows, and we generate a few search positions according to this density. Experiments are performed to illustrate the excellent performance of the proposed algorithm.
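The sampled search can be sketched as follows; the Gaussian density, the sample count, and the toy matching cost are illustrative assumptions, not the paper's parameters.

```python
import random

def candidate_positions(center, n, spread):
    """Draw n candidate motion vectors from a density that peaks at the
    predicted window center and decays with distance (Gaussian here)."""
    return [(round(center[0] + random.gauss(0, spread)),
             round(center[1] + random.gauss(0, spread)))
            for _ in range(n)]

def best_match(cost, center, n=16, spread=2.0):
    """Evaluate only the sampled positions (plus the predicted center)
    instead of exhaustively searching the whole window."""
    return min([center] + candidate_positions(center, n, spread), key=cost)

# Toy matching cost with its minimum at the true motion vector (3, -1);
# the window center (2, 0) stands in for a prediction from past frames.
random.seed(42)
cost = lambda mv: (mv[0] - 3) ** 2 + (mv[1] + 1) ** 2
mv = best_match(cost, center=(2, 0))
```

Evaluating a handful of sampled positions replaces the hundreds of evaluations of an exhaustive window search, at the price of occasionally missing the exact minimum.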
Due to bandwidth limitations, many remote sensing systems have strengths in either spatial or spectral resolution, an inevitable tradeoff. It is therefore important to have capabilities to merge data and take advantage of the strengths of each. In this paper, we describe a method for enhancing the spatial resolution of multispectral images using a higher resolution panchromatic image. Our method uses the local correlation between low resolution multispectral and corresponding panchromatic radiances to generate sharpened products. The resultant sharpened multispectral images not only have improved visual interpretability but also preserve radiometric fidelity for accurate machine exploitation. This is substantiated by evaluation of the quality of our band sharpening products in terms of machine exploitation accuracy. Results of a sensitivity analysis of the sharpening algorithm to noise are also presented.
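The local-correlation idea can be sketched with a single linear fit on toy radiances; the paper's method fits the relation locally, per window, and the signals below are made up for illustration.

```python
def sharpen_band(ms_lowres, pan_lowres, pan_highres):
    """Sharpen one multispectral band: fit a linear relation between the
    low-resolution band and the (degraded) panchromatic image, then
    apply it to the full-resolution panchromatic radiances.  A single
    global fit is shown; a per-window fit gives the local version."""
    n = len(ms_lowres)
    mx = sum(pan_lowres) / n
    my = sum(ms_lowres) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(pan_lowres, ms_lowres))
    var = sum((x - mx) ** 2 for x in pan_lowres)
    gain = cov / var
    bias = my - gain * mx
    return [gain * p + bias for p in pan_highres]

# Toy radiances: the band is a scaled, offset copy of the pan signal.
pan_hi = [10, 12, 14, 16, 18, 20, 22, 24]
pan_lo = [11, 15, 19, 23]               # 2x-degraded pan (pair averages)
ms_lo = [0.5 * p + 3 for p in pan_lo]   # band correlated with pan
sharp = sharpen_band(ms_lo, pan_lo, pan_hi)
```

Because the fitted relation maps panchromatic values into the band's own radiometric scale, the sharpened output stays consistent with the original band radiances, which is what preserves fidelity for machine exploitation.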
The paper contains a brief description of fractal image compression methods with sample compression results. We also present comparative results for two fractal schemes, the discrete cosine transform, and a wavelet method. We show that, with PSNR as the measure of image quality, some fractal schemes perform best over the range of compression ratios of most interest.
In this work, we use the 1D Haar transform fractal estimation algorithm to calculate the local fractal dimension estimates of 2D texture data. The new algorithm provides directed fractal dimension estimates which are used as features for texture segmentation. The method is fast due to the pyramid structure of the Haar transform and nearly optimal in the maximum likelihood sense for fBm data. We compare the low complexity of this new algorithm with the complexity of existing fractal feature extraction techniques, and test our new method on fBm data and real Brodatz textures.
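A rough 1D sketch of the idea follows: build the Haar pyramid, then read a fractal dimension off the slope of the log variance of the detail coefficients across levels. The slope-to-dimension rule used here (slope = 2H + 1, D = 2 − H for a 1D profile) and all constants are illustrative assumptions, not the paper's maximum-likelihood estimator.

```python
import math

def haar_fractal_dimension(signal):
    """Estimate a fractal dimension from the slope of the log variance
    of Haar detail coefficients across pyramid levels."""
    levels = []
    approx = list(signal)
    while len(approx) >= 4:
        pairs = range(0, len(approx) - 1, 2)
        details = [(approx[i] - approx[i + 1]) / 2 for i in pairs]
        approx = [(approx[i] + approx[i + 1]) / 2 for i in pairs]
        levels.append(sum(d * d for d in details) / len(details))
    # Least-squares slope of log2(variance) vs. level; for fBm the slope
    # is 2H + 1, and the dimension of a 1-D profile is D = 2 - H.
    xs = list(range(len(levels)))
    ys = [math.log2(v + 1e-12) for v in levels]
    n, sx, sy = len(levels), sum(xs), sum(ys)
    slope = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
            (n * sum(x * x for x in xs) - sx * sx)
    return 2 - (slope - 1) / 2

smooth_D = haar_fractal_dimension(list(range(256)))  # smooth ramp
rough_D = haar_fractal_dimension([0, 1] * 128)       # oscillating profile
```

The pyramid costs only O(N) operations per signal, which is the source of the speed advantage over box-counting-style feature extractors.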
In the case of image restoration preserving discontinuities, a priori information on image structure is represented (Geman and Geman 1984) in the form of a Markov random field consisting of a coupled field of intensity and binary line processes. We propose a new scheme achieving thermal equilibrium of the continuous intensity field. The scheme consists of adding a quasi-static noise process to the intensity field, i.e., a noise with dynamics much slower than the characteristic relaxation times of the field, before going through a deterministic minimization. An algorithm is then devised upon this scheme. When associated with a classical Gibbs sampler algorithm for treatment of the line process, it performs global minimization of the energy. We show that the intensity field evolves in thermal equilibrium, and we present simulations illustrating thermal equilibrium of the coupled intensity and line field. Our algorithm provides better energy minimization than mixed annealing, a comparable algorithm in terms of computational load, while retaining the same good prospects for parallel implementation.
Probabilistic relaxation has been used previously as the basis for the development of an algorithm to match features extracted from an image with corresponding features from a model. The technique has proved very successful, especially in applications that require real-time performance. On the other hand, its use has been limited to small problems, because the complexity of the algorithm varies with the fourth power of the problem size. In this paper, we show how the computational complexity can be much reduced. The matching is performed in two stages. In the first stage, only small subsets of the most salient features are used to provide an initial match. The results are used to calculate projective parameters that relate the image to the model. In the second stage, these parameters are used to simplify the matching of the entire feature sets, in a second pass of the matching algorithm.
Finite mixture models (mixture models) estimate probability density functions based on a weighted combination of density functions. This work investigates a combined stochastic and deterministic optimization approach of a generalized kernel function for multivariate mixture density estimation. Mixture models are selected and optimized by combining the optimization characteristics of a multi-agent stochastic optimization algorithm, based on evolutionary programming, and the EM algorithm. A classification problem is approached by optimizing a mixture density estimate for each class. Rissanen's minimum description length criterion provides the selection mechanism for evaluating mixture models. A comparison of each class' posterior probability (Bayes rule) provides the classification decision procedure. A 2-D, two-class classification problem is posed, and classification performance of the optimal mixture models is compared with a kernel estimator whose bandwidth is optimized using the technique of least-squares cross-validation.
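A minimal 1D illustration of the MDL-based selection follows. Plain EM with a deterministic quantile initialization stands in for the paper's combined evolutionary/EM optimizer, and the multivariate generalized-kernel machinery is not reproduced.

```python
import math, random

def gauss(x, m, s):
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def em_gmm(data, k, iters=100):
    """Fit a 1-D, k-component Gaussian mixture with EM and return the
    log likelihood; means start at evenly spaced sample quantiles."""
    srt, n = sorted(data), len(data)
    mus = [srt[int((j + 0.5) * n / k)] for j in range(k)]
    sigmas, ws = [1.0] * k, [1.0 / k] * k
    for _ in range(iters):
        resp = []
        for x in data:                       # E step: responsibilities
            ps = [w * gauss(x, m, s) for w, m, s in zip(ws, mus, sigmas)]
            t = sum(ps) or 1e-300
            resp.append([p / t for p in ps])
        for j in range(k):                   # M step: update parameters
            nj = sum(r[j] for r in resp) + 1e-12
            ws[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
    return sum(math.log(sum(w * gauss(x, m, s)
                            for w, m, s in zip(ws, mus, sigmas)) + 1e-300)
               for x in data)

def mdl(loglik, k, n):
    """Rissanen-style MDL score: -log L plus a code length for the
    3k - 1 free parameters of a 1-D, k-component mixture."""
    return -loglik + 0.5 * (3 * k - 1) * math.log(n)

random.seed(7)
data = ([random.gauss(-4, 1) for _ in range(150)] +
        [random.gauss(4, 1) for _ in range(150)])
scores = {k: mdl(em_gmm(data, k), k, len(data)) for k in (1, 2)}
best_k = min(scores, key=scores.get)
```

On this clearly bimodal sample the likelihood gain of the two-component model dwarfs its extra parameter cost, so MDL selects k = 2.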
Two relaxation schemes, a probabilistic one and a dictionary-based one, applied to edge detection in images are described. The problem of local edge detection is defined using a statistical approach. The solution, in terms of statistical decision theory, leads to a multiple, composite, overlapping testing problem that involves configurations of sets of four pixels (quadruplets). The relaxation schemes are also developed using the quadruplets as labeling objects. The initial probabilities for the label set of each object are obtained from the conditional risks given by the local statistical tests. The interaction neighborhood adopted for the two methods is the 4-neighborhood. The iterative label probability updating is performed using a classical heuristic procedure in the two schemes. Tests using noisy synthetic and real images are presented. An experimental analysis of convergence to a consistent and non-ambiguous labeling, and of the speed of convergence, is performed for the two schemes, and the results are compared. A change in the dictionary according to a modification in the definition of consistency is proposed, and the resulting scheme is tested and compared with the other two.
This work addresses an optimization approach to sensor fusion and applies the technique to magnetic resonance image (MRI) restoration. Several images are related using a physical model (spin equation) to corresponding basis images. The basis images (proton density and two nuclear relaxation times) are determined from the MRI data and subsequently used to obtain excellent restorations. The method also has been applied to image restoration problems in other domains. All images are modeled as Markov random fields (MRF). Four maximum a posteriori (MAP) restorations are presented. The "product" and "sum" forms for basis (signal) and spatial correlations are discussed, compared, and evaluated for various situations and features. A novel method of global optimization necessary for the nonlinear techniques is also introduced. This approach to sensor fusion, using global optimization, MRF models, and Bayesian techniques, has been generalized and applied to other problem domains, such as the restoration of multiple-modality laser range and luminance signals.
A classification method is proposed to recognize images of multiple classes based on algebraic feature extraction and classifier-combining techniques. First, an image algebraic feature extraction method is applied to all pairs of classes to extract the image features. Then, a nearest neighbor classifier with a small number of prototypes is designed for each pair of classes based on the algebraic features of training samples. Finally, a neural network technique is used to combine the measurement values of paired classes. Experiments on the U.S. zip code database show that the method is effective.
The classification of remotely sensed satellite data for land surface mapping is a complex pattern recognition problem. Recent work has shown that neural networks perform better than parametric or statistical classifiers in a large number of cases. Since neural and parametric classifiers are based on very different mathematical models, it is appropriate to attempt to integrate them in order to exploit the best aspects of both. A simple method for integrating neural and statistical classifiers effectively is proposed in this paper. This method has been developed with the aim of improving land cover map products derived from multi-sensor data sets. The integration is achieved in a multi-stage process in which two classifiers of different types are initially trained to classify the same multi-sensor training data, and then samples for which the two classifiers are in disagreement are used to train an additional second-stage neural classifier. Preliminary results show that significant improvements can be made in overall classification performance compared to using either neural or parametric classifiers alone.
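At decision time the multi-stage scheme reduces to a few lines; the toy one-dimensional decision rules below are hypothetical stand-ins for the trained neural and parametric classifiers.

```python
def combine(c1, c2, arbiter, x):
    """Multi-stage combining: when the two first-stage classifiers
    agree, return the shared label; otherwise defer to the second-stage
    classifier trained on the disagreement samples."""
    a, b = c1(x), c2(x)
    return a if a == b else arbiter(x)

# Toy classifiers on 1-D inputs (illustrative decision thresholds).
c1 = lambda x: int(x > 0)          # stand-in "neural" classifier
c2 = lambda x: int(x > 0.5)        # stand-in "parametric" classifier
arbiter = lambda x: int(x > 0.25)  # trained on the disagreement band
labels = [combine(c1, c2, arbiter, x) for x in (-1.0, 0.1, 0.3, 0.9)]
```

Only inputs falling in the disagreement region (here, between 0 and 0.5) ever reach the second-stage classifier, so the arbiter can specialize on exactly the cases the first stage finds hard.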
Model-based 2-D object recognition is investigated using neural networks. Object recognition is treated as subgraph matching, and a neural network system is proposed to perform it. The system consists of a large Hopfield network, called the global network, and several small Hopfield networks, called subnetworks. The system starts with a randomly set initial state of the global network. The subnetworks are dynamically created based on the stable output state of the global network, and the outputs of the subnetworks are then fed back to the global network to reset its initial state. This process continues until the whole system is stabilized, at which point the optimal subgraph matching is obtained. This method avoids the local-minimum problem that arises when using a single Hopfield network and also requires much less computation time than the simulated annealing algorithm. Computer simulations are presented to verify the method.
This work investigates the application of evolutionary search to cascade-correlation learning architectures. Evolutionary programming is used to generate the hidden weights of each candidate hidden unit in the cascade-correlation learning paradigm. The output weights are adapted using deterministic techniques. Evolutionary search is also used to modify the connectivity of each candidate unit so that parsimonious structures may be generated during the neural network construction process. This approach is appealing from a computational perspective since only a population of hidden nodes is being optimized as opposed to a population of neural networks. Results are given for selected low-dimensional examples.
The possibility of optically implementing a mean field annealing algorithm for nonlinear noise filtering of grey-level images is addressed. To cope with grey-level images, we consider an algorithm that corresponds to a Q-state spin-Ising model. The method has been implemented with an optoelectronic feedback loop incorporating a defocusing camera, a spatial light modulator, and a frame grabber. Experimental results are reported.
Linear image restoration techniques induce erroneous detail around sharp intensity changes. Thus, considerable work has centered on nonlinear methods, which incorporate constraints to reduce the artifacts generated in the restoration. In our paper, we examine the applicability of genetic algorithms to solving optimization problems posed by nonlinear image recovery techniques, particularly by maximum entropy restoration. Each point in the solution space is a feasible image, with the pixels as decision variables. Search is multiobjective: the entropy of the estimate must be maximized, subject to constraints dependent on the observed data and image degradation model. We use Pareto techniques to achieve this combined requirement, and problem-oriented knowledge to direct the search. Typical issues for genetic algorithms are addressed: chromosomal representation, genetic operators, selection scheme, and initialization.
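The Pareto machinery rests on a dominance test over the two objectives; here is a sketch with a toy 1D "image", an identity degradation model, and illustrative objective definitions, none of which are taken from the paper.

```python
import math

def entropy(img):
    """Image entropy term, to be maximized (illustrative definition)."""
    t = sum(img)
    return -sum((p / t) * math.log(p / t) for p in img if p > 0)

def misfit(img, observed):
    """Data-fit violation, to be minimized; an identity degradation
    model is assumed in place of a real blur operator."""
    return sum((a - b) ** 2 for a, b in zip(img, observed))

def dominates(a, b, observed):
    """Pareto dominance for the two-objective formulation: a dominates b
    if it is no worse in both objectives and strictly better in one."""
    ea, eb = entropy(a), entropy(b)
    ma, mb = misfit(a, observed), misfit(b, observed)
    return ea >= eb and ma <= mb and (ea > eb or ma < mb)

observed = [2.0, 2.0, 2.0, 2.0]
flat = [2.0, 2.0, 2.0, 2.0]     # maximum entropy, zero misfit
peaky = [5.0, 1.0, 1.0, 1.0]    # lower entropy, worse fit
ok = dominates(flat, peaky, observed)
```

Selection in such a GA ranks individuals by dominance rather than by a single weighted score, so the entropy/data-fit trade-off does not have to be fixed in advance.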
In this paper, we present the results of using a neural network to search for duplicate addresses in a database, a problem known as data de-duplication. The problem of duplicated data arises when a distribution company maintains and updates its large-scale customer database. A back-propagation network has been used in our project. Several encoding approaches are developed and tested, and the testing results are presented in the paper.