This paper analyzes the properties of layered structures formed by cascading Boolean and stack filters and solves the optimal design problem using techniques developed under a training framework. We propose a multilayer filtering architecture in which each layer is a Boolean or a stack filter; the outputs of the intermediate filtering layers provide partial solutions to the optimization problem, while the final solution is given by the output of the last layer. Simulations show the effectiveness of the proposed algorithms in image restoration applications.
In this paper, we extend the concept of the Mi's of weighted median filters to stack filters. A fast algorithm is proposed to compute the Mi's of a stack filter, and the problem of synthesizing stack filters from rank selection probabilities through the Mi's is addressed. The necessary and sufficient condition for a set of Mi's to define a stack filter is presented, and a procedure is proposed for synthesizing a stack filter from a given set of rank selection probabilities.
In this paper we propose a novel class of learning vector quantizers (LVQ) based on multivariate data ordering. Linear LVQ is not the optimal estimator for non-Gaussian multivariate data distributions, nor is it robust to outliers or erroneous decisions. The novel LVQs use multivariate ordering to obtain location estimators that are robust and that provide superior and, in certain cases, optimal performance for non-Gaussian multivariate distributions. A special case of the novel class is the marginal median LVQ (MM LVQ), which uses the marginal median as its multivariate estimator of location.
A new fast running filtering algorithm of decompositional type is proposed for realizing stack filters based on a subclass of positive Boolean functions, namely cyclic positive Boolean functions. Distinctive to the algorithm is its use of Fibonacci p-codes, which make possible a unified approach to running filtering that contains as special cases the complete threshold decomposition (when p is greater than the maximal value of the input data) and the binary-tree threshold decomposition (when p equals 0). The optimal value of p depends on the statistics of the input data, and its choice reduces the complexity of running stack filtering.
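The complete threshold decomposition that the algorithm contains as a special case can be sketched in a few lines of Python. This is only an illustration (the helper names are mine, not the paper's), using the binary window-3 median as the positive Boolean function:

```python
def threshold_decompose(signal, levels):
    """One binary sequence per threshold level m: 1 where x >= m."""
    return [[1 if x >= m else 0 for x in signal] for m in range(1, levels + 1)]

def binary_median3(bits):
    """Window-3 binary median with edge replication (a positive Boolean function)."""
    padded = [bits[0]] + bits + [bits[-1]]
    return [1 if padded[i] + padded[i + 1] + padded[i + 2] >= 2 else 0
            for i in range(len(bits))]

def stack_median3(signal, levels):
    """Stack filter: filter every binary level, then sum the levels."""
    filtered = [binary_median3(layer) for layer in threshold_decompose(signal, levels)]
    return [sum(col) for col in zip(*filtered)]

x = [1, 3, 0, 2, 2, 0]
print(stack_median3(x, levels=3))  # [1, 1, 2, 2, 2, 0] -- equals the direct window-3 median
```

Each binary level is filtered independently by the same positive Boolean function, and the stacked results sum back to the grayscale output; fast decompositional realizations such as the one above reduce the cost of this per-level processing.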
There is a finite number of different weighted order statistic (WOS) filters of a fixed length N. However, even for relatively small values of N, one cannot immediately see whether two given WOS filters are the same simply by looking at the weights and the thresholds. This problem is addressed in this paper. We define two WOS filters to be equivalent (the same) if they produce the same output for arbitrary inputs. We show that an exact solution requires integer linear programming, and we then develop a hierarchical heuristic procedure that may provide a much quicker answer. The hierarchy starts with simple checks and proceeds to progressively more complicated tests; the procedure exits as soon as a definite conclusion is reached.
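Because WOS filters obey the stacking property, equivalence can be checked for small N by brute force over all binary windows. The sketch below (function names are hypothetical, and this exponential check is emphatically not the paper's hierarchical procedure) illustrates why the problem is well defined:

```python
from itertools import product

def wos_binary(bits, weights, threshold):
    """Binary WOS output: 1 iff the weighted sum of the window reaches the threshold."""
    return 1 if sum(w * b for w, b in zip(weights, bits)) >= threshold else 0

def wos_equivalent(w1, t1, w2, t2):
    """WOS filters are stack filters, so two of them agree on all inputs
    iff they agree on every binary window (2**N cases -- exponential in N)."""
    n = len(w1)
    return all(wos_binary(bits, w1, t1) == wos_binary(bits, w2, t2)
               for bits in product((0, 1), repeat=n))

# Doubling all weights and the threshold leaves the filter unchanged:
print(wos_equivalent((1, 1, 1), 2, (2, 2, 2), 4))  # True
print(wos_equivalent((1, 1, 1), 2, (1, 2, 1), 2))  # False
```

The exponential cost of this exhaustive test is precisely what motivates the paper's hierarchy of cheap checks before any expensive computation.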
The FatBear filter is a nonarithmetic filter for piecewise constant signals in high-noise situations. For each window position, the output is the pseudo-median of the medians of all sets of N + 1 values with minimum range. The Recursive FatBear filter extends the FatBear filter by replacing the original signal with the output before shifting the window to the next position. In this paper, we study the properties of the Recursive FatBear filter for pulse width filtering, impulse rejection, and edge enhancement. Experimental examples show that the Recursive FatBear filter is more effective than the FatBear filter at eliminating white Gaussian noise when the signal-to-noise ratio is large, while still enhancing edges. The fixed points of the FatBear filter are shown to also be fixed points of the Recursive FatBear filter. In addition, a 2-D hybrid MED-FatBear filter is introduced, with theoretical results for both the recursive and nonrecursive 2-D cases. Applications to synthetic data and to real images are given.
The realization of the α-trimmed mean filter on the threshold decomposition architecture is described in this paper. In the threshold decomposition architecture, an input sequence is thresholded at different levels to form a number of binary sequences. A decision is made at each level to determine whether the signal should be above the associated threshold level. The results at all levels are then combined to form the filter output. The decision rule used is the distinguishing feature of an individual filter; the current implementation of the α-trimmed mean filter is based on using a threshold logic function as the decision rule, thus allowing the decision rule to output multiple values. The statistical properties of the α-trimmed mean filter are analyzed based on this realization.
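Outside the threshold decomposition architecture, the filter itself is simple to state. As a reference point, a direct sketch (names are mine) is:

```python
def alpha_trimmed_mean(window, alpha):
    """Sort the window, drop the alpha smallest and alpha largest samples,
    and average what remains."""
    s = sorted(window)
    trimmed = s[alpha:len(s) - alpha]
    return sum(trimmed) / len(trimmed)

w = [9, 1, 2, 3, 100]
print(alpha_trimmed_mean(w, alpha=1))  # 4.666... -- the impulse 100 is trimmed away
print(alpha_trimmed_mean(w, alpha=2))  # 3.0 -- alpha = (N - 1) / 2 recovers the median
```

Varying α trades the impulse rejection of the median (large α) against the Gaussian-noise smoothing of the mean (α = 0), which is what makes the filter attractive for mixed-noise environments.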
Although the edge response of nonlinear filters has been the topic of much research, the behavior of many filters at 2D structures such as sharp corners is not as well understood. We introduce a new technique that defines the response of a 2D filter at corners of any angle. The response measure may be expressed either as fractional preservation or as an attenuation in decibels. Plotting the response measure in polar form illustrates the corner response of a filter both intuitively and quantitatively. The corner response may be computed for a particular size and shape of filter window on a discrete lattice, or may be determined in continuous space for a filter with a specific window shape. The continuous space response measure gives the response of the filter to corners in general, and usually corresponds closely with that determined for the filter on a discrete lattice. The corner responses of several widely used filters (median, averaging, and morphological) are compared.
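The notion of a corner response can be illustrated with a toy discrete experiment: build a binary quarter-plane corner and measure how much of the corner tip a 3×3 median filter preserves. This sketch only illustrates the idea of attenuation at corners, not the paper's polar response measure for arbitrary angles:

```python
import statistics

def median_filter3(img):
    """3x3 median filter; border pixels use the available neighbors only."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if 0 <= i + di < h and 0 <= j + dj < w]
            out[i][j] = statistics.median(vals)
    return out

# Binary 90-degree corner: foreground occupies the quadrant i >= 4, j >= 4.
img = [[1 if i >= 4 and j >= 4 else 0 for j in range(9)] for i in range(9)]
out = median_filter3(img)
print(out[4][4])  # 0 -- the corner tip is clipped (only 4 of 9 neighbors are foreground)
print(out[5][5])  # 1 -- the interior is preserved
```

Repeating this measurement for corners of different angles and plotting the preserved fraction in polar form gives exactly the kind of intuitive, quantitative comparison the paper describes.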
The choice and detailed design of the structuring elements play a pivotal role in soft morphological processing of images. This paper proposes a learning method, based on simulated annealing, for optimizing the structuring elements of soft morphological filters under a given optimization criterion. Experimental results illustrate that the proposed method can be applied to finding optimal structuring systems in practical situations.
In this paper, we develop the statistical properties of grayscale compound function-processing (FP) morphological operators. This is achieved by utilizing the basis matrix representation, an extension of the basis function theorem. It is shown that the basis matrix is skew symmetric, a fact that is exploited extensively in finding the output density functions of grayscale opening and closing. The proposed method is also applicable to function-set-processing operators, since these are a special case of the FP operators.
Split-beam sonar binary images are inherently noisy, with large quantities of shot noise as well as many missing data points. We address the problem of their restoration via mathematical morphology. Conventional restoration techniques for these types of images do not make use of any of the spatial relationships between data points, such as the qualitative observation that outliers tend to have much larger distances to neighboring pixels. We first define an explicit noise model that characterizes the image degradation process for split-beam sonar images. A key feature of the model is that the degradation is split into two parts, a foreground component and a background component, with the amount of noise occurring in the background decreasing with distance from the underlying signal object. Thus, outliers in the model have the same statistical properties as those observed in training data. Next we propose two restoration algorithms for these images, based respectively on morphological distance transforms and on dilation with a toroid-shaped structuring element followed by intersection. Finally, we generalize to processing other kinds of imagery where applicable.
Sieves decompose 1D bounded functions, e.g., f, into a set of increasing-scale granule functions {d_m, m = 1, …, R} that represent the information in a manner analogous to the pyramid of wavelets obtained by linear decomposition. Sieves based on sequences of increasing-scale open-closings with flat structuring elements (M and N filters) map f to {d}, and the inverse process maps {d} to f. Experiments show that a more general inverse exists such that {d'} maps to f' and back to {d'}, where the granule functions {d'} are a subset of {d} in which granules may have changed amplitudes, possibly including zero but not a change of sign. An analytical proof of this inverse is presented. This key property could prove important for feature recognition and opens the way for an analysis of the noise resistance of these sieves. The resulting theorems apply neither to parallel open-closing filters nor to median-based sieves, although root median sieves do 'nearly' invert and offer better statistical properties.
The exact probability density for a windowed observation of a discrete 1D Boolean process having convex grains is found via recursive probability expressions. This observation density is used as the likelihood function for the process and numerically yields the maximum- likelihood estimator for the process intensity and the parameters governing the distribution of the grain lengths. Maximum-likelihood estimation is applied in the case of Poisson distributed lengths.
The concept of a morphological size distribution is well known. It can be envisioned as a sequence of progressively more highly smoothed images, a nonlinear analogue of scale space. Whereas the differences between Gaussian lowpass filtered images in scale space form a sequence of approximately Laplacian bandpass filtered images, the difference image sequence from a morphological size distribution is not bandpass in any usual sense for most images. This paper presents a proof that a strictly size-band-limited sequence can be created along one dimension of an n-dimensional image. This result is used to show how an image time sequence can be decomposed into a set of sequences, each of which contains only events of a specific limited duration, and it is shown that this decomposition can be used for noise reduction. The paper also presents two algorithms that create (pseudo) size bandpass decompositions in more than one dimension from morphological size distributions. One algorithm uses Vincent grayscale reconstruction on the size distribution; the other reconstructs the difference image sequence.
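A minimal 1D granulometry illustrates the difference sequence of a size distribution. The sketch below uses plain flat openings of increasing length (helper names are mine), and shows how each difference channel isolates the features of one size:

```python
def opening(x, k):
    """Flat opening of a 1D sequence by a length-k segment:
    erosion (running min) followed by dilation (running max)."""
    n = len(x)
    mins = [min(x[i:i + k]) for i in range(n - k + 1)]
    return [max(mins[max(0, i - k + 1):min(i, n - k) + 1]) for i in range(n)]

x = [0, 5, 5, 0, 3, 0]
o1, o2, o3 = opening(x, 1), opening(x, 2), opening(x, 3)
d1 = [a - b for a, b in zip(o1, o2)]  # detail removed between scales 1 and 2
d2 = [a - b for a, b in zip(o2, o3)]  # detail removed between scales 2 and 3
print(d1)  # [0, 0, 0, 0, 3, 0] -- the width-1 spike
print(d2)  # [0, 5, 5, 0, 0, 0] -- the width-2 plateau
```

Summing the difference channels and the coarsest opening reconstructs the original, but as the paper notes, these channels are not bandpass in the linear sense, which is what motivates the strictly size-band-limited construction.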
Discrete soft morphological filters generalize standard morphological filters by modifying the structuring set. Formally, soft morphological operations correspond to sets of standard morphological operations. Further development of soft morphological filters, based on a hierarchical description of the structuring system, connects morphological operations with pyramid transformations.
Linear correlation techniques are useful for template matching, but they are computationally intensive because large numbers of multiplications are involved in their calculation. This paper introduces a family of rank-order-based criteria (ROBC) that are multiplier free and do not depend on the local average of the image/template. The most primitive member of this family has properties analogous to those of the normalized linear correlation; hence, we call it the normalized min-max cross-correlation (NMCC). Experimental results describe the performance of the introduced criteria in the presence of Gaussian and impulsive noise. These experiments show that the NMCC gives sharp and robust match indications in the presence of Gaussian noise, while other members of the ROBC family with more rank-order terms are also robust with respect to impulsive noise.
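The abstract does not state the NMCC formula, but a multiplier-free min/max matching score of the same flavor can be sketched as follows. This definition (summed pointwise minima over summed pointwise maxima) is my illustration of the idea, not necessarily the paper's NMCC:

```python
def minmax_score(patch, template):
    """Multiplier-free matching score: summed pointwise minima over summed
    pointwise maxima; equals 1.0 only for a perfect match. Illustrative
    definition, not necessarily the paper's NMCC."""
    num = sum(min(a, b) for a, b in zip(patch, template))
    den = sum(max(a, b) for a, b in zip(patch, template))
    return num / den

t = [1, 4, 2]
print(minmax_score([1, 4, 2], t))  # 1.0
print(minmax_score([4, 1, 0], t))  # 0.2
```

Only comparisons and additions are needed, which is the point of the ROBC family: matching scores that avoid the multiplications of linear correlation.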
Mathematical morphology has been widely used in recent years in the image processing area. Owing to the nonlinear nature of morphological operations, their application to some image noise removal problems has achieved very impressive results. However, most morphological algorithms use only a single structuring element. To optimize performance on the various parts of an image, we introduce morphological operations using an adaptive structuring element for noise removal in binary and grayscale images. These techniques were used in the preprocessing for character recognition problems at CEDAR, SUNY at Buffalo, New York. Demonstrations of the improved performance of the algorithm are provided.
The watershed transformation is used in morphological image segmentation and can be considered a topographic region growing method. Recently, fast watershed algorithms have been proposed for general-purpose computers. They are based on immersion simulations of the image surface, which is treated as a topographic relief in which the grey-level values of pixels stand for altitude. In this paper, the operation of existing fast watershed algorithms is analyzed and a new extension is proposed. Drawbacks of the existing algorithms are pointed out, studied, and illustrated with test images; in several cases these problems lead to a loss of information about image details and structures, or even to unprocessed areas in the image. The new watershed algorithm overcomes these deficiencies and preserves more information about image details. It is based on a split-and-merge scheme and constantly monitors the presence of isolated areas during the immersion simulation, considering them as new catchment basins. Application of the split-and-merge watershed algorithm to marker-based image segmentation is discussed.
Until recently, attention has been focused on linear methods for achieving multiscale decomposition. Unfortunately, even filters such as Gaussians produce decompositions in which information associated with edges and impulses is spread over many, or all, scale-space channels, and this compromises both edge location and, potentially, pattern recognition. An alternative is to use nonlinear filter sequences (filters in series, known as sieves) or banks (in parallel). Recently, multiscale decompositions using both erosion (dilation) and closing (opening) operations with sets of increasing-scale flat structuring elements have been used to analyze edges over multiple scales and the granularity of images. These do not introduce new edges as scale increases; however, they are not at all statistically robust in the face of, for example, salt-and-pepper noise. This paper shows that sieves also do not introduce new edges, are very robust, and perform at least as well as discrete Gaussian filters when applied to sampled data. Analytical support for these observations is provided by the morphology decomposition theorem discussed elsewhere in this volume.
This paper presents some optimal approaches to the morphological top-hat transform. When using the top-hat transform, size estimation of the structuring elements is critical in performing tasks such as object segmentation; one typical example is the moving-ball algorithm. Since the objects of particular interest possess various size measures, an optimal procedure for selecting structuring functions appears appropriate for the purpose of adaptive thresholding. An optimization design can achieve a minimum error according to certain rules of error estimation. Three cases are considered. The first investigates cylindrical structuring elements and objects. The second concerns a conical model, in which a cone is modeled so as to optimize the top-hat transform. The third presents an optimal algorithm via threshold area or umbra, based on a cylindrical model. As is often typical in random geometric modeling, optimization leads very quickly to quite complicated mathematical expressions involving the distributions of the parameters.
Global thresholding is widely used in image processing to generate binary images, which are used by various pattern recognition systems. Typically, many features present in the original gray-level image are lost in the resulting binary image. This paper presents an adaptive thresholding algorithm that maximizes the edge features within the gray-level image. The Gaussian pyramid algorithm is used to find the local gray-level variations present in the original image. The resulting Gaussian pyramid image is then subtracted from the original gray-level image, removing the local variations in illumination. This new image is then adaptively thresholded using the adaptive contour entropy algorithm. The resulting binary images have been shown to contain more edge features than binary images generated using global thresholding techniques.
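The background subtraction step can be sketched in 1D with a crude box blur standing in for the Gaussian-pyramid background estimate. The names and the zero threshold below are my simplifications; the paper thresholds the residual with the adaptive contour entropy algorithm:

```python
from statistics import mean

def box_blur(x, k):
    """Crude lowpass stand-in for the Gaussian-pyramid background estimate."""
    n = len(x)
    return [mean(x[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def adaptive_binarize(x, k):
    """Subtract the local background, then threshold the residual at zero."""
    return [1 if v - b > 0 else 0 for v, b in zip(x, box_blur(x, k))]

# Linear illumination ramp with two bright details at positions 2 and 7.
x = [i + (5 if i in (2, 7) else 0) for i in range(10)]
print(adaptive_binarize(x, k=2))  # [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
```

A single global threshold on x would keep one whole end of the ramp; subtracting the local background first lets both details survive binarization regardless of where they sit on the illumination gradient.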
It is frequently necessary to preprocess images to remove the background. In this paper a controlled study of background removal by different types of filter is presented. The bandpass characteristics of each type of filter are optimized, and it is shown that the morphological top-hat transform as usually implemented is very sensitive to noise, but that replacing the opening (or closing) step by a median sieve (a cascade of median filters with structuring elements of increasing size) removes the background in a much more robust fashion.
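A 1D sketch of the standard top-hat (signal minus its opening) shows how it strips a slowly varying background while keeping narrow peaks. Helper names are mine; the paper's robust variant would replace the opening with a median sieve:

```python
def opening(x, k):
    """Flat opening by a length-k segment: erosion then dilation."""
    n = len(x)
    mins = [min(x[i:i + k]) for i in range(n - k + 1)]
    return [max(mins[max(0, i - k + 1):min(i, n - k) + 1]) for i in range(n)]

def top_hat(x, k):
    """White top-hat: the signal minus its opening."""
    return [a - b for a, b in zip(x, opening(x, k))]

# Ramp background with narrow peaks at positions 3 and 7.
x = [i + (10 if i in (3, 7) else 0) for i in range(10)]
print(top_hat(x, k=4))  # [0, 0, 0, 9, 0, 0, 0, 11, 2, 3] -- peaks kept, small border residue
```

Because a single noise impulse survives any opening smaller than itself, this plain top-hat passes impulses straight through, which is the sensitivity to noise the paper addresses.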
An algorithm has been developed that uses stochastic relaxation in three dimensions to segment brain tissues from images acquired using multiple echo sequences in magnetic resonance imaging (MRI). The initial volume data is assumed to represent a locally dependent Markov random field. Partial volume estimates for each voxel are obtained, yielding the fractional composition of multiple tissue types for individual voxels. Training the algorithm requires a minimum of user intervention: the manual outlining of regions of interest in a sample image from the volume. Segmentations obtained from multiple echo sequences are determined independently and then combined by forming the product of the probabilities for each tissue type. The implementation has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment 3D MRI data sets using multiple sclerosis lesions, gray matter, white matter, and cerebrospinal fluid as the partial volumes. Results correspond well with manual segmentations of the same data.
We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model, using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters that correspond to and operate on the different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom verify the superiority of the proposed method over a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
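The L-filter form itself, a fixed linear combination of the sorted window samples, is easy to sketch. The paper designs the coefficients per region by minimum mean-squared error; the coefficients below are only illustrative special cases:

```python
def l_filter(window, coeffs):
    """L-filter: a fixed linear combination of the sorted window samples."""
    return sum(a * v for a, v in zip(coeffs, sorted(window)))

w = [7, 1, 9, 3, 5]
print(l_filter(w, [0, 0, 1, 0, 0]))  # 5 -- the median is a special case
print(l_filter(w, [0.2] * 5))        # the mean is another special case
```

Between these extremes, coefficients tuned to the noise distribution of each segmented region give the per-region MMSE filters that make up the bank described above.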
This paper describes the application of a fuzzy search technique to geophysical data processing. We discuss three kinds of image object search problems: velocity layer search in cross-well computerized tomography, surface search and reconstruction in 3D seismic images, and critical frequency curve search in digital ionograms. These three kinds of images have a common property: the objects have no crisp boundaries. For the first two applications, we search for small 2D regions called layers in a given image and apply a fuzzy segmentation technique called λ-connected segmentation. For the third application, we want to examine all of the reasonable curves in an ionogram; we have therefore developed a dynamic λ-connected search technique combined with the use of genetic algorithms. We conclude that a fuzzy system approach has important advantages for geophysical data processing.
The implementation of artificial neural networks (ANN) as CMOS analog integrated circuits offers several attractive features; stochastic models, especially the Boltzmann Machine, are particularly appealing. Recent studies of artificial models point out that classification is their most successful application field, and that real pattern recognition tasks, especially image processing by artificial neural networks, will require large networks. Most presented implementations of ANN are assumed to operate under ideal conditions, but real applications are subject to perturbations. For a digital implementation of an ANN, perturbation effects can be neglected to a first-order approximation; for analog and mixed digital/analog implementations, however, analysis of the network's behavior under perturbation is unavoidable. Unfortunately, very few papers analyze the behavior of analog neural networks under perturbation or their limitations. In this paper we analyze the behavior of a Boltzmann Machine model under physical temperature perturbation. The relation between the T parameter of the Boltzmann Machine model and the physical temperature of the circuit is established. Simulation results are presented and compensation of temperature effects is discussed.
In this paper we present a neural computation model for histogram-based multithresholding. An optimal, image-dependent thresholding vector is determined, whose number of elements is characterized by the histogram. Since our model is a parallel implementation of maximum interclass variance thresholding, convergence is much faster. Together with a real-time histogram builder, real-time adaptive image segmentation can be achieved. The multithresholding criterion is derived from maximizing the interclass variance; hence the average of the centers of gravity of two neighboring classes of pixel values should equal the interclass threshold value. The learning (weight matrix evolution) procedure of the neural model is developed from this condition and is a kind of unsupervised competitive learning. We use a three-layer neural network with binary weight synapses. The number of neurons in the first layer equals the number of gray levels of the image, and complex-valued inputs are used because the arguments of the second-layer outputs represent the centers of gravity of the classes. The third-layer neurons receive the argument outputs of the second layer and indicate when the optimum condition has been reached.
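For reference, the single-threshold special case of the interclass-variance criterion (the sequential computation that the neural model parallelizes) can be sketched directly from the histogram; at the optimum the threshold sits at the midpoint of the two class means:

```python
def otsu_threshold(hist):
    """Single-threshold maximization of between-class variance (sketch).

    hist[g] is the pixel count at gray level g. Returns the threshold t
    maximizing w0 * w1 * (mu0 - mu1)^2, which is proportional to the
    interclass variance.
    """
    total = sum(hist)
    grand_sum = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = m0 = 0.0                      # running class-0 weight and moment
    for t in range(len(hist) - 1):
        w0 += hist[t]
        m0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        mu0 = m0 / w0                  # center of gravity of class 0
        mu1 = (grand_sum - m0) / (total - w0)
        var = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The multithreshold case repeats this balance condition between every pair of neighboring classes, which is exactly the condition the network's learning rule enforces.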
This paper shows how fuzzy control techniques can be applied directly to the design of adaptive weighted mean filters. The proposed filters adjust their weights to adapt to the local image data, achieving maximum noise reduction in uniform areas while preserving image details. This work presents a new approach to image processing based on fuzzy rules.
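A minimal sketch of the idea, with an assumed triangular "similar to the center pixel" membership (the paper's actual rule base is not reproduced here): pixels resembling the window center keep full weight, while outliers are suppressed, so uniform areas are averaged and edges are left sharp.

```python
def fuzzy_weighted_mean(window, center, spread=20.0):
    """Fuzzy-rule weighted mean over one filter window (sketch).

    `spread` is an assumed membership width: a pixel differing from the
    center value by `spread` or more gets weight 0, an identical pixel
    gets weight 1, with linear falloff in between.
    """
    weights = [max(0.0, 1.0 - abs(x - center) / spread) for x in window]
    wsum = sum(weights)
    if wsum == 0.0:
        return center            # no similar neighbors: keep the pixel
    return sum(w * x for w, x in zip(weights, window)) / wsum
```

In a uniform region all weights are near 1 and the filter reduces to a plain mean; near an edge the pixels across the edge are down-weighted, which is the detail-preservation behavior the abstract describes.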
Multistage data-dependent center-weighted median (CWM) filters, which cascade several adaptive CWM filters based on local statistics, are presented in this paper for image restoration. The window shapes on different stages, oriented along the important correlation directions of the image, can differ; selecting a window shape for each stage provides additional design flexibility and improves filtering performance. One important merit of the method is that it removes noise in all regions of the image, including detail regions, while still preserving the details. In many cases the proposed filters outperform the corresponding 2D single-stage filters with the same window size. Computer simulations are provided to assess the performance of the proposed filters.
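The building block of such a cascade is the center-weighted median itself, which can be sketched as follows: the center sample is replicated `weight` times before taking the median, biasing the output toward the original pixel (weight 1 recovers the plain median).

```python
def cwm_filter(window, center_index, weight):
    """Center-weighted median over one window (sketch).

    Replicates the center sample `weight` times, then takes the median
    of the augmented sample set. Larger weights preserve more detail at
    the cost of weaker noise suppression; a multistage filter would
    apply this with different window shapes on each stage.
    """
    samples = list(window) + [window[center_index]] * (weight - 1)
    samples.sort()
    return samples[len(samples) // 2]
```

An adaptive variant would choose `weight` per pixel from local statistics; cascading stages with differently oriented windows then filters along each correlation direction in turn.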
We suggest efficient algorithms for determining central, reflection, and rotation symmetry measures of a convex polygon and for convex polygon decomposition. All the algorithms are based on Minkowski addition. A representation of convex polygons known as the perimetric measure is used to implement the algorithms.
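The common primitive, Minkowski addition of convex polygons, can be sketched by the standard edge-merging construction (this is a generic illustration, not the paper's perimetric-measure implementation): the boundary of P + Q is the sequence of all edge vectors of P and Q merged in angular order.

```python
import math

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons (sketch).

    P and Q are counterclockwise vertex lists. The sum's boundary is
    obtained by merging the edge vectors of both polygons in increasing
    angular order, starting from the sum of the bottom-most vertices.
    """
    def edges(V):
        return [(V[(i + 1) % len(V)][0] - V[i][0],
                 V[(i + 1) % len(V)][1] - V[i][1]) for i in range(len(V))]

    def bottom(V):
        return min(V, key=lambda v: (v[1], v[0]))

    es = edges(P) + edges(Q)
    es.sort(key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))
    x = bottom(P)[0] + bottom(Q)[0]
    y = bottom(P)[1] + bottom(Q)[1]
    out = []
    for dx, dy in es:
        out.append((x, y))
        x, y = x + dx, y + dy
    return out
```

Central symmetry, for instance, can then be measured by comparing P with its reflection −P through the sum P + (−P), which is centrally symmetric by construction.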
The mathematical definition of the skeleton as the locus of centers of maximal inscribed discs is not digitizable. The idea presented in this paper is to incorporate the skeleton information and the chain code of the contour into a single descriptor by associating with each point of a contour the center and radius of the maximal inscribed disc tangent at that point. This new descriptor is called a calypter. Encoding a calypter is a three-stage algorithm: (1) chain coding of the contour; (2) Euclidean distance transformation; (3) climbing the distance relief from each point of the contour toward the corresponding maximal inscribed disc center. Here we introduce an integer Euclidean distance transform called the holodisc distance transform. Its major interest is that it confers 8-connectivity on the isolevels of the generated distance relief, allowing a climbing algorithm to proceed step by step toward the centers of the maximal inscribed discs. The calypter has a cyclic structure that delivers high-speed access to the skeleton data. Its potential uses are in high-speed Euclidean mathematical morphology, shape processing, and analysis.
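Stages (2) and (3) can be sketched as follows, using a brute-force squared integer Euclidean distance transform (squared distances stay integral, in the spirit of the holodisc transform, though this is not its actual construction) and an 8-neighborhood steepest-ascent climb:

```python
def squared_edt(grid):
    """Brute-force squared Euclidean distance transform (sketch).

    grid[r][c] is 1 inside the shape, 0 outside. Returns, for each
    interior pixel, the squared distance to the nearest exterior pixel.
    """
    rows, cols = len(grid), len(grid[0])
    bg = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0]
    dist = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                dist[r][c] = min((r - br) ** 2 + (c - bc) ** 2
                                 for br, bc in bg)
    return dist

def climb(dist, start):
    """Climb the distance relief from a contour point toward the center
    of its maximal inscribed disc (8-neighborhood steepest ascent).
    Returns (center, squared_radius)."""
    r, c = start
    while True:
        best = max(((dist[r + dr][c + dc], (r + dr, c + dc))
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])),
                   key=lambda t: t[0])
        if best[0] <= dist[r][c]:
            return (r, c), dist[r][c]
        r, c = best[1]
```

Storing the `(center, radius)` pair returned by `climb` alongside each chain-code element is what fuses the contour and skeleton information into one cyclic descriptor.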
This paper discusses the investigation and modeling of human perception of pictorial visual information during landscape interpretation of remote images of the Earth's surface acquired from space and air. The concept of the iconic sign (an element of perceptive clusterization) is taken as the starting principle, and the perception process is modeled from a semiotic approach: information is divided into syntactic, semantic, and pragmatic aspects. Image syntax (construction) is determined by the spatial distribution of image brightness. An image can be described by a formal language consisting of an alphabet of structural elements (such as unreduced element, grain, contour, region), a range alphabet characterizing hierarchical ranges of relations between structural elements, and a set of substitution rules. Such an image representation describes image syntax in terms of hierarchical typical fragments, taking into account the construction of the image and the structure of relations corresponding to human perception. Different types of semantics are discussed, and the limits of the algorithmic, heuristic, and creative types are considered.