This paper focuses on the classification of multichannel images. The proposed supervised Bayesian classification
method applied to histological (medical) optical images and to remote sensing (optical and synthetic aperture
radar) imagery consists of two steps. The first step introduces the joint statistical modeling of the coregistered
input images. For each class and each input channel, the class-conditional marginal probability density functions
are estimated by finite mixtures of well-chosen parametric families. For optical imagery, the normal distribution
is a well-known model. For radar imagery, we have selected generalized gamma, log-normal, Nakagami and
Weibull distributions. Next, the multivariate d-dimensional Clayton copula, where d can be interpreted as the
number of input channels, is applied to estimate multivariate joint class-conditional statistics. As a second step,
we plug the estimated joint probability density functions into a hierarchical Markovian model based on a quadtree
structure. Multiscale features are extracted by discrete wavelet transforms, or by using input multiresolution
data. To obtain the classification map, we integrate an exact estimator of the marginal posterior mode.
This paper addresses the problem of the classification of very high resolution (VHR) SAR amplitude images of
urban areas. The proposed supervised method combines a finite mixture technique to estimate class-conditional
probability density functions, Bayesian classification, and Markov random fields (MRFs). Textural features, such
as those extracted by the grey-level co-occurrence method, are also integrated in the technique, as they improve the discrimination of urban areas. Copulas are applied to estimate bivariate joint class-conditional
statistics, merging the marginal distributions of both textural and SAR amplitude features. The resulting joint
distribution estimates are plugged into a hidden MRF model, endowed with a modified Metropolis dynamics
scheme for energy minimization. Experimental results with COSMO-SkyMed and TerraSAR-X images point out
the accuracy of the proposed method, also as compared with previous contextual classifiers.
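The co-occurrence texture features mentioned above can be illustrated with a minimal sketch (the quantization level, pixel offset, and toy patches are illustrative choices, not the paper's settings): a normalized grey-level co-occurrence matrix for one offset, and one Haralick-style statistic derived from it.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for a single pixel offset."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def contrast(P):
    """Haralick contrast: mean squared grey-level difference of co-occurring pairs."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

flat = np.zeros((8, 8), dtype=int)      # homogeneous patch: zero contrast
stripes = np.tile([0, 3], (8, 4))       # strongly textured patch
```

A homogeneous patch yields zero contrast, while alternating stripes yield a high value, which is the kind of separation that helps discriminate built-up areas.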
In this paper we develop a novel classification approach for high and very high resolution polarimetric synthetic
aperture radar (SAR) amplitude images. This approach combines a Markov random field model for Bayesian image classification with a finite mixture technique for probability density function estimation. The finite mixture
modeling is done via a recently proposed dictionary-based stochastic expectation maximization approach for
SAR amplitude probability density function estimation. For modeling the joint distribution from marginals
corresponding to single polarimetric channels we employ copulas. The accuracy of the developed semiautomatic
supervised algorithm is validated in the application of wet soil classification on several high resolution SAR
images acquired by TerraSAR-X and COSMO-SkyMed.
In the context of remotely sensed data analysis, a crucial problem is represented by the need to develop accurate
models for the statistics of pixel intensities. In this work, we develop a parametric finite mixture model for
the statistics of pixel intensities in high resolution synthetic aperture radar (SAR) images. This method is
an extension of a previously existing method for lower-resolution images. The method integrates the stochastic
expectation maximization (SEM) scheme and the method of log-cumulants (MoLC) with an automatic technique
to select, for each mixture component, an optimal parametric model taken from a predefined dictionary of
parametric probability density functions (pdf). The proposed dictionary consists of eight state-of-the-art SAR-specific
pdfs: Nakagami, log-normal, generalized Gaussian Rayleigh, Heavy-tailed Rayleigh, Weibull, K-root,
Fisher and generalized Gamma. The designed scheme is endowed with a novel initialization procedure and an algorithm that automatically estimates the optimal number of mixture components. The experimental results
with a set of several high resolution COSMO-SkyMed images demonstrate the high accuracy of the designed
algorithm, both from the viewpoint of a visual comparison of the histograms, and from the viewpoint of
quantitative accuracy measures such as the correlation coefficient (above 99.5%). The method proves to be effective
on all the considered images, remaining accurate for multimodal and highly heterogeneous scenes.
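The method-of-log-cumulants step used by the scheme above can be sketched for its simplest dictionary entry. The following is an illustrative example (sample sizes and parameter values are hypothetical): the empirical log-cumulants are the Mellin-domain analogues of ordinary cumulants, and for the log-normal pdf the MoLC equations have a closed-form solution, since the first two log-cumulants equal the parameters directly; the other families require numerical inversion.

```python
import numpy as np

def log_cumulants(x):
    """First three empirical log-cumulants of positive-valued samples."""
    lx = np.log(x)
    k1 = lx.mean()
    c = lx - k1
    return k1, (c ** 2).mean(), (c ** 3).mean()

def fit_lognormal_molc(x):
    """MoLC for the log-normal pdf: k1 = mu, k2 = sigma^2 (closed form)."""
    k1, k2, _ = log_cumulants(x)
    return k1, np.sqrt(k2)

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=1.0, sigma=0.5, size=200_000)
mu_hat, sigma_hat = fit_lognormal_molc(samples)
```

On simulated log-normal amplitudes the estimates recover the generating parameters closely, which is the behaviour SEM relies on when fitting each mixture component.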
Land surface temperature (LST) and sea surface temperature (SST) are important quantities for many environmental
models, and remote sensing is a feasible and promising way to estimate them on a regional and global
scale. In order to estimate LST and SST from satellite data, many algorithms have been devised, most of which require a priori information about the surface and the atmosphere. However, the high variability of surface and
atmospheric parameters causes these traditional methods to produce significant estimation errors, thus making
their application on a global scale critical. A recently proposed approach involves the use of support vector
machines (SVMs). Based on satellite data and corresponding in-situ measurements, they generate an approximation
of the relation between them, which can be used subsequently to estimate unknown surface temperatures
for additional satellite data. Such a strategy requires the user to set several internal parameters.
In this paper a method is proposed for automatically setting these parameters to values that lead to minimum
estimation errors. This is achieved by minimizing a functional correlated with regression errors (i.e., the "span bound" upper bound on the leave-one-out error), which can be computed using only the training set, without the need for a further validation set. To minimize this functional, Powell's algorithm is used, because it is also applicable to nondifferentiable functions. Experimental results generated by the proposed method turn
out to be very similar to those obtained by cross-validation and by a grid search for the parameter configuration
yielding the best test-set accuracy, although with a dramatic reduction in the computational times.
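The tuning loop described above can be sketched as follows. This is only an illustration of the optimization step: the actual span bound requires a trained SVM, so a hypothetical stand-in error surface (deliberately kinked, hence nondifferentiable) is minimized over log-scaled parameters with SciPy's Powell method, which needs no gradients.

```python
import numpy as np
from scipy.optimize import minimize

def error_bound(log_params):
    """Hypothetical stand-in for the span-bound estimate as a function of
    log10(C) and log10(gamma); deliberately nondifferentiable (kinked)."""
    log_c, log_gamma = log_params
    return abs(log_c - 1.0) + abs(log_gamma + 2.0)

# Powell's method uses only line searches, so the kinks pose no problem.
result = minimize(error_bound, x0=np.zeros(2), method="Powell")
best_c, best_gamma = 10.0 ** result.x
```

A single Powell run replaces the grid of candidate (C, gamma) pairs evaluated by cross-validation, which is where the reported reduction in computational time comes from.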
This paper proposes an ensemble framework for the accurate classification of hyperspectral data. The usefulness of the method, designed to be a simple and robust supervised classification tool, is assessed on real data characterized by classes with very similar spectral responses and a limited amount of labeled ground-truth training samples. The method is inspired by the framework of the Random Forests method proposed
by Breiman (2001). The success of the method relies on the use of support vector machines (SVMs) as base
classifiers, the freedom of random selection of input features to create diversity in the ensemble, and the use of
the weighted majority voting scheme to combine classification results. Although not fully optimized, a simple
and feasible solution is adopted for tuning the SVM parameters of the base classifiers, aiming at its use in practical
applications. Moreover, the effect of an additional pre-processing module for an initial feature reduction is
investigated. Encouraging results suggest the proposed method as promising, in addition to being easy to
implement.
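Two of the ensemble ingredients named above, random feature subsets for diversity and weighted majority voting, can be sketched as follows (the SVM base classifiers are omitted and their hard predictions are taken as given; all sizes and weights are illustrative):

```python
import numpy as np

def random_subspaces(n_features, n_classifiers, subset_size, rng):
    """One random feature subset per base classifier, to create diversity."""
    return [rng.choice(n_features, size=subset_size, replace=False)
            for _ in range(n_classifiers)]

def weighted_vote(predictions, weights, n_classes):
    """predictions: (n_classifiers, n_samples) hard labels; weights: one per classifier.
    Each classifier adds its weight to the score of the class it predicts."""
    scores = np.zeros((n_classes, predictions.shape[1]))
    for pred, w in zip(predictions, weights):
        for k in range(n_classes):
            scores[k] += w * (pred == k)
    return scores.argmax(axis=0)

rng = np.random.default_rng(0)
subsets = random_subspaces(n_features=200, n_classifiers=5, subset_size=20, rng=rng)
preds = np.array([[0, 1], [0, 0], [1, 1]])   # 3 classifiers, 2 samples
fused = weighted_vote(preds, weights=[1.0, 1.0, 3.0], n_classes=2)
```

In the toy vote above, the third classifier's larger weight overrides the two unit-weight classifiers on the first sample.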
Change-detection methods represent powerful tools for monitoring the evolution of the state of the Earth's surface.
In order to optimize the accuracy of the change maps, a multiscale approach can be adopted, in which
observations at coarser and finer scales are jointly exploited. In this paper, a multiscale contextual unsupervised
change-detection method is proposed for optical images, which is based on discrete wavelet transforms and
Markov random fields. Wavelets are applied to the difference image to extract multiscale features and Markovian
data fusion is used to integrate both these features and the spatial contextual information in the change-detection
process. Expectation-maximization and Besag's algorithms are used to estimate the model parameters. Experiments
on real optical images point out the improved effectiveness of the method, as compared with single-scale
approaches.
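The multiscale feature extraction step can be illustrated with a minimal sketch (the Haar filter and toy difference image are illustrative; the Markovian fusion across scales is not shown): each level of the approximation pyramid averages non-overlapping 2x2 blocks of the previous level, yielding coarser-scale views of the difference image.

```python
import numpy as np

def haar_approx_pyramid(diff_img, levels):
    """Approximation (low-pass) subbands of a Haar wavelet decomposition:
    each level averages non-overlapping 2x2 blocks of the previous one."""
    pyr = [diff_img.astype(float)]
    for _ in range(levels):
        a = pyr[-1]
        pyr.append((a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return pyr

diff = np.arange(64, dtype=float).reshape(8, 8)   # toy difference image
pyramid = haar_approx_pyramid(diff, levels=2)
```

The block averaging halves each spatial dimension per level while preserving the mean of the difference values, so coarse scales suppress noise but retain large-area changes.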
The use of remotely sensed imagery for environmental monitoring naturally leads to operating with multitemporal images of the geographical area of interest. In order to generate thematic maps for all acquisition dates, an unsupervised classification algorithm is not effective, due to the lack of knowledge about the thematic classes. On the other hand, a detailed analysis of all the land-cover transitions is naturally accomplished in a completely supervised context, but the ground-data requirement involved by this approach is not realistic in the case of a short revisit time. An interesting trade-off is represented by the partially supervised approach, which exploits ground truth only for a subset of the acquisition dates. In this context, a multitemporal classification scheme has been proposed previously by the authors, which deals with a pair of images of the same area, assuming ground truth to be available only at the first date. In the present paper, several modifications to this system are proposed in order to automate it and to improve its detection performance. Specifically, a preprocessing algorithm is developed that addresses the problem of mismatches in the dynamics of images acquired at different times over the same area, by both automatically correcting strong dynamics differences and detecting cloud areas. In addition, the clustering procedures integrated in the system are fully automated by optimizing the selection of the numbers of clusters according to Bayesian estimates of the probability of correct classification. Experimental results on multitemporal Landsat-5 TM and ERS-1 SAR data are presented.
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on Synthetic Aperture Radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In the present paper, an
innovative parametric estimation methodology for SAR amplitude data is proposed, which takes into account the physical nature of the scattering phenomena generating a SAR image by adopting a generalized Gaussian (GG) model for the backscattering phenomena. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived, and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method of log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions, and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated using several real ERS-1, XSAR, E-SAR and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude probability density function better than several previously proposed parametric models for backscattering phenomena.
A general problem of supervised remotely sensed image classification assumes prior knowledge to be available for all thematic classes that are present in the considered data set. However, the training set representing this prior knowledge usually does not really describe all the land cover typologies in the image, and the generation of a complete training data set would be a time-consuming, difficult and expensive task. This problem may play a relevant role in remote sensing data analysis, since it affects the performance of supervised classifiers, which erroneously assign each sample drawn from an unknown class to one of the known classes. In the present paper, a classification strategy is proposed which allows the identification of samples of unknown classes, through the application of a suitable Bayesian decision rule. The proposed approach is based on support vector machines for the estimation of probability density functions and on a recursive procedure to generate prior probability estimates for both known and unknown classes. For experimental purposes, both a synthetic and a real data set are considered.
The analysis of hyperspectral data, due to their high spectral resolution, requires dealing with the curse of dimensionality. Many feature selection/extraction techniques have been developed, which map the hyperdimensional feature space into a lower-dimensional space, based on the optimization of a suitable criterion function. This paper studies the impact of several such techniques, and of the chosen criterion, on the accuracy of different supervised classifiers. The compared methods are the 'Sequential Forward Selection' (SFS), the 'Steepest Ascent' (SA), the 'Fast Constrained Search' (FCS), the 'Projection Pursuit' (PP) and the 'Decision Boundary Feature Extraction' (DBFE), while the considered criterion functions are standard interclass distance measures. SFS is well known for its conceptual and computational simplicity. SA provides more effective subsets of selected features at the price of a higher computational cost. DBFE is an effective transformation technique, usually applied after a preliminary feature-space reduction through PP. The experimental comparison is performed on an AVIRIS hyperspectral data set characterized by 220 spectral bands and nine ground cover classes. The computational time of each algorithm is also reported.
An unsupervised change detection problem can be viewed as a classification problem with only two classes, corresponding to the change and no-change areas, respectively. Due to its simplicity, image differencing represents a popular approach to change detection. It is based on the idea of generating a difference image that represents the modulus of the spectral change vector associated with each pixel in the study area. To separate the change and no-change classes in the difference image, a simple thresholding-based procedure can be applied. However, the selection of the best threshold value is not a trivial problem. In the present work, several simple thresholding methods are investigated and compared. The combination of the Expectation-Maximization algorithm with a thresholding method is also considered, with the aim of achieving a better estimation of the optimal threshold value. For experimental purposes, a study area affected by a forest fire is considered. Two Landsat TM images of the area, acquired before and after the event, are utilized to reveal the burned zones and to assess and compare the above-mentioned unsupervised change detection methods.
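The EM-plus-thresholding idea can be sketched as follows (initialization, iteration count, and the simulated difference values are illustrative assumptions): a two-component Gaussian mixture is fitted to the difference values by EM, and the threshold is placed where the weighted component densities cross between the two means.

```python
import numpy as np

def em_threshold(d, iters=50):
    """Two-component Gaussian EM on difference values; returns the decision
    threshold where the weighted component pdfs are equal between the means."""
    d = np.asarray(d, dtype=float)
    mu = np.percentile(d, [25.0, 75.0])
    var = np.full(2, d.var())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities from log densities (numerically stable)
        ll = (-0.5 * ((d[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
              + np.log(pi))
        r = np.exp(ll - ll.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / d.size
        mu = (r * d[:, None]).sum(axis=0) / nk
        var = (r * (d[:, None] - mu) ** 2).sum(axis=0) / nk
    # scan for the crossing of the two weighted densities between the means
    grid = np.linspace(mu.min(), mu.max(), 2001)
    dens = (pi / np.sqrt(2 * np.pi * var)
            * np.exp(-0.5 * (grid[:, None] - mu) ** 2 / var))
    return grid[np.argmin(np.abs(dens[:, 0] - dens[:, 1]))]

rng = np.random.default_rng(0)
no_change = rng.normal(0.0, 1.0, 5000)
change = rng.normal(10.0, 1.0, 5000)
t = em_threshold(np.concatenate([no_change, change]))
```

On the simulated data the crossing point lies midway between the two modes, which is the minimum-error threshold for equal priors and variances.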
Classifier fusion approaches are receiving increasing attention for their capability of improving classification performance. At present, the usual operation mechanism for classifier fusion is the “combination” of classifier outputs. Improvements in performance are related to the degree of “error diversity” among the combined classifiers. Unfortunately, in remote-sensing image recognition applications, it may be difficult to design an ensemble that exhibits a high degree of error diversity. Recently, some researchers have pointed out the potential of “dynamic classifier selection” (DCS) as an alternative operation mechanism. DCS techniques are based on a function that selects the most appropriate classifier for each input pattern. The assumption of uncorrelated errors is not necessary for DCS, because an “optimal” classifier selector always selects the most appropriate classifier for each test pattern. The potential of DCS has been motivated so far by experimental results on ensembles of classifiers trained using the same feature set. In this paper, we present an approach to multisensor remote-sensing image classification based on DCS. A selection function is presented, aimed at choosing among classifiers created using different feature sets. The experimental results obtained in the classification of remote-sensing images, and comparisons with different combination methods, are reported.
In order to apply the statistical approach to the classification of multisensor remote sensing data, one of the main problems lies in the estimation of the joint probability density functions (pdfs) f(X|ω_k) of the data vector X given each class ω_k, due to the difficulty of defining a common statistical model for such heterogeneous data. A possible solution is to adopt non-parametric approaches, which rely on the availability of training samples without any assumption about the statistical distributions involved. However, as the multisensor aspect generally involves numerous channels, small training sets make a direct implementation of non-parametric pdf estimation difficult. In this paper, the suitability of the concept of the dependence tree for the integration of multisensor information through pdf estimation is investigated. First, this concept, introduced by Chow and Liu, is used to approximate a pdf defined in an N-dimensional space by a product of N-1 pdfs defined in two-dimensional spaces, representing, in graph-theoretic terms, a tree of dependencies. For each land cover class, a dependence tree is generated by minimizing an appropriate closeness measure. Then, a non-parametric estimation of the second-order pdfs f(x_i|x_j, ω_k) is carried out through the Parzen approach, based on the implementation of two-dimensional Gaussian kernels. In this way, it is possible to reduce the complexity of the estimation, while capturing a significant part of the interdependence among variables. A comparative study with two other non-parametric multisensor data fusion methods, namely the Multilayer Perceptron (MLP) and K-nearest neighbors (K-nn) methods, is reported. Experimental results carried out on a multisensor (ATM and SAR) data set show the interesting performance of the fusion method based on dependence trees, with the advantage of a reduced computational cost with respect to the two other methods.
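The Chow-Liu construction described above can be sketched as follows (histogram binning, sample sizes, and the toy features are illustrative; the Parzen estimation stage is not shown): pairwise mutual information is estimated from 2-D histograms, and the dependence tree is the maximum-weight spanning tree over features, built here with Kruskal's algorithm and a union-find structure.

```python
import numpy as np
from itertools import combinations

def mutual_info(x, y, bins=16):
    """Mutual information of two features from a joint 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def chow_liu_tree(X):
    """Maximum-MI spanning tree over features (Kruskal + union-find):
    the N-dimensional pdf is approximated by N-1 bivariate factors."""
    n = X.shape[1]
    edges = sorted(((mutual_info(X[:, i], X[:, j]), i, j)
                    for i, j in combinations(range(n), 2)), reverse=True)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

rng = np.random.default_rng(0)
x0 = rng.normal(size=4000)
X = np.column_stack([x0, x0 + 0.1 * rng.normal(size=4000),
                     rng.normal(size=4000), rng.normal(size=4000)])
tree = chow_liu_tree(X)
```

On the toy data the strongly dependent feature pair is linked first, and the tree always has N-1 edges, matching the product of N-1 bivariate pdfs used in the approximation.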
A new sub-optimal search strategy suitable for feature selection in high-dimensional remote-sensing images (e.g. images acquired by hyperspectral sensors) is proposed. Such a strategy is based on a search for constrained local extremes in a discrete binary space. In particular, two different algorithms are presented that achieve a different trade-off between effectiveness of selected features and computational cost. The proposed algorithms are compared with the classical sequential forward selection (SFS) and sequential forward floating selection (SFFS) sub-optimal techniques: the first one is a simple but widely used technique; the second one is considered to be very effective for high-dimensional problems. Hyperspectral remote-sensing images acquired by the AVIRIS sensor are used for such comparisons. Experimental results point out the effectiveness of the presented algorithms.
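The SFS baseline against which the proposed algorithms are compared can be sketched in a few lines (the additive toy score is purely illustrative; real criteria such as interclass distances are not additive over features, which is exactly why greedy SFS is sub-optimal):

```python
def sequential_forward_selection(score, n_features, k):
    """Greedily add, at each step, the feature that most improves the criterion."""
    selected, remaining = [], set(range(n_features))
    while len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical per-feature separability scores (purely illustrative).
weights = [0.1, 0.9, 0.5, 0.3, 0.7]
chosen = sequential_forward_selection(lambda s: sum(weights[f] for f in s),
                                      n_features=5, k=3)
```

With an additive score the greedy choice is optimal, so the three highest-weight features are selected in decreasing order of weight.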
The acquisition of data from satellites is of great interest because of the importance that the recognition of these data has in different application environments, such as geology, hydrology, town planning, and the observation of agricultural sites or forests. This paper addresses the problem of the statistical classification of SAR images. To this end, different methods have been proposed in the literature, such as the K-NN or the maximum likelihood. They yield classification maps quickly, but the accuracy obtained is often not satisfactory enough. The reason is that those methods do not fully exploit the spatial correlation information, because they use classical features that do not capture this property. Moreover, the classical approaches make use of a fixed set of features, which does not allow optimal classification. This fact is even more evident for SAR images, where classes overlap. In this paper, the use of classical features such as the sample mean and the sample variance, which exploit the spatial correlation property, is shown within a statistical image classification framework. The parametric feature estimators, together with a brief description of the developed classification algorithm, are presented. Throughout the paper the usual hypothesis of independent samples is not applied, due to the strong texture characteristics of SAR images. Finally, a section containing the results of classification tests, followed by a short discussion, can be found.
The k-NN rules and their modifications usually offer very good performance. The main disadvantage of the k-NN rules is the necessity of keeping the reference set (i.e. the training set) in the computer memory. Numerous algorithms for reference set reduction have already been created. They concern the 1-NN rule and are based on the consistency idea. The 1-NN rule operating with a consistent reduced set classifies correctly, by virtue of consistency, all objects from the original reference set. A quite different approach, based on partitioning the reference set into subsets, was proposed earlier by the present authors. The gravity centers of the subsets form the reduced reference set. The paper compares the effectiveness of the two approaches mentioned above. Ten experiments with real remote sensing data are presented to show the superiority of the approach based on the reference set partitioning idea.
In this paper an algorithm to detect changes in multispectral and multitemporal remote-sensing images is presented. Such an algorithm makes it possible to reduce the effects of 'registration noise' on the accuracy of change detection. In addition, it can be used to reduce the number of typologies of detected changes in order to better point out the changes under investigation.
The k-NN rules and their modifications usually offer very good performance. The main disadvantage of the k-NN rules is the necessity of keeping the reference set (i.e. the training set) in the computer memory. In the present paper a method is proposed to reduce the size of the reference set without decreasing the classification quality. Ten different experiments with very large real data sets were performed to check the effectiveness of the new approach. Each experiment involved 5 classes, 15 features, 2440 objects in the training set and 6399 objects in the testing set. The obtained results show that the decision rule based on the condensed reference set can offer even better classification quality than the one derived from the original data set.
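The partition-and-gravity-centers idea behind such condensation can be sketched as follows (the per-class prototype count, iteration count, and synthetic clusters are illustrative assumptions): within each class, a small k-means partition replaces the class samples by centroid prototypes, shrinking the k-NN reference set.

```python
import numpy as np

def condense_reference_set(X, y, per_class=3, iters=20, seed=0):
    """Replace each class's samples by the gravity centers of a k-means
    partition, shrinking the k-NN reference set."""
    rng = np.random.default_rng(seed)
    Xr, yr = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(per_class, len(Xc))
        centers = Xc[rng.choice(len(Xc), size=k, replace=False)].copy()
        for _ in range(iters):
            labels = np.argmin(((Xc[:, None, :] - centers) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = Xc[labels == j].mean(axis=0)
        Xr.append(centers)
        yr.extend([c] * k)
    return np.vstack(Xr), np.array(yr)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (200, 2)), rng.normal(5, 0.5, (200, 2))])
y = np.repeat([0, 1], 200)
Xr, yr = condense_reference_set(X, y)
```

Here 400 reference objects shrink to 6 prototypes, and a 1-NN rule over the prototypes still labels points near either cluster correctly.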
In this paper the problem of detecting land cover changes by using multitemporal remote sensing images is addressed. An approach aimed at explicitly identifying what kind of land cover transition has actually taken place in an area is proposed. This approach is based on the compound classification of multitemporal images. In particular, a simple model to represent the transition probabilities is exploited to strongly simplify the compound classification task. The effectiveness of the proposed approach is confirmed by experimental results obtained by using remote sensing images containing simulated land cover transitions.
A parallel network of modified 1-NN classifiers and fuzzy k-NN classifiers is proposed. All the component classifiers decide between two classes only. They operate as follows. For each class i a certain area Ai is constructed. If the classified point lies outside every area Ai, the classification is refused. When it belongs to only one of the areas Ai, the classification is performed by the 1-NN rule. Points that lie in an overlapping region of several areas Ai are classified by the fuzzy k-NN rule with hard (nonfuzzy) output. Two feature selection sessions are recommended: one to minimise the size of the overlapping areas, another to minimise the error rate of the fuzzy k-NN rule. The aim of this work is to create a classifier that is nearly as fast as the 1-NN rule and whose performance is as good as that of the fuzzy k-NN rule. The effectiveness of the proposed approach was verified on a real data set containing 5 classes, 15 features and 2440 objects.
A great number of parameters can be derived from the original bands of multispectral remotely sensed images. In particular, for classification purposes it is important to select those parameters that allow the classes of interest to be well separated in the feature space. In fact, both classification accuracy and computational efficiency rely on the set of features used. Unfortunately, as spectral responses are strongly influenced by various environmental factors (e.g., atmospheric interference and non-homogeneous sunshine distribution), the derived parameters depend not only on the considered classes but also on the peculiar characteristics of the analyzed images. Even if many studies have been carried out both to identify more stable parameters and to correct images, the problem is still open. It cannot be solved a priori on the basis of the ground classes alone; an ad-hoc selection is required for each image to be classified. In the literature, several feature-selection criteria have been proposed. In this paper, a critical review of different techniques to accomplish feature selection for remote-sensing classification problems is presented. To preserve the physical meaning of the selected features, only criteria that do not transform the feature space are considered. Most of such criteria were originally defined to evaluate the separability between pairs of classes. A formal extension of these techniques, based on statistical theory, to face also multiclass cases is considered and compared with traditional heuristic extensions. Finally, with the aim of giving a good approximation of the Bayes error probability, a new feature-selection criterion is proposed. Preliminary tests carried out on a multispectral data set demonstrate its potential.
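One standard pairwise separability criterion of the kind reviewed above is the Jeffries-Matusita distance, derived from the Bhattacharyya distance under Gaussian class models. The sketch below is illustrative (the class means and covariances are toy values); a common multiclass extension averages the pairwise values weighted by the class priors.

```python
import numpy as np

def bhattacharyya(m1, S1, m2, S2):
    """Bhattacharyya distance between two Gaussian class models."""
    S = (S1 + S2) / 2.0
    dm = m2 - m1
    term1 = dm @ np.linalg.solve(S, dm) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(S)
                         / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

def jeffries_matusita(m1, S1, m2, S2):
    """JM distance: saturates at 2, mimicking Bayes-error behaviour for
    well-separated classes."""
    return 2.0 * (1.0 - np.exp(-bhattacharyya(m1, S1, m2, S2)))

I2 = np.eye(2)
jm_same = jeffries_matusita(np.zeros(2), I2, np.zeros(2), I2)
jm_far = jeffries_matusita(np.zeros(2), I2, np.full(2, 10.0), I2)
```

Identical class models yield a JM distance of 0, while well-separated classes approach the saturation value of 2, which is the bounded behaviour that makes JM preferable to the unbounded Bhattacharyya distance as a selection criterion.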
A highly informative content makes visible and infrared images the most used remotely sensed data (generally speaking) in earth resource and environmental analysis. On the other hand, sensitivity to surface roughness and water content, and independence of weather conditions and sunlight, are the features that justify the growing interest in and use of microwave radar data. The previous considerations clearly indicate data fusion as a key point for remote-sensing image classification. In this paper, a knowledge-based system to exploit such numerous and diverse sources of information is proposed. The authors started with the problem of fusing Landsat-MSS and Seasat-SAR images for terrain classification, in order to increase the reliability of the results with respect to single-sensor analysis. A new approach to the fusion of 2-D images, called the ''region overlapping'' technique, is employed, and its advantages for terrain classification are shown. Experimental results are presented and discussed to show the merits of the approach.
Arturo de Salabert, Timothy Pike, F. Sawyer, I. Jones-Parry, A. Rye, Clare Oddy, D. Johnson, David Mason, A. Wielogorski, T. Plassard, Sebastiano Serpico, N. Hindley
As the number of remote sensing sensors and the volume of data they collect increase, it becomes more important that systems for the automatic analysis of images are developed. The MuSIP project has developed a proof-of-concept software demonstrator for the fusion and analysis of remotely sensed images within a knowledge-based environment. The target application is the monitoring of forestry using a test dataset comprising optical and radar images spanning a 12 year period. The image data will be supplemented by other spatial information including digital map and geographic data as available. The MuSIP system is being implemented on a SUN-4 workstation with selected low level algorithms accelerated by a transputer based processor. The first phase of the MuSIP project was a 2 year (30 man year) ESPRIT project completed in February 1991.
In the last few years, knowledge-based systems devoted to image understanding have proved to be very interesting applications of artificial intelligence to real-life domains. The purpose of this paper is to present the control strategies and knowledge representation supporting backtracking techniques for error handling in this kind of system. Since neither low-level image processing nor high-level interpretation can be completely free from errors, the capability of detecting and correcting errors is of prime importance for obtaining high performance and reliability. Some examples of errors at both levels are considered, and a mechanism to solve them is proposed. Experimental results in a medical application are presented and discussed.