The region map is a sparse representation of a high-resolution synthetic aperture radar (SAR) image at the middle semantic level of its semantic space. Based on the semantic information of the region map, the high-resolution SAR image is divided into hybrid, structural, and homogeneous pixel subspaces, so SAR image segmentation can be decomposed into the segmentation of these three subspaces, among which the hybrid subspace is the most challenging because of its complex structures. The hybrid pixel subspace often contains many extremely inhomogeneous, nonconnected areas, and the key question is whether these areas belong to the same class or to different classes. To answer this question, a Bayesian learning model constrained by sketch characteristics, together with an initialization method, is proposed to construct a structural vector that reflects the essential features of each extremely inhomogeneous area. Unsupervised segmentation of the hybrid pixel subspace is then realized using the structural vectors of these areas. Theoretical analysis and experimental results show that hybrid pixel subspace segmentation based on the structural vectors learned by the proposed Bayesian model outperforms segmentation that relies on hand-designed features alone.
Multinomial logistic regression (MLR) is an effective classifier for spatial-spectral hyperspectral image (HSI) classification. However, in typical variants such as Gaussian-regularized MLR (GSMLR) and Laplacian-graph-regularized MLR (LPMLR), regressor learning requires solving a large (cd) × (cd) linear system that is prohibitive in both space and time complexity (c is the number of classes and d is the feature length). Even with medium-sized features, it often runs out of memory. To this end, we propose two exact divide-and-conquer (DC) algorithms, DC-GSMLR and DC-LPMLR, to reduce the computational complexity. Both decompose the regressor-learning problem into a series of equivalent smaller subproblems, each of which can be solved in closed form. Unlike existing approximate approaches, they provide exact merged solutions. With no loss of accuracy, DC-LPMLR and DC-GSMLR only need to solve c + 1 and 2 linear systems of size d × d, respectively, reducing peak memory usage by factors of roughly O(c) and O(c²/2). In terms of time, experiments on two popular HSI datasets show speedup ratios as high as one to two orders of magnitude, demonstrating practicability in real applications.
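Since the memory claim is purely arithmetic, a quick back-of-the-envelope check makes it concrete. The class count c and feature length d below are hypothetical, dense double-precision storage is assumed, and DC-LPMLR is assumed to hold all of its c + 1 subsystems at once:

```python
# Hypothetical sizes: 16 classes, 220-dimensional features (not from the paper).
c, d = 16, 220

full_system = (c * d) ** 2 * 8          # one (cd) x (cd) matrix, 8 bytes per entry
dc_gsmlr    = 2 * d ** 2 * 8            # DC-GSMLR: 2 systems of size d x d
dc_lpmlr    = (c + 1) * d ** 2 * 8      # DC-LPMLR: c + 1 systems of size d x d

print(f"full system: {full_system / 2**20:7.1f} MiB")
print(f"DC-GSMLR   : {dc_gsmlr / 2**20:7.1f} MiB  (~{full_system // dc_gsmlr}x smaller, i.e. ~c^2/2)")
print(f"DC-LPMLR   : {dc_lpmlr / 2**20:7.1f} MiB  (~{full_system // dc_lpmlr}x smaller, i.e. ~c)")
```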
In recent compressed sensing (CS)-based pan-sharpening algorithms, performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linearly weighted high-resolution multispectral (HRM) image, resulting in the loss of spatial and spectral information. The other is that the dictionary construction process depends on training samples that are not ground truth. These problems limit the applicability of CS-based pan-sharpening algorithms. To address them, we propose a pan-sharpening algorithm based on compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP image and the linearly weighted HRM image. Meanwhile, a multidictionary built from ridgelets and curvelets is learned for both stages of the superresolution reconstruction. Since ridgelets and curvelets better capture structural and directional characteristics, a better reconstruction result is obtained. Experiments are conducted on QuickBird and IKONOS satellite images, and the results indicate that the proposed algorithm is competitive with recent CS-based pan-sharpening methods and other well-known methods.
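CS-based pan-sharpening ultimately rests on a sparse-reconstruction step: each patch is approximated as a sparse combination of dictionary atoms. The sketch below shows only that generic step, using a plain orthogonal matching pursuit over a random dictionary; it is not the authors' two-stage multidictionary method, and the dictionary D, sparsity level k, and patch handling are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: find x with at most k nonzeros so that D @ x ≈ y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))          # atom most correlated with residual
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs               # refit on the support, update residual
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Toy usage: a random, column-normalized dictionary for 8x8 patches (64-dim signals).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
y = D[:, 3] * 1.0 - D[:, 50] * 0.5                          # a signal that is exactly 2-sparse
x = omp(D, y, k=2)
print(np.flatnonzero(np.abs(x) > 1e-8))                     # recovered support, typically [3, 50]
```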
Hyperspectral data record the spectral response of land covers across different spectral bands, and different band sets can be treated as different views of the land covers, each of which may contain different structural information. Therefore, a multiview-graph-ensemble-based graph embedding is proposed to improve the performance of graph embedding for hyperspectral image classification. By integrating multiview graphs, richer and more accurate structural information can be exploited in the embedding, yielding better results than traditional graph embedding methods. In addition, the multiview graph ensemble can be treated as a framework and extended to other graph-based methods. Experimental results demonstrate that the proposed method significantly improves the performance of traditional graph embedding methods.
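A minimal sketch of the multiview-graph idea follows: build one neighborhood graph per band subset, average the graphs, and embed with a standard Laplacian-eigenmap step. The band grouping, number of neighbors, and embedding dimension are illustrative choices, not the paper's, and simple averaging stands in for whatever ensemble weighting the method actually uses.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse import csgraph

def multiview_graph_embedding(X, n_views=4, k=10, dim=10):
    """X: (n_pixels, n_bands) hyperspectral samples; returns a (n_pixels, dim) embedding."""
    views = np.array_split(np.arange(X.shape[1]), n_views)       # contiguous band subsets as "views"
    W = sum(kneighbors_graph(X[:, v], k, mode="connectivity", include_self=False)
            for v in views) / n_views                            # ensemble graph = average of view graphs
    W = 0.5 * (W + W.T)                                          # symmetrize
    L = csgraph.laplacian(W, normed=True).toarray()
    vals, vecs = np.linalg.eigh(L)                               # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                                    # skip the trivial smallest eigenvector

# Toy usage with synthetic "pixels": 500 samples, 64 bands.
emb = multiview_graph_embedding(np.random.rand(500, 64))
print(emb.shape)                                                 # (500, 10)
```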
The Markov random field (MRF) model is an effective tool for polarimetric synthetic aperture radar (PolSAR) image classification. However, because conventional MRF methods lack suitable contextual information, there is usually a trade-off between edge preservation and region homogeneity in the classification result. To preserve edge details and obtain homogeneous regions simultaneously, an adaptive MRF framework based on a polarimetric sketch map is proposed. The polarimetric sketch map provides detailed edge positions and directions, which guide the selection of neighborhood structures. Specifically, the polarimetric sketch map is extracted to partition a PolSAR image into structural and nonstructural parts, and adaptive neighborhoods are then learned for the two parts. For structural areas, geometrically weighted neighborhood structures are constructed to preserve image details; for nonstructural areas, maximal homogeneous regions are obtained to improve region homogeneity. Experiments are conducted on both simulated and real PolSAR data, and the results show that the proposed method achieves better region homogeneity and edge preservation than state-of-the-art methods.
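To make the adaptive-neighborhood idea concrete, here is a minimal sketch of a single iterated-conditional-modes (ICM) pass in which each neighbor's contribution to the smoothness term is scaled by an edge-awareness weight. The edge_weight map stands in for the guidance a polarimetric sketch map would provide, and the unary costs, 4-neighborhood, and beta are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

def icm_pass(labels, unary, edge_weight, beta=1.0):
    """labels: (H, W) int labels; unary: (H, W, C) per-class data costs;
    edge_weight: (H, W) values in [0, 1], low on or near detected edges."""
    H, W, C = unary.shape
    new = labels.copy()
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]              # 4-neighborhood
    for i in range(H):
        for j in range(W):
            cost = unary[i, j].astype(float)
            for di, dj in offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    w = 0.5 * (edge_weight[i, j] + edge_weight[ni, nj])
                    # Potts smoothness penalty, damped across edges (small w)
                    cost += beta * w * (np.arange(C) != labels[ni, nj])
            new[i, j] = np.argmin(cost)                       # locally optimal label
    return new
```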
The goal of pan-sharpening is to obtain an image with both high spatial resolution and rich spectral information. However, the quality of the pan-sharpened image is seriously degraded by thin clouds. For a single image, filtering algorithms are widely used to remove clouds; they remove clouds effectively, but they also cause serious detail loss in the cloud-removed image. To solve this problem, a pan-sharpening algorithm that removes thin clouds via mask dodging and the nonsubsampled shift-invariant shearlet transform (NSST) is proposed. For low-resolution multispectral (LR MS) and high-resolution panchromatic images with thin clouds, a mask dodging method is used to remove the clouds. For the cloud-removed LR MS image, an adaptive principal component analysis transform is proposed to balance spectral information and spatial resolution in the pan-sharpened image. Since cloud removal causes detail loss, a weight matrix is designed to enhance the details of the cloud regions during pan-sharpening while leaving noncloud regions unchanged; the image details themselves are extracted by the NSST. Visual comparisons and evaluation metrics demonstrate that the proposed method better preserves spectral information and spatial resolution, especially for images with thin clouds.
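A minimal sketch of the weighted detail-injection step follows: spatial details from the panchromatic image are injected into the upsampled, cloud-removed MS bands with a larger gain inside former cloud regions, compensating the detail lost there during cloud removal. A Gaussian high-pass filter stands in for the NSST detail extraction, and the gains are illustrative constants, not the paper's weight matrix.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def inject_details(ms_lr, pan, cloud_mask, base_gain=1.0, cloud_gain=1.8):
    """ms_lr: (h, w, B) cloud-removed LR MS image; pan: (H, W) HR PAN image;
    cloud_mask: (H, W) bool, True where thin clouds were removed (assumes H, W are multiples of h, w)."""
    H, W = pan.shape
    scale = (H / ms_lr.shape[0], W / ms_lr.shape[1])
    ms_up = np.stack([zoom(ms_lr[..., b], scale, order=1)      # bilinear upsampling of each band
                      for b in range(ms_lr.shape[-1])], axis=-1)
    details = pan - gaussian_filter(pan, sigma=2.0)            # high-pass spatial details
    gain = np.where(cloud_mask, cloud_gain, base_gain)         # boost former cloud regions only
    return ms_up + gain[..., None] * details[..., None]
```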
How to choose an effective fusion framework and how to obtain effective fusion coefficients are key problems in image fusion. A novel image fusion scheme based on multiscale decomposition and directional filter banks (DFBs) is presented. First, contrast pyramid (CP) decomposition is applied to each original image. Then, DFBs are constructed to filter each decomposition level. Furthermore, an evolutionary computation method, the immune clonal selection (ICS) algorithm, is introduced to optimize the fusion coefficients for better fusion results. When the technique is applied to the fusion of infrared thermal and visible-light images, simulation results clearly demonstrate the superiority of the new approach. Fusion performance is evaluated by subjective inspection as well as objective performance measures. The experimental results show that the fusion scheme is effective and that the fused images are more suitable for further human or machine perception.
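The sketch below illustrates the overall shape of multiscale fusion with per-level coefficients: a Laplacian pyramid stands in for the contrast-pyramid/DFB decomposition, and the per-level weights alpha are fixed by hand here rather than optimized by immune clonal selection, so everything beyond the general structure is an assumption.

```python
import numpy as np
import cv2

def pyramid_fuse(a, b, alpha=(0.5, 0.6, 0.7, 0.8), levels=4):
    """a, b: float32 grayscale images of identical size (e.g., infrared and visible)."""
    def laplacian_pyramid(img):
        gp = [img]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
              for i in range(levels)]
        return lp + [gp[-1]]                                   # detail levels (fine to coarse) + base

    la, lb = laplacian_pyramid(a), laplacian_pyramid(b)
    fused = [w * x + (1 - w) * y for w, x, y in zip(alpha, la, lb)]  # per-level coefficients
    fused.append(0.5 * (la[-1] + lb[-1]))                      # average the coarse base level
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl   # reconstruct from coarse to fine
    return out
```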
A novel hybrid image retrieval algorithm, shape-based image retrieval with two angles, is proposed in this paper. The two angles are the direction angle of object edges and the relative angle between two lines of the object boundary approximated by a line pattern. The former is a pixel-level feature, while the latter is a line-level feature. These two features are effectively integrated in our algorithm; they capture both the detailed and the global information of objects and remain robust under translated, scaled, and rotated versions of the database images. Experimental results on an image database of 4000 pictures show that our algorithm achieves good performance.
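As an illustration of the pixel-level feature, the following is a minimal sketch of an edge-direction-angle histogram computed from Sobel gradients; the number of bins and the edge threshold are illustrative, and the line-level relative-angle feature is not sketched.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_direction_histogram(img, n_bins=36, edge_thresh=0.1):
    """img: 2-D grayscale array scaled to [0, 1]; returns a normalized direction-angle histogram."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)   # horizontal and vertical gradients
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                          # direction angle in [-pi, pi]
    edges = mag > edge_thresh * mag.max()             # keep strong edge pixels only
    hist, _ = np.histogram(ang[edges], bins=n_bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)                  # normalization removes dependence on image size
```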
Based on a study of two kinds of fractal dimension, the differential box-counting (DBC) dimension and the multifractal dimension (MFD), we first present a new 2-D histogram of fractal dimensions for images. We then propose a novel texture-based image retrieval algorithm, Image Retrieval using Fractal Dimensions (IRFD). Experiments are carried out on an image database of 400 color images, and the results show that our algorithm achieves good performance.
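A minimal sketch of the differential box-counting estimate, the first of the two fractal measures, is given below; the grid sizes and gray-level scaling follow the standard DBC formulation, while the multifractal dimension and the 2-D histogram construction are not sketched.

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """img: 2-D array of gray levels in [0, 255]; returns the DBC fractal dimension
    estimated over the top-left M x M region, M = min(img.shape)."""
    M = min(img.shape)
    G = 256                                       # number of gray levels
    log_inv_r, log_n = [], []
    for s in sizes:
        h = G * s / M                             # box height in gray levels for grid size s
        n_r = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                # number of boxes of height h needed to cover the block's intensity range
                n_r += int(block.max() // h) - int(block.min() // h) + 1
        log_inv_r.append(np.log(M / s))           # log(1/r), with r = s / M
        log_n.append(np.log(n_r))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)    # D is the slope of log N_r versus log(1/r)
    return slope
```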