JEI Letters

Pan-sharpening method via ARSIS concept under image super-resolution scheme

Author Affiliations
Fan Liu, Licheng Jiao, Shuyuan Yang

Xidian University, Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education of China, Xi’an 710071, China

J. Electron. Imaging. 21(3), 030501 (Sep 25, 2012). doi:10.1117/1.JEI.21.3.030501
History: Received April 6, 2011; Revised August 8, 2012; Accepted August 15, 2012
Open Access

Abstract. We present a novel fusion method to improve the spatial resolution of multispectral (MS) images, in which the fused images integrate the spectral information of the original MS images with the spatial details of the panchromatic (PAN) image. Band by band, a high-resolution spectral image is reconstructed from the original spectral image by taking advantage of super-resolution technology. A pan-sharpening method based on the Amélioration de la Résolution Spatiale par Injection de Structures concept is then applied to obtain the fused images from the reconstructed spectral images and the PAN image. The performance of the proposed method has been evaluated on public QuickBird optical satellite images. Experimental results show that the fused spectral images preserve the high-resolution spatial details of the PAN image while retaining the spectral characteristics of the original spectral images.


Satellite images provide a large amount of information over a large geographic region. The volume of data to be transmitted is a critical problem, especially over wireless links. Organizing the data as low-resolution spectral images plus a high-resolution panchromatic image reduces the transmission load. However, some problems remain to be solved. One is that the resolution and image size of the source multispectral (MS) images and the panchromatic (PAN) image differ; for example, the resolutions of the MS and PAN images captured by QuickBird are 2.44 m and 0.61 m, respectively. Another is that the MS and PAN data sets may present some local dissimilarities.1

Pan-sharpening methods pursue a high-resolution MS image for applications such as classification, cartography, and spectral analysis.2 Recent approaches to pan-sharpening include component substitution,3 relative spectral contribution,4 Amélioration de la Résolution Spatiale par Injection de Structures (ARSIS),4,5 and some hybrid methods.6 Although most existing methods can improve the resolution of the spectral images, the image size remains smaller than that of the PAN image. Even when the MS images are simply interpolated to the same size as the PAN image, the local dissimilarities are hardly reduced.

In this letter, the fusion procedure is divided into two steps: single-image reconstruction and spectral fusion, which integrates each reconstructed spectral image with the PAN image. To address the first problem, an image super-resolution scheme is introduced; it constructs a high-resolution image from a single low-resolution spectral image.7 This high-resolution image retains the inherent spectral characteristics of the source low-resolution MS image, thereby avoiding spectral distortion. To overcome the local dissimilarities between the source MS and PAN images, we adopt the ARSIS concept1 to achieve the desired fusion.

The original MS images reflect different spectral responses, and these images can be approximated as independent of each other. The super-resolution scheme7 is an effective way to improve the resolution of a single image. A sparse-representation-based method7 is used to reconstruct a high-resolution image from each original spectral image, based on the premise that the low- and high-resolution images share similar sparse representations.

Let $B_i$ $(i=1,\ldots,4)$ denote one of the original MS images. For each $w \times w$ patch $b_{ij}$ of $B_i$, extracted with a certain overlap, the sparse representation is obtained from the optimization problem

$$\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad \|F D_l \alpha - F b_{ij}\|_2^2 \le \varepsilon_1, \quad \|G D_h \alpha - \omega\|_2^2 \le \varepsilon_2, \tag{1}$$
where $D_l$ and $D_h$ are dictionaries jointly trained on low- and high-resolution patch pairs so that corresponding patches share the same sparse representation. In Eq. (1), $F$ is a feature extraction operator, such as a high-pass filter; it ensures that the sparse coefficients fit the original low-resolution image patch effectively and also provide an accurate prediction of the reconstructed high-resolution image patch. $G$ extracts the overlap region between the target patch and the already-reconstructed part of the spectral image, and $\omega$ contains the values of the reconstructed spectral image in this overlap region. The optimization problem can be reformulated as
$$\min_{\alpha} \|\tilde{D}\alpha - \tilde{b}_{ij}\|_2^2 + \lambda\|\alpha\|_1, \tag{2}$$
where $\tilde{D} = \begin{bmatrix} F D_l \\ \beta G D_h \end{bmatrix}$ and $\tilde{b}_{ij} = \begin{bmatrix} F b_{ij} \\ \beta\omega \end{bmatrix}$, and $\beta$ is a tradeoff parameter that can simply be set to 1. Using the optimal solution $\alpha^*$ of each patch and the dictionary $D_h$, the high-resolution patches are computed as $R_{ij} = D_h \alpha^*$. The reconstructed spectral image $R_i$ is then assembled from these patches.
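The per-patch optimization in Eq. (2) is an $\ell_1$-regularized least-squares (lasso) problem. As an illustrative sketch (not the authors' implementation), it can be solved with a simple iterative soft-thresholding (ISTA) loop; the dictionaries and patch below are random placeholders rather than trained ones:

```python
import numpy as np

def ista(D, b, lam=0.1, n_iter=200):
    """Solve min_a ||D a - b||_2^2 + lam * ||a||_1 by iterative
    soft-thresholding (proximal gradient descent)."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - b)    # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
Dl = rng.standard_normal((16, 256))   # placeholder low-res dictionary (4x4 patches)
Dh = rng.standard_normal((64, 256))   # placeholder high-res dictionary (8x8 patches)
b_low = rng.standard_normal(16)       # one vectorized low-resolution patch

alpha = ista(Dl, b_low, lam=0.05)     # sparse code of the low-res patch
patch_high = Dh @ alpha               # high-res patch via R_ij = Dh * alpha
```

In the full method the vector fed to the solver also stacks the feature-filtered patch and the overlap constraint, as in Eq. (2); the sketch keeps only the core lasso step.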

In the reconstruction procedure, the dictionaries $D_l$ and $D_h$ are the two most critical components. Yang et al.8 provided a joint dictionary training method based on the single-dictionary training method.9 The single dictionary is defined as the optimal solution of

$$D = \arg\min_{D,Z} \|I - DZ\|_2^2 + \lambda\|Z\|_1 \quad \text{s.t.} \quad \|d_i\|_2^2 \le 1, \quad i=1,2,\ldots,K, \tag{3}$$
where $I$ is the matrix of training patch samples and $d_i$ denotes the $i$th column (atom) of $D$.

Suppose the training patch pairs are $P_t = \{X, x\}$, where $X$ and $x$ are the high- and low-resolution training patches in vector form, each with $n$ samples, and $N$ and $M$ are the dimensions of the high- and low-resolution patch vectors, respectively. The training of $D_l$ and $D_h$ can be formulated as the combination of two coupled objectives:

$$\min_{\{D_h, D_l, Z\}} \frac{1}{N}\|X - D_h Z\|_2^2 + \frac{1}{M}\|x - D_l Z\|_2^2 + \lambda\left(\frac{1}{N} + \frac{1}{M}\right)\|Z\|_1. \tag{4}$$
It can be simplified to a single-dictionary training problem as follows Display Formula
$$\min_{\{D_c, Z\}} \|X_c - D_c Z\|_2^2 + \hat{\lambda}\|Z\|_1, \tag{5}$$
where
$$X_c = \begin{bmatrix} \frac{1}{\sqrt{N}} X \\ \frac{1}{\sqrt{M}} x \end{bmatrix}, \quad D_c = \begin{bmatrix} \frac{1}{\sqrt{N}} D_h \\ \frac{1}{\sqrt{M}} D_l \end{bmatrix}. \tag{6}$$
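The reduction in Eqs. (5) and (6) amounts to stacking the weighted high- and low-resolution patch matrices and feeding the result to any single-dictionary learner. A minimal sketch of the stacking and un-stacking, with random placeholders for the patches and a random stand-in for the trained dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K, n = 64, 16, 128, 500   # high-res dim, low-res dim, atoms, samples

X = rng.standard_normal((N, n))   # high-resolution training patches (columns)
x = rng.standard_normal((M, n))   # corresponding low-resolution patches

# Eq. (6): stack the two patch sets with 1/sqrt(dim) weights so the
# coupled problem of Eq. (4) becomes the single-dictionary problem (5).
Xc = np.vstack([X / np.sqrt(N), x / np.sqrt(M)])

# A single-dictionary learner (e.g. Ref. 9) applied to Xc would yield Dc;
# here a random stand-in shows how Dh and Dl are recovered by splitting
# its rows and undoing the same weights.
Dc = rng.standard_normal((N + M, K))
Dh = np.sqrt(N) * Dc[:N, :]
Dl = np.sqrt(M) * Dc[N:, :]
```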

Compared with the original spectral image, the reconstructed one has higher resolution, and its image size is the same as that of the PAN image. However, the PAN image still contains many spatial details that cannot be statistically estimated from a single low-resolution spectral image. The pan-sharpening step achieves the fusion between the spectral images and the PAN image, and the ARSIS-based method1 is well suited to this problem.

The ARSIS-based method is a multiscale method that replaces the detail sub-bands of the spectral image with those of the PAN image, where the sub-bands are extracted by a multiscale transform such as a wavelet transform. For the fusion between the reconstructed spectral images and the PAN image, the undecimated wavelet transform (UWT)10 is selected as the sub-band extractor; it decomposes an image into an approximation sub-band and a series of detail sub-bands. For the UWT filter bank $(h, g)$, the coefficients at the next coarser resolution $c_{j+1}$ are obtained from $c_j$ using the à trous algorithm11 as

$$\begin{aligned}
c_{j+1}[k,l] &= (\bar{h}^{(j)}\bar{h}^{(j)} * c_j)[k,l],\\
\omega^1_{j+1}[k,l] &= (\bar{g}^{(j)}\bar{h}^{(j)} * c_j)[k,l],\\
\omega^2_{j+1}[k,l] &= (\bar{h}^{(j)}\bar{g}^{(j)} * c_j)[k,l],\\
\omega^3_{j+1}[k,l] &= (\bar{g}^{(j)}\bar{g}^{(j)} * c_j)[k,l].
\end{aligned} \tag{7}$$
If $l/2^j$ is an integer, $h^{(j)}[l] = h[l]$; otherwise it equals 0. In our method, we use the filter bank $h = [1, 4, 6, 4, 1]/16$ and $g = \delta - h$ to decompose the PAN and reconstructed spectral images. Let $P$ denote the PAN image, with sub-band coefficients $P_s = \{\omega_1^1, \omega_1^2, \omega_1^3, \ldots, \omega_J^1, \omega_J^2, \omega_J^3, c_J\}$, where $\{\omega_j^1, \omega_j^2, \omega_j^3\}$ are the UWT coefficients at scale $j$ and $c_J$ is the approximation at the coarsest resolution. The UWT coefficients of $R_i$ are $R_{is} = \{\omega_{i1}^1, \omega_{i1}^2, \omega_{i1}^3, \ldots, \omega_{iJ}^1, \omega_{iJ}^2, \omega_{iJ}^3, c_{iJ}\}$. Since $\{\omega_j^1, \omega_j^2, \omega_j^3\}$ reflect the spatial details at scale $j$ more precisely than $\{\omega_{ij}^1, \omega_{ij}^2, \omega_{ij}^3\}$, the high-frequency coefficients of $R_i$ are replaced with those of $P$ to form $R_{is}^F = \{\omega_1^1, \omega_1^2, \omega_1^3, \ldots, \omega_J^1, \omega_J^2, \omega_J^3, c_{iJ}\}$. Given the definition of $g$, $c_j$ can be reconstructed from $c_{j+1}$ and $\{\omega_{j+1}^1, \omega_{j+1}^2, \omega_{j+1}^3\}$ according to
$$c_j = c_{j+1} + \omega^1_{j+1} + \omega^2_{j+1} + \omega^3_{j+1}. \tag{8}$$
The fused spectral image $F_i$ is obtained from $R_{is}^F$ by the inverse UWT as
$$F_i = c_{iJ} + \sum_{j=1}^{J}\sum_{k=1}^{3} \omega^k_j. \tag{9}$$
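The decomposition, detail substitution, and summation-based reconstruction of Eqs. (7)-(9) can be sketched as follows, assuming the stated filter bank $h = [1,4,6,4,1]/16$, $g = \delta - h$, and periodic boundary handling (a common but here assumed choice):

```python
import numpy as np

H_TAPS = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # filter h; g = delta - h

def h_filter(c, j, axis):
    """Apply the upsampled filter h^(j) along one axis: taps spaced
    2**j apart, periodic boundaries."""
    out = np.zeros_like(c)
    for t, w in zip(range(-2, 3), H_TAPS):
        out += w * np.roll(c, t * 2 ** j, axis=axis)
    return out

def uwt_decompose(img, J):
    """Eq. (7): coarsest approximation c_J plus three detail bands per
    scale. With g = delta - h, every detail band is a difference of
    h-filtered images."""
    c = img.astype(float)
    details = []
    for j in range(J):
        h0 = h_filter(c, j, axis=0)          # h along rows
        h1 = h_filter(c, j, axis=1)          # h along columns
        c_next = h_filter(h0, j, axis=1)     # h h * c
        w1 = h1 - c_next                     # g h * c
        w2 = h0 - c_next                     # h g * c
        w3 = c - h0 - h1 + c_next            # g g * c
        details.append((w1, w2, w3))
        c = c_next
    return c, details

def arsis_fuse(pan, band, J=3):
    """Substitute the detail bands of the spectral image with those of
    the PAN image, then reconstruct by plain summation, Eqs. (8)-(9)."""
    _, pan_details = uwt_decompose(pan, J)
    c_J, _ = uwt_decompose(band, J)
    return c_J + sum(w1 + w2 + w3 for w1, w2, w3 in pan_details)

rng = np.random.default_rng(2)
pan = rng.standard_normal((32, 32))
band = rng.standard_normal((32, 32))
fused = arsis_fuse(pan, band, J=2)
```

Because the four filtered bands at each level sum exactly to the input (Eq. (8)), summing the approximation and all detail bands reconstructs the image exactly, which is why the inverse in Eq. (9) is a plain summation.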

Because of the energy difference between the MS images and the PAN image, histogram matching is needed before the fusion step. The gray levels of each $R_i$ are normalized to $[0, 1]$ as

$$R_i = (R_i - R_{i,\min})/(R_{i,\max} - R_{i,\min}), \tag{10}$$
where $R_{i,\max}$ and $R_{i,\min}$ are the maximum and minimum values in $R_i$, respectively. The gray levels of $P$ are then normalized so that its histogram matches that of $R_i$.
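Equation (10) is a per-band min-max rescaling; the subsequent matching of $P$ to each $R_i$ is not detailed in the letter, so the sketch below uses a standard rank-based histogram match as one plausible realization:

```python
import numpy as np

def minmax_normalize(R):
    """Eq. (10): rescale a band to [0, 1]."""
    return (R - R.min()) / (R.max() - R.min())

def histogram_match(P, R):
    """Remap the gray levels of P so its histogram follows that of R.
    P and R must have the same number of pixels, which holds here since
    each reconstructed band has the size of the PAN image."""
    order = np.argsort(P.ravel())          # rank of every PAN pixel
    matched = np.empty(P.size, dtype=float)
    matched[order] = np.sort(R.ravel())    # assign R's values by rank
    return matched.reshape(P.shape)

rng = np.random.default_rng(3)
Ri = minmax_normalize(rng.standard_normal((64, 64)))  # normalized band
P = rng.standard_normal((64, 64))                      # raw PAN image
P_matched = histogram_match(P, Ri)                     # PAN after matching
```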

Three quality indices are used to evaluate the performance of the proposed method: the global quality index Q4 (ideal value 1),2 the erreur relative globale adimensionnelle de synthèse (ERGAS),5 and the spectral angle mapper (SAM).6 The lower the ERGAS and SAM values, the better the spectral quality of the fused image.
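Standard definitions of SAM (mean per-pixel spectral angle) and ERGAS can be sketched as follows; the exact variants used in the letter may differ in normalization details, and Q4 (a quaternion-based index) is omitted here:

```python
import numpy as np

def sam(ref, fused):
    """Mean spectral angle in degrees; arrays shaped (bands, H, W)."""
    a = ref.reshape(ref.shape[0], -1)
    b = fused.reshape(fused.shape[0], -1)
    cos = (a * b).sum(0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)).mean())

def ergas(ref, fused, ratio=4.0):
    """ERGAS = 100 * (1/ratio) * sqrt(mean_k (RMSE_k / mu_k)^2), where
    ratio is the MS-to-PAN resolution ratio (4 for QuickBird: 2.44/0.61)."""
    terms = [
        (np.sqrt(np.mean((ref[k] - fused[k]) ** 2)) / ref[k].mean()) ** 2
        for k in range(ref.shape[0])
    ]
    return 100.0 / ratio * np.sqrt(np.mean(terms))

rng = np.random.default_rng(4)
ref = rng.random((4, 16, 16)) + 0.5                   # synthetic 4-band reference
fused = ref + 0.01 * rng.standard_normal(ref.shape)   # slightly perturbed result
```

Both indices are zero for a perfect match and grow with spectral distortion, consistent with "lower is better" above.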

We compare our method on a QuickBird image with two other methods.3,12 In Fig. 1, color distortion is visible in the regions marked by red circles. Compared with the two reference methods [Fig. 1(b) and 1(c)], the proposed method [Fig. 1(d)] shows less color distortion. The spectral appearance of the fusion result in Fig. 1(d) is close to that of the source in Fig. 1(a), while Fig. 2 and Table 1 show that our method captures more detailed information, with a smoother histogram distribution across all spectral components.

Fig. 1: Results of different methods: (a) source MS image (L-MS); (b) IHS; (c) NSCT; and (d) the proposed method.

Fig. 2: The R, G, and B component histograms of each fusion result.

Table 1: Quantitative assessment of different methods.

In this letter, we decompose the fusion procedure for satellite images into two relatively independent stages. The final fused spectral images integrate the spectral information of the original MS images with the spatial details of the PAN image. However, the sparse representation is applied only to the original MS images; studying a joint sparse representation of the MS images and the PAN image may further reveal the inherent relationship between them.

This work was supported by the National High Technology Research and Development Program (863 Program) of China (Nos. 2008AA01Z125 and 2009AA12Z210) and the China Postdoctoral Science Foundation.

References

1. L. Alparone et al., "Comparison of pansharpening algorithms: outcome of the 2006 GRS-S data-fusion contest," IEEE Trans. Geosci. Remote Sens. 45(10), 3012–3021 (2007).
2. L. Alparone et al., "Landsat ETM+ and SAR image fusion based on generalized intensity modulation," IEEE Trans. Geosci. Remote Sens. 42(12), 2832–2839 (2004).
3. J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," Int. J. Remote Sens. 19(4), 743–757 (1998).
4. M. Choi et al., "Fusion of multispectral and panchromatic satellite images using the curvelet transform," IEEE Geosci. Remote Sens. Lett. 2(2), 136–140 (2005).
5. S. Zheng et al., "Remote sensing image fusion using multiscale mapped LS-SVM," IEEE Trans. Geosci. Remote Sens. 46(5), 1313–1322 (2008).
6. C. Thomas et al., "Synthesis of multispectral images to high spatial resolution: a critical review of fusion methods based on remote sensing physics," IEEE Trans. Geosci. Remote Sens. 46(5), 1301–1312 (2008).
7. J. C. Yang et al., "Image super-resolution via sparse representation," IEEE Trans. Image Process. 19(11), 2861–2873 (2010).
8. J. Yang et al., "Image super-resolution as sparse representation of raw image patches," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 1–8, Anchorage, AK (2008).
9. H. Lee et al., "Efficient sparse coding algorithms," in Proc. Adv. Neural Inf. Process. Syst., pp. 801–808, Cambridge, MA (2007).
10. J. L. Starck, J. Fadili, and F. Murtagh, "The undecimated wavelet decomposition and its reconstruction," IEEE Trans. Image Process. 16(2), 297–309 (2007).
11. M. J. Shensa, "Discrete wavelet transforms: wedding the à trous and Mallat algorithms," IEEE Trans. Signal Process. 40(10), 2464–2482 (1992).
12. A. L. da Cunha, J. P. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Trans. Image Process. 15(10), 3089–3101 (2006).
© 2012 SPIE and IS&T


