1. INTRODUCTION

Image fusion has been described as a set of techniques that combine images of different spatial resolutions, or containing different types of information, with the objective of generating new images that enhance the properties of the original images [1]. The aim is to improve data interpretability, either by improving visual quality - facilitating the discrimination of certain categories - or by demonstrating the robustness of a given analysis method [2]. In the latter case, multispectral images, generally of low spatial resolution, are combined with a panchromatic image of greater spatial resolution, so the two sets of images must be properly registered to allow their integration. Another reason for image fusion is that more than 70% of Earth observation satellites and a large number of digital aerial cameras are equipped simultaneously with panchromatic and multispectral sensors [3,4], the latter with lower spatial resolution but higher spectral resolution, and the former with just the opposite characteristics, pointing to the complementarity of the two data sets. Several pan-sharpening algorithms have been proposed, and some attempts have been made to classify them, while how to effectively evaluate the quality of image fusion results remains a challenge for researchers and users of these fused products. However, two approaches have been most widely used in research [5]:
Although traditional remote sensing and GIS programs provide very good tools for the visualisation of spatial data, their analytical capacities are relatively limited and, in many cases, they are not sufficiently flexible and do not represent the state of the art [6]. R [7] is an open source data analysis program and language in which many new image processing developments are being implemented because of, among other reasons, its power, flexibility, and community of developers and users. Brunsdon and Comber (2015) argue that R is probably the best environment for spatial data analysis and manipulation [6], and remote sensing is undoubtedly included in this category. Here, we use R to program all fusion and validation algorithms. The general objective of this work is to compare the results of applying three image fusion techniques (High Pass Filter, Principal Components Analysis and Gram-Schmidt) to images from four satellite sensors (Landsat 7, Landsat 8, IKONOS and QuickBird) and an airborne sensor (Intergraph Z/I-Imaging Digital Mapping Camera), with algorithms implemented in R, and to evaluate them quantitatively through quality indices, also implemented in R. This objective is divided into four specific objectives:
2. MATERIALS AND METHODS

2.1 Analysed images

QuickBird is a commercial satellite, launched on October 18, 2001, into a sun-synchronous orbit (between 450 km and 482 km altitude). It carries two CCD cameras - one panchromatic and one multispectral (blue, green, red and near infrared) - with spatial resolutions of approximately 0.61 m and 2.50 m, respectively. The swath width of these images is between 16.8 km and 18 km, depending on the orbital height, and the radiometric resolution is 11 bits [8]. The analysed image covers an area of 4.63 km² and corresponds to the city of Azogues (Ecuador), including part of the Burgay river, which runs north-south. Several characteristic spots, such as the Central Plaza, the Cuenca-Azogues highway, the bus station and the municipal stadium, can be distinguished in the image. The size of the panchromatic image is 5677 rows by 2267 columns. IKONOS is a commercial Earth observation satellite launched on September 24, 1999. It was the first satellite to make high-resolution images available to the public, constituting a milestone in remote sensing. Orbiting at 681 km altitude, its spatial resolution is one meter (panchromatic) and four meters (multispectral), with four bands (blue, green, red and near infrared). The swath width of this image is 11 km and the radiometric resolution is 11 bits [9]. In January 2015, DigitalGlobe, the owner of the satellite, announced that, due to problems with quality standards, the satellite had been deactivated. The analysed image covers the western part of the city and several rural villages; land use basically corresponds to urban areas, crops, forests and shrubs. The size of the image is 12,217 rows by 10,599 columns, covering an area of 129.49 km². Landsat 7 and 8 are part of a constellation of eight satellites that have provided Earth surface information since 1972; the Landsat project has been the most successful space remote sensing project developed so far [2]. The images used in this work are:
The Natmur-08 project was a technical assistance contract of the Murcia Regional Administration (Region of Murcia, Spain), consisting of digital photogrammetric images taken by airborne panchromatic and multispectral (R, G, B and NIR bands) sensors and a LiDAR survey for the generation of digital terrain models. The project generated panchromatic and multispectral images with spatial resolutions of 0.45 m and 2 m, respectively. The image used has an extension of 5451 rows by 8401 columns (9.27 km²) and covers the hamlet of Archivel, belonging to the Municipality of Caravaca de la Cruz, in the Region of Murcia (Spain).

2.2 Image fusion methods

Ideally, a good image fusion method should not only increase the spatial resolution of the multispectral data but also preserve, as far as possible, its spectral integrity [10–13]. In the present study, three fusion algorithms have been implemented: High Pass Filter, Principal Component Analysis and Gram-Schmidt. The reasons for choosing them are the good results reported in previous studies and the fact that they represent the main types of image fusion techniques [4, 14–18]. Despite the increasing use of image fusion techniques in remote sensing and the growing use of R as data analysis software, there is no R package that implements such techniques. The algorithms were therefore implemented as R [7] functions; we are currently working on an R package that will include image fusion techniques, quality assessment methods, test images and a manual. High Pass Filter (HPF), which is counted among the spatial-domain image fusion techniques, inserts high-frequency components into images of low spatial resolution. The HPF methodology was introduced by Schowengerdt (1980) [19] as a data reconstruction and compression technique, and has since been extended to fuse images of different spatial and spectral resolutions [10, 20, 21]. According to Gangkofner et al.
(2008) [15], this technique has generally been implemented in a simplistic manner because the parameters used have not been optimised to achieve satisfactory spatial and radiometric results. The same authors propose an optimisation and standardisation of the method in order to guarantee its applicability to a wide range of images with different ratios between the multispectral and panchromatic spatial resolutions; different parameter values were derived from this process. This standardised method was applied in the research described here. The algorithm implemented in R can be summarised in three simple steps [15]:
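The optimised HPF steps of [15] are not reproduced in this excerpt, and the paper's actual implementation is in R with Gangkofner's standardised kernel sizes and weights. Purely as an illustration of the core HPF idea - extract the high-frequency component of the panchromatic band and add a weighted share of it to each multispectral band resampled to the pan resolution - a minimal pure-Python sketch might look as follows (the function names, the 3x3 kernel and the fixed weight are assumptions made here, not the paper's parameters):

```python
# Illustrative HPF pan-sharpening sketch (NOT the paper's R implementation).

def high_pass(pan):
    """3x3 high-pass filter: centre pixel minus the mean of its 3x3 neighbourhood.
    Border pixels are left at zero for simplicity."""
    rows, cols = len(pan), len(pan[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neigh = [pan[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = pan[i][j] - sum(neigh) / 9.0
    return out

def hpf_fuse(ms_band, pan, weight=0.5):
    """Add a weighted share of the pan high frequencies to a multispectral band
    already resampled to the pan resolution (weight is an illustrative value)."""
    hp = high_pass(pan)
    return [[ms_band[i][j] + weight * hp[i][j]
             for j in range(len(pan[0]))] for i in range(len(pan))]
```

A completely flat panchromatic band contributes no high frequencies, so the fused band equals the input multispectral band; a pan-band detail (e.g. an edge or spike) is injected into every multispectral band in proportion to the weight.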
Principal Components Analysis (PCA) is considered a component substitution technique. It involves a linear transformation of the multispectral bands, the substitution of a variable in the transformed space, and the inverse transformation to the original space [22]. The justification for this substitution is that the panchromatic image is approximately equal to the first principal component, which contains the information that is common to all the bands used as input in the PCA procedure, whereas the spectral information unique to each band is represented in the other components [10]. This substitution maximises the effect of the high-resolution panchromatic band on the fused bands resulting from the process [22]. In summary, to calculate the PCA image fusion [23]:
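The transform-substitute-invert sequence described above can be illustrated, for the simple two-band case, with the following pure-Python sketch. It is an assumption-laden illustration rather than the paper's R code: the closed-form 2x2 eigen decomposition, the mean/standard-deviation matching of the pan band before substitution, and the function name are choices made here for brevity.

```python
# Illustrative PCA-substitution pan-sharpening for two MS bands.
import math

def pca_fuse_2band(b1, b2, pan):
    """b1, b2, pan: flat lists of pixel values at the pan resolution.
    PC1 is replaced by the pan band (rescaled to PC1's statistics) and the
    PCA transform is then inverted."""
    n = len(pan)
    m1, m2 = sum(b1) / n, sum(b2) / n
    # 2x2 covariance matrix of the two centred bands.
    c11 = sum((x - m1) ** 2 for x in b1) / n
    c22 = sum((y - m2) ** 2 for y in b2) / n
    c12 = sum((x - m1) * (y - m2) for x, y in zip(b1, b2)) / n
    # Closed-form eigenvectors of a symmetric 2x2 matrix.
    theta = 0.5 * math.atan2(2 * c12, c11 - c22)
    v1 = (math.cos(theta), math.sin(theta))   # first principal axis
    v2 = (-v1[1], v1[0])                      # orthogonal second axis
    pc1 = [v1[0] * (x - m1) + v1[1] * (y - m2) for x, y in zip(b1, b2)]
    pc2 = [v2[0] * (x - m1) + v2[1] * (y - m2) for x, y in zip(b1, b2)]
    # Match the pan band to PC1's zero mean and standard deviation.
    mp = sum(pan) / n
    sp = math.sqrt(sum((x - mp) ** 2 for x in pan) / n) or 1.0
    s1 = math.sqrt(sum(x * x for x in pc1) / n)
    pan_m = [(x - mp) / sp * s1 for x in pan]
    # Inverse transform with the matched pan substituted for PC1.
    f1 = [m1 + v1[0] * p + v2[0] * q for p, q in zip(pan_m, pc2)]
    f2 = [m2 + v1[1] * p + v2[1] * q for p, q in zip(pan_m, pc2)]
    return f1, f2
```

When the pan band is perfectly correlated with the first principal component, the substitution is neutral and the fused bands reproduce the originals; in real imagery the pan band adds its high-frequency detail through PC1 while PC2 preserves the band-specific spectral information.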
Gram-Schmidt (GS), which is also considered a component substitution method [24], was invented by Laben and Brower in 1998 and patented by Eastman Kodak [12]. It is based on the Gram-Schmidt algorithm, a vector orthogonalisation process. In the case of images, each band corresponds to a high-dimensional vector (of dimension equal to the number of pixels in the image); these vectors are rotated to produce a new set of uncorrelated vectors (GS1, …, GSn) [25]. The procedure for performing the GS fusion is summarised in five steps, which can be consulted in detail in Laben and Brower (2000) [12]:
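The orthogonalisation step at the heart of the method can be sketched in a few lines of pure Python. This is an illustration of the classical Gram-Schmidt process applied to band vectors only; the full five-step Laben-Brower fusion (including the simulated low-resolution pan used as the first vector, the pan substitution and the inverse transform) is not reproduced here.

```python
# Gram-Schmidt orthogonalisation of band vectors (illustrative sketch only).

def dot(u, v):
    """Inner product of two equal-length pixel vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(bands):
    """Orthogonalise a list of band vectors (each a flat list of pixel values).
    The first vector is kept as-is; each subsequent vector has its projections
    onto all previously produced vectors removed."""
    basis = []
    for b in bands:
        w = list(b)
        for q in basis:
            coef = dot(w, q) / dot(q, q)
            w = [wi - coef * qi for wi, qi in zip(w, q)]
        basis.append(w)
    return basis
```

The resulting vectors (GS1, …, GSn) are pairwise orthogonal, i.e. uncorrelated once centred, which is what allows one component to be swapped for the panchromatic band without disturbing the others.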
2.3 Image fusion evaluation

A visual comparison is made between the original and fused images. There are different criteria within visual analysis [26]; our work takes into account spectral and spatial criteria. As spectral criteria we considered:
The spatial criteria taken into account were as follows: the fused image should keep the sharpness of object edges and the spatial contrast between different elements, without producing the veined textures - small elongated distortions - that can appear when a fusion algorithm is applied. Five mosaics (Figs. 1 to 5), one for each platform used, each containing a clip of the original image and of the HPF, PCA and GS fusions, were composed to perform the evaluation presented in Tab. 1.

[Figure caption fragment: ... from Zapotillo. Original image (a); HPF fused (b); PCA fused (c); GS fused (d).]

Table 1. Qualitative evaluation according to some criteria of visual interpretation in Figs. 1 to 5: 1=very bad; 2=bad; 3=acceptable; 4=good; 5=very good.
The images used in the mosaics correspond, in each case, to the best colour composition to discriminate land cover; such compositions are indicated in Tab. 1. The quantitative evaluation was carried out using three indices: the universal image quality index (Q), the spectral ERGAS index and the spatial ERGAS index. The Q index, although initially proposed for other applications such as image compression, can be useful in image fusion to verify that there has been no significant alteration of the original radiometric values. It allows a quantitative evaluation of the quality of the fused images with respect to the original multispectral images, while quantifying the distortions produced. It is expressed as a combination of three factors [27]: loss of correlation, luminance (mean) distortion and contrast distortion [28]. The range of Q is [-1, 1]; the best value, 1, is obtained when the two images are identical. The ERGAS index, proposed by Wald (2000) [29], was used to compare the spectral quality of the fused images. It seeks to satisfy three main requirements:
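As an illustration, the global (single-window) form of the Q index, combining the three factors just listed, can be computed as follows. This is a sketch, not the paper's R implementation; in practice Q is usually evaluated over sliding windows and the window values averaged.

```python
# Universal image quality index Q of Wang and Bovik, global form,
# for two images given as flat lists of pixel values (illustrative sketch).
import math

def q_index(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n          # variance of x
    vy = sum((b - my) ** 2 for b in y) / n          # variance of y
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # covariance
    # Q = 4*cov*mean_x*mean_y / ((var_x + var_y) * (mean_x^2 + mean_y^2)),
    # i.e. correlation * luminance * contrast factors multiplied out.
    return 4 * cxy * mx * my / ((vx + vy) * (mx * mx + my * my))
```

Identical images give Q = 1; any mean shift, contrast change or decorrelation between the fused and original band pulls the value below 1.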
To calculate this index, the original multispectral bands are rescaled to the spatial resolution of the fused bands. The ERGAS value shows a strong tendency to decrease as the quality of the fused product increases; values below 3 indicate good fusion quality [29,30], which improves as the value approaches zero. Since the ERGAS index only considers the spectral characteristics of the image, Lillo-Saavedra et al. (2005) [31] proposed a new index, the spatial ERGAS index, which introduces a spatial RMSE.

3. RESULTS

With respect to the qualitative evaluation, all the fused images are clearly more helpful for visual interpretation than the originals (Figs. 1 to 5). The results presented in Tab. 1 point to better evaluations for the images with a higher spatial-resolution ratio between the multispectral and the panchromatic image (QuickBird and IKONOS, with a ratio of 4, and Natmur-08, with a ratio of 4.4). However, as will be seen later, these images obtain lower quantitative values than the Landsat 7 and Landsat 8 images, both with a ratio of 2. The high visual assessment of the images with ratios between 4 and 4.4 could be partly explained by the perceived improvement in the fused images growing with the spatial ratio. This, however, raises the question of how reliable the fusion between multispectral and panchromatic images with a larger spatial ratio is (the present research has used images with ratios no higher than 4.4). The Landsat 7 and Landsat 8 images show greater colour distortion with all three fusion methods. In the case of the Landsat 7 image, distortions are smaller when the HPF fusion is used (Fig. 3); for the Landsat 8 image, the GS fusion produces the lowest distortion (Fig. 4). For the QuickBird, IKONOS and Natmur-08 images, the qualitative evaluation reveals better results, especially with HPF and GS, in both spectral and spatial terms.
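The spectral ERGAS used in the quantitative evaluation below reduces to a band-wise relative RMSE. A minimal sketch (illustrative Python, not the paper's R code) of the standard formulation ERGAS = 100 (h/l) sqrt((1/N) sum_k (RMSE_k / mu_k)^2), where h/l is the pan-to-multispectral pixel-size ratio (e.g. 0.25 for a 1:4 fusion) and mu_k the mean of reference band k:

```python
# Spectral ERGAS sketch for bands given as flat pixel lists (illustrative).
import math

def ergas(fused_bands, ref_bands, ratio):
    """fused_bands, ref_bands: lists of bands (flat pixel lists) at the same
    resolution; ratio: pan pixel size divided by MS pixel size (h/l)."""
    terms = []
    for f, r in zip(fused_bands, ref_bands):
        n = len(r)
        rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(f, r)) / n)
        mu = sum(r) / n
        terms.append((rmse / mu) ** 2)
    return 100.0 * ratio * math.sqrt(sum(terms) / len(terms))
```

A perfect fusion yields ERGAS = 0; for example, a uniform error of 1 digital number on a band with mean 10, at a 1:4 resolution ratio, gives 100 x 0.25 x 0.1 = 2.5, just inside the "good quality" threshold of 3 mentioned above. The spatial ERGAS of [31] follows the same expression with the RMSE computed against the panchromatic image instead.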
The quantitative evaluation gives good results for the three fusion methods (Tab. 2), with a high degree of correlation between the Q index and the spectral ERGAS index.

Table 2. Quantitative evaluation of the fused images.
The Q index and the spectral ERGAS give a better spectral rating to the HPF fusion for the QuickBird, IKONOS and Landsat 7 images, followed by the GS fusion. In the case of the Landsat 8 and Natmur-08 images, however, the results are much more even, indicating that the spectral evaluation cannot be decisive for these two images. Regarding the spatial ERGAS index, the PCA fusion was the best option for the QuickBird, IKONOS, Landsat 7 and Natmur-08 images, followed by GS; only for Landsat 8 did the GS method produce the best results according to this index. With respect to the comparatively low ratings obtained for the QuickBird and IKONOS images with the spatial ERGAS index under all three fusion methods, it is interesting to note that these two images have the highest spatial-resolution ratios in the present research. They also present a large proportion of urban cover, which suggests the importance of a good understanding of the principles involved in the different image fusion methods, as well as of the characteristics of the data to be integrated in the fusion processes [32]. It is also interesting that, whereas in the evaluation of the spectral components the HPF fusion tended to present the best results and the PCA fusion the worst, the opposite happens when analysing the spatial components. An average of the spectral and spatial ERGAS indices is also included in Tab. 2, with the GS fusion showing the best results.
4. CONCLUSIONS

The objective of this study was (i) to implement three image fusion techniques (High Pass Filter, Principal Component Analysis and Gram-Schmidt) as open source software (R); (ii) to apply these techniques to fuse multispectral and panchromatic images from four satellite platforms (QuickBird, IKONOS, Landsat 7 and Landsat 8) and an airborne platform (project Natmur-08); and (iii) to evaluate the results qualitatively, by means of a visual comparison, and quantitatively, with three quality indices also implemented in R (the universal image quality index, the spectral ERGAS index and the spatial ERGAS index). R, an open source data analysis environment, allows large volumes of geospatial data to be managed and both simple tasks and complex processes to be run, while maintaining reliability and enabling the implementation of new algorithms such as those proposed in this research. A visual comparison of the fused images with the original images shows the usefulness of applying image fusion methodologies such as those implemented in this work. The qualitative evaluation of the results does not always agree with the quantitative evaluation; therefore, each approach can provide important analytical information, and they should be considered complementary when assessing the quality of image fusion. The Q index attributes higher quality to the HPF fusion for the QuickBird, IKONOS and Landsat 7 images; for the Landsat 8 and Natmur-08 images, however, the PCA and GS techniques offer the best results. According to ERGAS (averaging the spectral and spatial values), the GS fusion offers good results for the IKONOS and Landsat 7 images; for the Landsat 8 and Natmur-08 images, PCA and GS present acceptable results; finally, for the QuickBird image, the HPF method produced the best fused results.
A good understanding of the principles involved in the different image fusion methods - their advantages and limitations - as well as a good knowledge of the characteristics of the data to be fused will help users to take the necessary precautions when selecting the best methodology to obtain reliable results.

ACKNOWLEDGMENTS

This work is partially the result of a postdoctoral contract under the Saavedra Fajardo Programme (20023/SF/16) funded by the CARM (Region of Murcia Authority) through the Séneca Foundation - Agency for Science and Technology. The QuickBird image used in this work was provided by the University of the Armed Forces (Ecuador). The IKONOS image of the city of Cuenca was provided by the University of Azuay (Ecuador).

REFERENCES

[1] Liu, J. and Mason, P.,
Essential Image Processing and GIS for Remote Sensing, Wiley-Blackwell (2009).
[2] Chuvieco, E., Fundamentals of Satellite Remote Sensing: An Environmental Approach, 2nd ed., CRC Press (2016).
[3] Zhang, Y., "Understanding image fusion," Photogrammetric Engineering & Remote Sensing 70(6), 657–661 (2004).
[4] Zhang, Y. and Mishra, R. K., "A review and comparison of commercially available pan-sharpening techniques for high resolution satellite image fusion," in Geoscience and Remote Sensing Symposium (IGARSS), 182–185 (2012).
[5] Zhang, Y., "Methods for Image Fusion Quality Assessment - A Review, Comparison and Analysis," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVII, 1101–1109 (2008).
[6] Brunsdon, C. and Comber, L., An Introduction to R for Spatial Analysis and Mapping, SAGE (2015).
[7] R Development Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria (2009).
[8] DigitalGlobe, "QuickBird. Data Sheet," tech. rep. (2013).
[9] DigitalGlobe, "IKONOS. Data Sheet," tech. rep. (2013).
[10] Chavez, P., Sides, S. and Anderson, J., "Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic," Photogrammetric Engineering & Remote Sensing 57, 295–303 (1991).
[11] Wald, L., Ranchin, T. and Mangolini, M., "Fusion of Satellite Images of Different Spatial Resolutions: Assessing the Quality of Resulting Images," Photogrammetric Engineering & Remote Sensing 63(6), 691–699 (1997).
[12] Laben, C. and Brower, B., "Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening," U.S. Patent 6,011,875 (2000).
[13] Ranchin, T. and Wald, L., "Fusion of High Spatial and Spectral Resolution Images: the ARSIS Concept and its Implementation," Photogrammetric Engineering & Remote Sensing 66(1), 49–61 (2000).
[14] Karathanassi, V., Kolokousis, P. and Ioannidou, S., "A comparison study on fusion methods using evaluation indicators," International Journal of Remote Sensing 28(10), 2309–2341 (2007). https://doi.org/10.1080/01431160600606890
[15] Gangkofner, U. G., Pradhan, P. S. and Holcomb, D. W., "Optimizing the High-Pass Filter Addition Technique for Image Fusion," Photogrammetric Engineering & Remote Sensing 74(9), 1107–1118 (2008). https://doi.org/10.14358/PERS.74.9.1107
[16] Yuhendra, H., Kuze, H. and Sri Sumantyo, J., "Performance Analyzing of High Resolution Pan-Sharpening Techniques: Increasing Image Quality for Classification Using Supervised Kernel Support Vector Machine," Selected Topics in Power Systems and Remote Sensing, 260–268 (2010).
[17] Sarp, G., "Spectral and spatial quality analysis of pan-sharpening algorithms: A case study in Istanbul," European Journal of Remote Sensing 47, 19–28 (2014). https://doi.org/10.5721/EuJRS20144702
[18] Cánovas-García, F. and Alonso-Sarría, F., "Comparación de técnicas de fusión en imágenes de alta resolución espacial," GeoFocus 14, 144–162 (2014).
[19] Schowengerdt, R., "Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content," Photogrammetric Engineering & Remote Sensing 46(10), 1325–1334 (1980).
[20] Chavez, P., Guptill, S. and Bowell, J., "Image Processing Techniques for Thematic Mapper Data," in 50th Annual ASP-ACSM Symposium, American Society of Photogrammetry (1984).
[21] Cliche, C., Bonn, F. and Teillet, P., "Integration of the SPOT Panchromatic Channel into Its Multispectral Mode for Image Sharpness Enhancement," Photogrammetric Engineering & Remote Sensing 51(3), 311–316 (1985).
[22] Shettigara, V., "A Generalized Component Substitution Technique for Spatial Enhancement of Multispectral Images Using a Higher Resolution Data Set," Photogrammetric Engineering & Remote Sensing 58(5), 561–567 (1992).
[23] Darvishi Boloorani, A., Remotely Sensed Data Fusion as a Basis for Environmental Studies: Concepts, Techniques and Applications, Universität zu Göttingen (2008).
[24] Aiazzi, B., Baronti, S., Selva, M. and Alparone, L., "Enhanced Gram-Schmidt Spectral Sharpening Based on Multivariate Regression of MS and Pan Data," 3806–3809 (2006).
[25] Maurer, T., "How to Pan-Sharpen Images Using the Gram-Schmidt Pan-Sharpen Method - a Recipe," ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W1, 239–244 (2013). https://doi.org/10.5194/isprsarchives-XL-1-W1-239-2013
[26] European Commission, "Corine Land Cover: Technical Guide," (1997).
[27] Wang, Z. and Bovik, A., "A Universal Image Quality Index," IEEE Signal Processing Letters 9(3), 81–84 (2002). https://doi.org/10.1109/97.995823
[28] Nussbaum, S. and Menz, G., Object-Based Image Analysis and Treaty Verification, Springer (2008).
[29] Wald, L., "Quality of high resolution synthesised images: Is there a simple criterion?," in Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images, 99–103, SEE/URISCA (2000).
[30] Ozdarici Ok, A. and Akyurek, Z., "Evaluation of Image Fusion Methods on Agricultural Lands," Journal of Earth Science and Engineering 1, 107–113 (2011).
[31] Lillo-Saavedra, M., Gonzalo, C., Arquero, A. and Martinez, E., "Fusion of multispectral and panchromatic satellite sensor imagery based on tailored filtering in the Fourier domain," International Journal of Remote Sensing 26(6), 1263–1268 (2005). https://doi.org/10.1080/01431160412331330239
[32] Švab, A. and Oštir, K., "High-resolution Image Fusion: Methods to Preserve Spectral and Spatial Resolution," Photogrammetric Engineering & Remote Sensing 72(5), 565–572 (2006). https://doi.org/10.14358/PERS.72.5.565