1. Introduction

Divergences in the reflectance of remote sensing images of an area can indicate land cover change. Thus, existing change detection methods often determine whether change has occurred according to the radiometric differences between the images.1,2 However, even physically unchanged ground objects can show different spectral values in two images acquired at different times.3 The reason is that acquisition conditions differ, such as the status or posture of the sensor, solar illuminance, observation angles, and atmospheric scattering and absorption.4,5 This problem presents challenges in multitemporal image processing and analysis.6 Therefore, in practical remote sensing applications, radiometric normalization is conducted to eliminate the radiometric discrepancies between images caused by acquisition conditions rather than by actual changes in ground objects.7,8 Radiometric normalization can be divided into two types: absolute radiometric correction and relative radiometric normalization (RRN).9–12 Absolute radiometric correction, based on a single image, reveals the actual surface response by removing the influence of the atmosphere.5,13 However, to accurately estimate atmospheric effects, it is necessary to obtain atmospheric properties at the time of data collection, such as air temperature, relative humidity, atmospheric pressure, visibility, altitude, and elevation, which tend to require field measurement or other ancillary data.13–16 By contrast, relative radiometric normalization aims to minimize the radiometric differences caused by inconsistent acquisition conditions between images and is used for multitemporal images.
In relative radiometric normalization, one image is chosen as the reference image and all other images are normalized so that they become radiometrically similar to it.12,17 The difficulty of collecting the necessary ancillary data, along with the lack of historical data, has further reduced the opportunity to perform absolute radiometric correction on multitemporal images. Fortunately, since RRN does not require any ancillary data and is easier to achieve than absolute correction, it has been widely used to normalize images obtained at different times.18,19 In some applications, such as change detection and classification, some RRN methods have been demonstrated to perform better than absolute radiometric corrections.13,18 Commonly used RRN methods consist of two types: nonlinear normalization and linear normalization.20 Nonlinear methods include histogram matching (HM).8 This approach can cause gray scale loss and a disordered overall radiation distribution, since it achieves correction by matching the histogram of the target image with that of the reference image.20 Linear methods include minimum–maximum (MM),9 mean-standard deviation (MS),9 haze correction (HC),21 image regression (IR),22,23 pseudoinvariant feature (PIF),11,24–26 dark set-bright set (DB),17 and no-change set (NC)27 techniques. Most of these methods (HM, MM, MS, HC, and IR) use all of the pixels in the estimation of normalization coefficients.28 Such methods often do not perform as well as methods using invariant pixels8,28,29 and can lead to low change detection accuracy, since the radiometric differences caused by physical ground change are also normalized away.
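Among the linear techniques, the mean-standard deviation (MS) method illustrates the general gain/offset form of global normalization. A minimal sketch (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def ms_normalize(target, reference):
    """Mean-standard deviation (MS) normalization: linearly rescale the target
    band so its mean and standard deviation match those of the reference band."""
    gain = reference.std() / target.std()
    offset = reference.mean() - gain * target.mean()
    return gain * target + offset
```

Because every pixel, changed or not, enters the `mean()` and `std()` estimates, genuine land cover change biases the fitted gain and offset, which is the weakness the invariant-pixel methods discussed below try to avoid.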
Those methods using unchanged pixels, however, are time and labor consuming due to the selection of invariant pixels; furthermore, the quality of the chosen samples directly affects the relative radiometric correction results.30,31 To control the quality of the selected invariant pixels and reduce the time and labor cost, some researchers have proposed improved methods.32–34 These invariant pixel selection methods include slow feature analysis,31 Kauth–Thomas transformation,17 scattergram-controlled regression,27 temporally invariant cluster,18 principal component analysis (PCA),4 multivariate alteration detection (MAD),35 and iteratively reweighted multivariate alteration detection (IR-MAD).36 These methods can increase the quality and number of invariant pixels, as well as reduce human intervention and subjectivity. Canty et al.35 applied MAD to automatically define the invariant pixels within multispectral images of the same area collected at two different times. Their results showed that automatically obtained invariant features generated better results than those produced by manual selection. To improve the sensitivity of MAD, Nielsen et al.36 proposed IR-MAD. Not only does this method automatically select invariant features, but it also determines an adaptive threshold through an iterative process. As a consequence, IR-MAD is an effective method for selecting unchanged pixels. Mateos et al.37 radiometrically normalized multitemporal remote sensing images using the IR-MAD algorithm. Canty and Nielsen38 applied IR-MAD to the normalization of LANDSAT and ASTER multitemporal images. For simplicity of modeling, these RRN methods assume that the invariant pixels' values in the target image are linearly related to those of the reference image. In fact, however, the relation typically does not follow a strictly linear model, so the linear assumption negatively affects the normalization results. Furthermore, these methods can result in the loss of high frequency details.
Frequency domain transforms, such as the Fourier,39 wavelet,40 and contourlet41 transforms, have been applied in relative radiometric normalization to overcome the limitations of conventional methods. Biday and Bhosle42 used Fourier and wavelet transforms to separate the high and low frequency components of images; the effectiveness was validated in comparative experiments with two other relative radiometric normalization methods. Sun et al.43 proposed an RRN method based on the wavelet transform and a low pass filter (WLPF) that effectively improved change detection accuracy. Li et al.20 presented a method for relative radiometric consistency processing based on object-oriented smoothing and contourlet transforms, concluding that the proposed method can improve the visual effects of normalized images and thus increase the accuracy of change detection. To overcome the limitations of existing methods, we propose a relative radiometric normalization method based on the wavelet transform and IR-MAD (WIRMAD). The wavelet transform is used to divide images into spatially high frequency and low frequency components. We automatically extract invariant features from the low frequency components, and then a linear regression equation is used to normalize the low frequency component of the target image to the reference image. An inverse wavelet transform is applied to reconstruct the final normalized image. We compared the proposed method visually and empirically to the traditional HM, MS, IR-MAD, and WLPF methods in experiments with three pairs of images in China at different spatial resolutions. Furthermore, change detection was conducted on these images to evaluate the RRN quality of the proposed method. The remainder of this paper is organized as follows: Sec. 2 describes the datasets, and Sec. 3 details the proposed relative radiometric normalization method based on the wavelet transform and IR-MAD, followed by the experimental results and analysis in Sec. 4.
Section 5 discusses the application and limitations of the proposed method, and the conclusion is given in Sec. 6.

2. Datasets

Four pairs of bitemporal images at different spatial resolutions were employed in the experiments: one from Nanjing for normalization and three, from Shenzhen, Wuhan, and Guangxi in China, respectively, for change detection. The datasets used in this paper are summarized in Table 1.

Table 1. Description of the datasets used in this paper.
For each image pair, we geometrically registered the target image and reference image, with error controlled to within one pixel, before normalization and change detection. To evaluate the change detection accuracy, true change maps were produced by visual interpretation. Auxiliary images from Google Earth were used to ensure the accuracy of the true change maps.

3. Methodology

In this paper, we present an RRN method combining the wavelet transform and the IR-MAD algorithm. Figure 1 illustrates the relative radiometric normalization approach used in this study in a general way; we discuss the major steps in detail in Secs. 3.1–3.3.

3.1. Wavelet Transform

The low frequency component of a remote sensing image corresponds to the overall background radiation, while the high frequency component carries the foreground targets and texture information. Therefore, global radiometric correction can be achieved by eliminating the difference between the low frequency components of the target and reference images, and keeping the high frequency components unprocessed preserves the high frequency information. The wavelet transform is one of the most commonly used algorithms in frequency domain processing, developed in signal processing theory to help extract information from many different kinds of data.44 In this paper, the wavelet transform is used to separate the spatially low and high frequencies. There are different types of wavelet basis functions, whose qualities vary according to several criteria. In this study, the Haar wavelet, the simplest wavelet and one of the first studied, is used to separate the low and high frequencies. Following the analysis in Li's study,20 we decompose the target image and reference image using a four-level wavelet transform.

3.2. IR-MAD Algorithm

After the wavelet transforms, we used IR-MAD to select invariant pixels from the low frequency components of the target and reference images.
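The four-level Haar separation described in Sec. 3.1 can be sketched with plain numpy. This is a minimal illustration, not the authors' implementation; it assumes image dimensions divisible by 2^levels, and in practice a wavelet library (e.g., PyWavelets) would be used:

```python
import numpy as np

def haar2d_level(img):
    """One level of the orthonormal 2-D Haar transform on 2x2 blocks."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0   # low frequency (approximation)
    LH = (a - b + c - d) / 2.0   # horizontal detail
    HL = (a + b - c - d) / 2.0   # vertical detail
    HH = (a - b - c + d) / 2.0   # diagonal detail
    return LL, (LH, HL, HH)

def haar_decompose(img, levels=4):
    """Repeatedly split the approximation; returns final low-frequency band
    and the per-level high-frequency detail coefficients."""
    details = []
    approx = img.astype(float)
    for _ in range(levels):
        approx, det = haar2d_level(approx)
        details.append(det)
    return approx, details

def haar2d_inverse(LL, det):
    """Exact inverse of haar2d_level."""
    LH, HL, HH = det
    out = np.empty((LL.shape[0] * 2, LL.shape[1] * 2))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2.0
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2.0
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return out

def haar_reconstruct(approx, details):
    """Rebuild the image from a (possibly modified) low-frequency band
    and the untouched high-frequency details."""
    for det in reversed(details):
        approx = haar2d_inverse(approx, det)
    return approx
```

In the WIRMAD pipeline, only the `approx` band of the target image is modified before reconstruction, which is how the high frequency details are preserved.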
IR-MAD is an effective method to select pixels with a high no-change probability between images.3 For two $N$-band images acquired at times $t_1$ and $t_2$, represent the pixel observations by two random vectors $X$ and $Y$. Two linear combinations are constructed over all spectral bands:

$$U = a^T X = a_1 X_1 + \cdots + a_N X_N, \qquad V = b^T Y = b_1 Y_1 + \cdots + b_N Y_N,$$

where $U$ and $V$ are called the canonical variates, $N$ is the number of bands, and $a$ and $b$ are constant vectors that maximize the variance of $U - V$ (subject to unit variances of $U$ and $V$). In this way, the difference images, referred to as the MAD variates, will show maximum change information:

$$M_i = U_i - V_i, \quad i = 1, \ldots, N.$$

Assuming that no ground reflectance changes have occurred between the two images of a scene, the sum of the squares of the standardized MAD variates approximately follows a chi-square distribution with $N$ degrees of freedom:

$$Z = \sum_{i=1}^{N} \left( \frac{M_i}{\sigma_{M_i}} \right)^2 \sim \chi^2(N),$$

where $Z$ represents the sum of the squares of the standardized MAD variates and $\sigma_{M_i}^2$ is the variance of $M_i$. The MAD variates associated with change observations, however, will deviate more or less strongly from such a multivariate normal distribution.38 Therefore, to improve the sensitivity of the MAD transformation, Nielsen et al.36 weight the observations by the probability of no change through an iteration scheme:

$$w_j = \Pr\{\chi^2(N) > z_j\},$$

where $w_j$ is the weight of pixel $j$, representing its probability of no change, and $z_j$ is the realization of $Z$ at that pixel. Iterations are continued until the largest absolute change in the canonical correlations is smaller than a preset small value.36 For radiometric normalization purposes, we can select all pixels whose no-change probability $w_j$ exceeds a decision threshold, typically 95%.36 The invariant pixels are then used to estimate the normalization coefficients through a regression fit, after which the target image is normalized to the reference image.

3.3. Relative Radiometric Normalization

In this paper, we apply the IR-MAD algorithm to normalize the low frequency component of the target image after separating the high frequency and low frequency components using the wavelet transform. For a specific band, assume that $T$ is the image to be normalized and $R$ is the reference image.
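The IR-MAD iteration of Sec. 3.2 can be sketched compactly in numpy. This is an illustrative sketch, not the authors' implementation: for simplicity it assumes an even number of bands, so the chi-square tail probability has a closed form, and it uses a plain generalized-eigenproblem formulation of canonical correlation analysis:

```python
import numpy as np

def chi2_sf_even(z, k):
    """Survival function of the chi-square distribution for EVEN k degrees of
    freedom: Q(z) = exp(-z/2) * sum_{j<k/2} (z/2)^j / j! (exact for even k)."""
    half = np.asarray(z, dtype=float) / 2.0
    term = np.ones_like(half)
    total = np.ones_like(half)
    for j in range(1, k // 2):
        term = term * half / j
        total += term
    return np.exp(-half) * total

def weighted_moments(X, Y, w):
    """Weighted band means and (co)variance matrices for X, Y (bands x pixels)."""
    w = w / w.sum()
    Xc = X - (X @ w)[:, None]
    Yc = Y - (Y @ w)[:, None]
    return Xc, Yc, (Xc * w) @ Xc.T, (Yc * w) @ Yc.T, (Xc * w) @ Yc.T

def irmad(X, Y, max_iter=30, tol=1e-6):
    """Iteratively reweighted MAD. X, Y: (n_bands, n_pixels) with n_bands even.
    Returns per-pixel no-change probabilities and the canonical correlations."""
    n_bands, n_pixels = X.shape
    w = np.ones(n_pixels)
    rho_old = np.zeros(n_bands)
    for _ in range(max_iter):
        Xc, Yc, Sxx, Syy, Sxy = weighted_moments(X, Y, w)
        # CCA: solve Sxx^-1 Sxy Syy^-1 Syx a = rho^2 a
        M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
        eigvals, A = np.linalg.eig(M)
        order = np.argsort(eigvals.real)
        rho = np.sqrt(np.clip(eigvals.real[order], 0.0, 1.0))
        A = A.real[:, order]
        B = np.linalg.solve(Syy, Sxy.T) @ A
        # Unit variances, and align signs so that corr(U_i, V_i) >= 0
        A /= np.sqrt(np.sum(A * (Sxx @ A), axis=0))
        B /= np.sqrt(np.sum(B * (Syy @ B), axis=0))
        B *= np.sign(np.sum(A * (Sxy @ B), axis=0))
        mad = A.T @ Xc - B.T @ Yc            # MAD variates U - V
        var_mad = 2.0 * (1.0 - rho)          # var(M_i) = 2(1 - rho_i)
        z = np.sum(mad**2 / var_mad[:, None], axis=0)
        w = chi2_sf_even(z, n_bands)         # no-change probability per pixel
        if np.max(np.abs(rho - rho_old)) < tol:
            break
        rho_old = rho
    return w, rho
```

Pixels whose returned weight exceeds the chosen decision threshold (e.g., 0.95) would serve as the invariant set for the regression fit.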
The specific steps of relative radiometric normalization based on the wavelet transform and IR-MAD are as follows: (1) apply a four-level Haar wavelet transform to both T and R to separate their low and high frequency components; (2) run the IR-MAD algorithm on the low frequency components to select invariant pixels; (3) fit a linear regression (gain and offset) to the invariant pixels and use it to normalize the low frequency component of T to that of R; and (4) apply the inverse wavelet transform to the normalized low frequency component of T, together with its original high frequency components, to reconstruct the normalized image.
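The regression step that maps the target's low frequency component onto the reference can be sketched as a simple least-squares fit over the invariant pixels (the function name and mask are illustrative):

```python
import numpy as np

def normalize_low_frequency(low_t, low_r, invariant_mask):
    """Fit a per-band gain/offset on invariant pixels (e.g., those selected by
    IR-MAD) and linearly map the target's low-frequency band to the reference."""
    x = low_t[invariant_mask]
    y = low_r[invariant_mask]
    gain, offset = np.polyfit(x, y, deg=1)   # least-squares line y = gain*x + offset
    return gain * low_t + offset
```

Note that the fit uses only the invariant pixels, but the resulting gain and offset are applied to the whole low frequency band, so genuinely changed areas keep their radiometric difference.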
4. Experiments and Analysis

4.1. Relative Radiometric Normalization

To verify the proposed WIRMAD method for relative radiometric normalization, it was compared with the HM, MS, IR-MAD, and WLPF methods. The results are shown in Fig. 2, along with the radiation differences between the RRN results, the target image, and the reference image. Visual inspection of Figs. 2–4 shows that the HM, MS, IR-MAD, WLPF, and WIRMAD methods significantly reduce the radiation difference between the target and reference images. The overall brightness and color of the RRN results are similar to the reference image, and the radiometric consistency is significantly improved compared with the target image in Figs. 2(a)–2(g). The differences between the RRN results and the reference image demonstrate that the results of WLPF and WIRMAD are more consistent with the reference image than those of the other methods, as shown in Figs. 2(h)–2(m). The result of the WIRMAD method, however, was more consistent with the reference image, a distinction that is not evident through visual inspection alone. To quantitatively evaluate and compare our WIRMAD method with the HM, MS, IR-MAD, and WLPF methods, the mean, standard deviation, and correlation between the resulting images, the target image, and the reference image were calculated, as shown in Table 2. From the mean values in Table 2, we can see that the traditional HM (103.0872), MS (102.5441), and IR-MAD (103.1381) methods yielded results closer to the reference image compared with WLPF (95.8877) and WIRMAD (93.6330).
The standard deviation of the results derived by the WLPF (14.5668) and WIRMAD (14.4600) methods, however, is closer to that of the target image than those of the HM (19.0636), MS (19.0603), and IR-MAD (18.3545) methods, which indicates that methods based on the wavelet transform retain more of the texture information of the original image.

Table 2. Comparison of statistical parameters of the RRN results in Fig. 2.
In terms of correlation, compared with the HM, MS, and IR-MAD methods, the results obtained by the WLPF and WIRMAD methods show lower correlation with the target image (0.8218 and 0.8070, respectively) and higher correlation with the reference image (0.5201 and 0.5849), especially the proposed WIRMAD method (0.5849), which makes WIRMAD particularly advantageous in change detection applications.

4.2. Change Detection

Change detection experiments based on three pairs of bitemporal images at high and mid-high resolutions were carried out to further assess the proposed method. To avoid conclusions contingent on specific data or change detection methods, three different change detection methods, both object-oriented and pixel-based, were used: change vector analysis (CVA), PCA, and the iterated conditional model based on Markov random fields (ICM-MRF).45,46 The RRN results for the image pair over Shenzhen are shown in Fig. 5, together with the pixel-based change detection results using CVA. As seen in Fig. 5, using the same pixel-level CVA change detection method, more accurate change detection results are obtained with radiometric correction than with the raw data without normalization. The change detection results from WLPF and WIRMAD normalization were significantly improved over the conventional HM, MS, and IR-MAD methods. The change detection results were analyzed, and omission, false alarm, overall accuracy, and kappa parameters were calculated to evaluate the accuracy of change detection based on the RRN results derived by our WIRMAD method and the HM, MS, IR-MAD, and WLPF methods; the results are shown in Table 3.

Table 3. Evaluation of change detection accuracy.
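The four accuracy measures reported in Tables 3–5 follow from a binary confusion matrix between the detected change map and the true change map. A sketch of one plausible set of definitions (omission as missed changes over true changes, false alarm as false detections over true no-change pixels; the paper does not spell out its exact formulas, so these are assumptions):

```python
import numpy as np

def change_detection_accuracy(predicted, truth):
    """Omission rate, false alarm rate, overall accuracy, and kappa coefficient
    for binary change maps (True/1 = change, False/0 = no change)."""
    predicted = np.asarray(predicted).ravel().astype(bool)
    truth = np.asarray(truth).ravel().astype(bool)
    tp = np.sum(predicted & truth)     # detected true changes
    fp = np.sum(predicted & ~truth)    # false alarms
    fn = np.sum(~predicted & truth)    # missed changes (omissions)
    tn = np.sum(~predicted & ~truth)   # correctly detected no-change
    n = tp + fp + fn + tn
    omission = fn / (tp + fn)
    false_alarm = fp / (fp + tn)
    overall = (tp + tn) / n
    # kappa: agreement beyond chance, using the marginal class frequencies
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (overall - expected) / (1 - expected)
    return omission, false_alarm, overall, kappa
```

A perfect detector gives omission = false alarm = 0, overall accuracy = 1, and kappa = 1; kappa drops to 0 when agreement with the truth map is no better than chance.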
From Table 3, we can see that the change detection results were similar regardless of the detection method. Change detection accuracy, however, was significantly increased with RRN compared to the original images without normalization: the omission and false alarm rates decline, while the overall accuracy and kappa coefficients rise. It can be concluded that RRN is crucial for change detection. After RRN using WLPF and our WIRMAD method, the change detection results are similar and effectively improved over those derived by the conventional HM, MS, and IR-MAD methods. These results suggest that separating the low and high frequencies using the wavelet transform can improve change detection accuracy. Among the five RRN methods, our WIRMAD method yields change detection results with the lowest omission and false alarm rates and the highest overall accuracy and kappa coefficient, indicating that the proposed WIRMAD method avoids suppressing information about change and thus increases the accuracy of change detection. Pixel-based change detection experiments were also conducted to assess the RRN results for the images over Wuhan, China. The RRN results using the proposed WIRMAD method and the conventional methods are shown in Fig. 6, as well as the change detection results using PCA derived from these RRN results. As seen in Fig. 6, change detection on GF-1 WFV images with RRN produces more accurate results than those obtained on the original image without normalization, especially regarding false negatives [Figs. 6(i)–6(n)]. The change detection results derived from RRN using the proposed WIRMAD and WLPF methods are closer to the true change map than the results obtained after RRN using the conventional HM, MS, and IR-MAD methods. Table 4 shows the omission, false alarm, overall accuracy, and kappa results for the two GF-1 WFV images.

Table 4. Evaluation of change detection accuracy.
As shown in Table 4, the change detection results after RRN have a much lower false alarm rate and higher overall accuracy than the raw images without radiometric correction. Although the omission rate is slightly higher, RRN before change detection significantly improves the results. Change detection with RRN using the proposed WIRMAD method and the WLPF method produced more accurate results than using the HM, MS, and IR-MAD methods to normalize the images. The WIRMAD method, in particular, had the highest overall accuracy (0.9395, 0.9156) and kappa coefficient (0.4244, 0.3718) when detecting change using PCA and ICM-MRF. Object-oriented change detection methods, including CVA, PCA, and ICM-MRF, were applied to assess the RRN results for the images over Guangxi, China. Figure 7 displays the RRN results and the ICM-MRF change detection results. The change detection results derived from RRN using the proposed WIRMAD method and the WLPF method are closer to the true change map than the results obtained after RRN using the conventional HM, MS, and IR-MAD methods, as inferred from Fig. 7, since the omission rate is significantly lower. The change detection results for the GF-2 PMS images were analyzed and are displayed in Table 5. We calculated omission, false alarm, overall accuracy, and kappa parameters to evaluate the accuracy of change detection after RRN by our WIRMAD method and the HM, MS, IR-MAD, and WLPF methods.

Table 5. Evaluation of change detection accuracy.
As shown in Table 5, the omission, false alarm, overall accuracy, and kappa values for the change detection results using the three different methods (CVA, PCA, and ICM-MRF) show the same trend. After RRN, the omission and false alarm rates decreased and the overall accuracy and kappa coefficient increased, with the omission and kappa values improving significantly. This indicates that the radiation difference between multitemporal images must be reduced through RRN before change detection. Our WIRMAD method produces the change detection results with the highest overall accuracy and kappa and the lowest omission rate, suggesting that the separation of the low and high frequencies of the images contributes to increased change detection accuracy.

5. Discussion

In this paper, we propose an RRN method based on the wavelet transform and the IR-MAD algorithm. The wavelet transform separates the high and low frequency components, while IR-MAD radiometrically normalizes the low frequency components of the target image. IR-MAD is a linear relative radiometric normalization method; however, even if the multitemporal images are very similar, a perfectly linear relationship between the images is impossible.28 Extracting the low frequency components of the images by wavelet transform eliminates the effects of nonlinear factors, such as texture and small changes in ground objects,42 exposing a higher linear correlation and allowing IR-MAD to perform at its best. The IR-MAD algorithm extracts invariant pixels from the low frequency components, linearly correcting the low frequencies of the target image to those of the reference image. This preserves the radiometric differences of changed objects in the low frequencies, thereby improving the change detection results.8 The RRN results obtained by the proposed WIRMAD method were quite similar to those derived by the WLPF method and more consistent with the reference image than those of the conventional RRN methods.
The change detection results derived from RRN using the proposed WIRMAD method were the closest to the true change, with the highest overall accuracy and kappa coefficient and the lowest omission and false alarm rates among the tested methods. Furthermore, it can be applied to pixel-level or object-level change detection regardless of the detection method. Beyond change detection, it can also be applied to image dodging when mosaicking overlapping images,29 gap filling and bad-line removal based on a reference image, and time series analysis.6,19 However, the proposed method has limitations. It is not suitable for multitemporal images with a highly nonlinear relationship, since the method linearly normalizes the low frequency component of the target image. In addition, if the changed ground objects occupy a large proportion of the image, the normalized result will differ visually from the reference image.

6. Conclusions

Multitemporal images exhibit radiation differences due to sensor and atmospheric conditions, even over the same area, creating challenges in multitemporal image processing and analysis. Conventional RRN methods, however, often reduce the differences caused by actual changes of ground objects during normalization, which negatively affects change detection results and time series analysis. To solve this problem, we propose an RRN method based on the wavelet transform and the IR-MAD algorithm. The wavelet transform is applied to separate the high frequency and low frequency components of both the target and reference images. We use the IR-MAD algorithm to normalize the low frequency component of the target image. The inverse wavelet transform is then conducted to reconstruct the radiometrically normalized image. Experimental results show that our WIRMAD method not only achieves radiometric consistency between the target and reference images but also improves the accuracy of change detection.
WIRMAD method applies wavelet transform to preserve high frequency information. In addition, low frequency of the target image is normalized using unchanged pixels selected by the IR-MAD algorithm, thereby improving change detection accuracy, making it more suitable than other RRN methods for change detection at pixel or object level. AcknowledgmentsThis work was supported by the National Natural Science Foundation of China (NO. 41471354) and National Key Research and Development Program of China (NO. 2016YFB0502602). We would like to thank the China Center for Resources Satellite Data and Application for providing the GF-1 image data, and the USGS for providing Landsat-5 image data. The reviewer’s comments are valuable which help much to improve this manuscript, and their efforts are also greatly appreciated. The authors declare no conflict of interest. ReferencesP. Coppin et al.,
“Review Article: Digital change detection methods in ecosystem monitoring: a review,”
Int. J. Remote Sens., 25
(9), 1565
–1596
(2004). https://doi.org/10.1080/0143116031000101675 IJSEDK 0143-1161 Google Scholar
P. M. Teillet, K. Staenz and D. J. Williams,
“Effects of spectral, spatial, and radiometric characteristics on remote sensing vegetation indices of forested regions,”
Remote Sens. Environ., 61
(1), 139
–149
(1997). https://doi.org/10.1016/S0034-4257(96)00248-9 Google Scholar
S. Tuominen and A. Pekkarinen,
“Local radiometric correction of digital aerial photographs for multisource forest inventory,”
Remote Sens. Environ., 89
(1), 72
–82
(2004). https://doi.org/10.1016/j.rse.2003.10.005 Google Scholar
Y. Du, P. M. Teillet and J. Cihlar,
“Radiometric normalization of multitemporal high-resolution satellite images with quality control for land cover change detection,”
Remote Sens. Environ., 82
(1), 123
–134
(2002). https://doi.org/10.1016/S0034-4257(02)00029-9 Google Scholar
P. M. Teillet,
“Image correction for radiometric effects in remote sensing,”
Int. J. Remote Sens., 7
(12), 1637
–1651
(1986). https://doi.org/10.1080/01431168608948958 IJSEDK 0143-1161 Google Scholar
S. Vicenteserrano, F. Perezcabello and T. Lasanta,
“Assessment of radiometric correction techniques in analyzing vegetation variability and change using time series of Landsat images,”
Remote Sens. Environ., 112
(10), 3916
–3934
(2008). https://doi.org/10.1016/j.rse.2008.06.011 Google Scholar
L. Paolini et al.,
“Radiometric correction effects in Landsat multi‐date/multi‐sensor change detection studies,”
Int. J. Remote Sens., 27
(4), 685
–704
(2006). https://doi.org/10.1080/01431160500183057 IJSEDK 0143-1161 Google Scholar
X. Yang and C. P. Lo,
“Relative radiometric normalization performance for change detection from multi-date satellite images,”
Photogramm. Eng. Remote Sens., 66
(8), 967
–980
(2000). Google Scholar
Y. Ding and C. D. Elvidge,
“Comparison of relative radiometric normalization techniques,”
ISPRS J. Photogramm. Remote Sens., 51
(3), 117
–126
(1996). https://doi.org/10.1016/0924-2716(96)00018-4 IRSEE9 0924-2716 Google Scholar
M. M. Rahman et al.,
“A comparison of four relative radiometric normalization (RRN) techniques for mosaicing H-res multi-temporal thermal infrared (TIR) flight-lines of a complex urban scene,”
ISPRS J. Photogramm. Remote Sens., 106 82
–94
(2015). https://doi.org/10.1016/j.isprsjprs.2015.05.002 IRSEE9 0924-2716 Google Scholar
A. N. Bao et al.,
“Comparison of relative radiometric normalization methods using pseudo-invariant features for change detection studies in rural and urban landscapes,”
J. Appl. Remote Sens., 6
(10), 063578
(2012). https://doi.org/10.1117/1.JRS.6.063578 Google Scholar
Q. Xu, Z. Hou and T. Tokola,
“Relative radiometric correction of multi-temporal ALOS AVNIR-2 data for the estimation of forest attributes,”
ISPRS J. Photogramm. Remote Sens., 68 69
–78
(2012). https://doi.org/10.1016/j.isprsjprs.2011.12.008 IRSEE9 0924-2716 Google Scholar
C. Song et al.,
“Classification and change detection using Landsat TM data—when and how to correct atmospheric effects,”
Remote Sens. Environ., 75
(2), 230
–244
(2001). https://doi.org/10.1016/S0034-4257(00)00169-3 Google Scholar
Y. Chen et al.,
“Radiometric cross-calibration of GF-4 PMS sensor based on assimilation of landsat-8 OLI images,”
Remote Sens., 9
(8), 811
(2017). https://doi.org/10.3390/rs9080811 Google Scholar
J.-C. Padró et al.,
“Radiometric correction of simultaneously acquired landsat-7/landsat-8 and sentinel-2A imagery using pseudoinvariant areas (pIA): contributing to the Landsat time series legacy,”
Remote Sens., 9
(12), 1319
(2017). https://doi.org/10.3390/rs9121319 Google Scholar
J. Zhou et al.,
“Atmospheric correction of PROBA/CHRIS data in an urban environment,”
Int. J. Remote Sens., 32
(9), 2591
–2604
(2011). https://doi.org/10.1080/01431161003698443 IJSEDK 0143-1161 Google Scholar
F. G. Hall et al.,
“Radiometric rectification: toward a common radiometric response among multidate, multisensor images,”
Remote Sens. Environ., 35
(1), 11
–27
(1991). https://doi.org/10.1016/0034-4257(91)90062-B Google Scholar
X. Chen, L. Vierling and D. Deering,
“A simple and effective radiometric correction method to improve landscape change detection across sensors and across time,”
Remote Sens. Environ., 98
(1), 63
–79
(2005). https://doi.org/10.1016/j.rse.2005.05.021 Google Scholar
T. A. Schroeder et al.,
“Radiometric correction of multi-temporal Landsat data for characterization of early successional forest patterns in western Oregon,”
Remote Sens. Environ., 103
(1), 16
–26
(2006). https://doi.org/10.1016/j.rse.2006.03.008 Google Scholar
W. Li, K. Sun and H. Zhang,
“Algorithm for relative radiometric consistency process of remote sensing images based on object-oriented smoothing and contourlet transforms,”
J. Appl. Remote Sens., 8
(1), 083607
(2014). https://doi.org/10.1117/1.JRS.8.083607 Google Scholar
Jr. P. S. Chavez,
“An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data,”
Remote Sens. Environ., 24
(3), 459
–479
(1988). https://doi.org/10.1016/0034-4257(88)90019-3 Google Scholar
M. M. Rahman et al.,
“An assessment of polynomial regression techniques for the relative radiometric normalization (RRN) of high-resolution multi-temporal airborne thermal infrared (TIR) imagery,”
Remote Sens., 6
(12), 11810
–11828
(2014). https://doi.org/10.3390/rs61211810 Google Scholar
H. Olsson,
“Regression functions for multitemporal relative calibration of thematic mapper data over boreal forest,”
Remote Sens. Environ., 46
(1), 89
–102
(1993). https://doi.org/10.1016/0034-4257(93)90034-U Google Scholar
J. R. Schott, C. Salvaggio and W. J. Volchok,
“Radiometric scene normalization using pseudoinvariant features,”
Remote Sens. Environ., 26
(1), 1
–16
(1988). https://doi.org/10.1016/0034-4257(88)90116-2 Google Scholar
H. Zhou et al.,
“A new model for the automatic relative radiometric normalization of multiple images with pseudo-invariant features,”
Int. J. Remote Sens., 37
(19), 4554
–4573
(2016). https://doi.org/10.1080/01431161.2016.1213922 IJSEDK 0143-1161 Google Scholar
D. G. Hadjimitsis, C. R. I. Clayton and A. Retalis,
“The use of selected pseudo-invariant targets for the application of atmospheric correction in multi-temporal studies using satellite remotely sensed imagery,”
Int. J. Appl. Earth Obs. Geoinf., 11
(3), 192
–200
(2009). https://doi.org/10.1016/j.jag.2009.01.005 Google Scholar
C. D. Elvidge et al.,
“Relative radiometric normalization of Landsat multispectral scanner (MSS) data using an automatic scattergram-controlled regression,”
Photogramm. Eng. Remote Sens., 61
(10), 1255
–1260
(1995). Google Scholar
V. Sadeghi, H. Ebadi and F. F. Ahmadi,
“A new model for automatic normalization of multitemporal satellite images using artificial neural network and mathematical methods,”
Appl. Math. Modell., 37
(9), 6437
–6445
(2013). https://doi.org/10.1016/j.apm.2013.01.006 AMMODL 0307-904X Google Scholar
I. Olthof et al.,
“Landsat-7 ETM+ radiometric normalization comparison for northern mapping applications,”
Remote Sens. Environ., 95
(3), 388
–398
(2005). https://doi.org/10.1016/j.rse.2004.06.024 Google Scholar
H. U. Changmiao et al.,
“Landsat TM/ETM+ and HJ-1A/B CCD data automatic relative radiometric normalization and accuracy verification,”
J. Remote Sens., 18
(2), 267
–286
(2014). https://doi.org/10.11834/jrs.20143225 Google Scholar
L. Zhang, C. Wu and B. Du,
“Automatic radiometric normalization for multitemporal remote sensing imagery with iterative slow feature analysis,”
IEEE Trans. Geosci. Remote Sens., 52
(10), 6141
–6155
(2014). https://doi.org/10.1109/TGRS.2013.2295263 IGRSD2 0196-2892 Google Scholar
D. P. Roy et al.,
“Multi-temporal MODIS-Landsat data fusion for relative radiometric normalization, gap filling, and prediction of Landsat data,”
Remote Sens. Environ., 112
(6), 3112
–3130
(2008). https://doi.org/10.1016/j.rse.2008.03.009 Google Scholar
A. Langner et al.,
“Spectral normalization of spot 4 data to adjust for changing leaf phenology within seasonal forests in Cambodia,”
Remote Sens. Environ., 143
(5), 122
–130
(2014). https://doi.org/10.1016/j.rse.2013.12.012 Google Scholar
M. C. Hansen et al., “A method for integrating MODIS and Landsat data for systematic monitoring of forest cover and change in the Congo basin,” Remote Sens. Environ., 112(5), 2495–2513 (2008). https://doi.org/10.1016/j.rse.2007.11.012
M. J. Canty, A. A. Nielsen and M. Schmidt, “Automatic radiometric normalization of multitemporal satellite imagery,” Remote Sens. Environ., 91(3–4), 441–451 (2004). https://doi.org/10.1016/j.rse.2003.10.024
A. A. Nielsen, “The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data,” IEEE Trans. Image Process., 16(2), 463–478 (2007). https://doi.org/10.1109/TIP.2006.888195
C. J. B. Mateos et al., “Relative radiometric normalization of multitemporal images,” Int. J. Interact. Multimedia Artif. Intell., 1(3), 53–58 (2010). https://doi.org/10.9781/ijimai.2010.139
M. J. Canty and A. A. Nielsen, “Automatic radiometric normalization of multitemporal satellite imagery with the iteratively re-weighted MAD transformation,” Remote Sens. Environ., 112(3), 1025–1036 (2008). https://doi.org/10.1016/j.rse.2007.07.013
M. Vetterli and C. Herley, “Wavelets and filter banks: theory and design,” IEEE Trans. Signal Process., 40(9), 2207–2232 (1992). https://doi.org/10.1109/78.157221
A. R. Tee et al., “Haze detection and removal in high resolution satellite image with wavelet analysis,” IEEE Trans. Geosci. Remote Sens., 40(1), 210–217 (2002). https://doi.org/10.1109/36.981363
R. H. Bamberger and M. J. T. Smith, “A filter bank for the directional decomposition of images: theory and design,” IEEE Trans. Signal Process., 40(4), 882–893 (1992). https://doi.org/10.1109/78.127960
S. G. Biday and U. Bhosle, “Relative radiometric correction of multitemporal satellite imagery using Fourier and wavelet transform,” J. Indian Soc. Remote Sens., 40(2), 201–213 (2012). https://doi.org/10.1007/s12524-011-0155-6
K. Sun et al., “A new relative radiometric consistency processing method for change detection based on wavelet transform and a low-pass filter,” Sci. China Technol. Sci., 53(S1), 7–14 (2010). https://doi.org/10.1007/s11431-010-3197-z
S. G. Chang, B. Yu and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Trans. Image Process., 9(9), 1532–1546 (2000). https://doi.org/10.1109/83.862633
A. Singh, “Review article: digital change detection techniques using remotely-sensed data,” Int. J. Remote Sens., 10(6), 989–1003 (1989). https://doi.org/10.1080/01431168908903939
I. Molina et al., “Evaluation of a change detection methodology by means of binary thresholding algorithms and informational fusion processes,” Sensors, 12(3), 3528–3561 (2012). https://doi.org/10.3390/s120303528
Biography

Yepei Chen received her BS degree in GIS from Hubei University, Wuhan, China, in 2015. She is currently pursuing her MS and PhD degrees in photogrammetry and remote sensing at the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University. Her research interests include radiometric normalization, change detection, and time series analysis.

Kaimin Sun received his BS, MS, and PhD degrees in photogrammetry and remote sensing from Wuhan University, Wuhan, China, in 1999, 2004, and 2008, respectively. He is currently an associate professor in the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University. His research interests include photogrammetry, object-oriented image analysis, and image change detection.

Deren Li received his PhD in photogrammetry and remote sensing from the University of Stuttgart, Stuttgart, Germany, in 1984. Currently, he is a PhD supervisor with the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, China. He is also an academician of the Chinese Academy of Sciences, the Chinese Academy of Engineering, and the Euro-Asia International Academy of Sciences. His research interests are spatial information science and technology represented by RS, GPS, and GIS.

Ting Bai received her BS degree in GIS from Huazhong Agricultural University, Wuhan, China, in 2014. She is currently pursuing her MS and PhD degrees in photogrammetry and remote sensing at the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University. Her current research interests include remote sensing, feature fusion, machine learning, ensemble learning, and land use and land cover change analysis of long time series.
Wenzhuo Li received his BS and MS degrees in photogrammetry and remote sensing from Wuhan University, Wuhan, China, in 2011 and 2013, respectively. He is currently pursuing his PhD in photogrammetry and remote sensing at the School of Remote Sensing and Information Engineering, Wuhan University. His current research interests include image segmentation, image classification, land use and land cover change detection, and object-oriented image analysis.