Extrapolating shortwave geostationary satellite imagery of clouds into nighttime using longwave observations

8 July 2021
Allyson Rugg, Julie Haggerty, Daniel Adriaansen, William L. Smith
Abstract

The lack of shortwave (SW, visible, and near-infrared) geostationary satellite data at night results in degradation of many weather forecasts and real-time diagnostic products. We present a method to extrapolate SW GOES-16 advanced baseline imager data through the night using nighttime longwave (LW, infrared) observations and the relationships between LW and SW data observed during the previous day. The method is not a forecast since it requires LW nighttime observations, but it can provide continuity through day, night, and satellite terminator hours. To provide performance statistics, the algorithm is applied during the day so the SW extrapolations can be compared to observations. Typical mean absolute errors (MAEs) range from 1.0% to 12.7% reflectance depending on the SW channel. These MAEs can be predicted using a diagnostic metric called 0-h MAE, which quantifies the quality of the algorithm’s input data. In addition to quantitative error statistics, three case studies are presented, including an animation of extrapolated imagery from dusk through dawn. Considerations for future improvements include use of convolutional neural networks and/or object-based extrapolations where mesoscale features are extrapolated individually.

1. Introduction

Many weather forecasts and diagnostic products utilize geostationary satellite data, so the absence of shortwave (SW, visible, and near-infrared) observations at night limits the performance of such forecasts and products.1–6 SW observations are only available during the day because SW radiation originates from the Sun and is reflected or scattered by the atmosphere, clouds, and other surfaces before reaching the satellite. In contrast, longwave (LW, infrared) observations are available at all hours because the radiation is emitted from the atmosphere, clouds, and surfaces directly. Despite the availability of LW observations, it can be more difficult to infer cloud properties such as phase and particle size without the scattering and reflective properties captured in SW data.5,6 This paper presents a methodology to extrapolate SW observations of clouds through the night using the relationships between LW brightness temperatures and SW reflectance observed during the previous day. The algorithm extrapolates SW observations into the night but is not a forecast since the algorithm requires LW observations from the extrapolation valid time as well as from the previous day. Along with details on the algorithm, this paper provides a quantitative error analysis of the SW extrapolations and three case studies illustrating the strengths and limitations of the method. The data used are from the GOES-16 advanced baseline imager7 (ABI) over the contiguous United States (CONUS), but the methods presented could be applied to similar data (e.g., from the ABI on GOES-17 or the Advanced Himawari Imager8 on Himawari-8).

SW extrapolations could simplify algorithm development for weather products since many products rely on two or three separate satellite modules for day, night, and satellite terminator hours.1–3 By applying the extrapolation method during night and terminator hours, a single satellite module could be applied using either observations or extrapolations depending on the solar zenith angle (SZA).

Imagery and animations of SW extrapolations could also be beneficial to human forecasters and scientists. One problem in hurricane monitoring and forecasting is a phenomenon known as “sunrise surprise,” which occurs when the first morning SW imagery becomes available and the observed low-level circulation is different from the previous evening.9 It is generally easier to see low-level circulation in SW imagery because thin, upper-level cirrus clouds are more transparent at these wavelengths, allowing the lower clouds to show through.10 This problem can be partially mitigated using the Day/Night Band (DNB) on the visible infrared imaging radiometer suite (VIIRS) because the instrument can detect very small amounts of SW radiation reflected off clouds from starlight, nightglow, and artificial (e.g., city) lights.10 VIIRS flies on low-Earth orbiting (LEO) satellites, however, so it cannot provide continuous monitoring of the circulation such as GOES-16 and other geostationary satellites can. The future Geostationary Extended Observations satellite may include DNB imagery but not until the 2030s.11 In the meantime, nighttime extrapolations of GOES-16 SW imagery may be a valuable and complementary resource for monitoring storms through the night.

The extrapolation algorithm is adapted from previous work downscaling and extrapolating LEO sounder data using imager data from LEO and geostationary satellites. LEO imagers have high spatial resolution but few spectral channels (about 1 km and 10 channels), whereas sounders have low spatial resolution but numerous spectral channels (about 10 km and 1000 channels). By assuming the relationships between the properties observed by the sounder and imager are scale-invariant, high-resolution imagery can be constructed for sounder-only channels.12,13 Assuming that the relationships are also time-invariant allows the sounder data to be extrapolated forward and backward in time with the help of 5-min GOES-16 ABI scans.14,15 The method presented herein is similar, but instead of using ABI data to extrapolate sounder data, we use the LW subset of ABI channels to extrapolate the SW channels through the night.

In Sec. 2, we describe the GOES-16 ABI data and preprocessing used. The extrapolation algorithm and its optimization are described in Sec. 3, along with the methods used in evaluation. A quantitative error analysis is presented in Sec. 4, including a method to predict the magnitude of extrapolation errors using the training (daytime) data. Three case studies are shown in Sec. 5. A discussion of results is presented in Sec. 6, including suggestions for future research and a description of similar methods being developed to improve satellite cloud and radiative flux analyses at night for the NASA Clouds and the Earth’s Radiant Energy System (CERES).16–18 A summary and concluding remarks are given in Sec. 7.

2. Data and Preprocessing

2.1. GOES-16 Observations

The GOES-16 ABI has 9 LW channels (8 through 16) that capture only terrestrial emissions and 6 SW channels (1 through 6) that capture only reflected and scattered solar radiation. All 9 of these terrestrial-only LW channels are considered for use in the extrapolation algorithm and all 6 solar-only SW channels are extrapolated. We neither use nor extrapolate ABI channel 7 (i.e., the SW window band) since both reflected solar and emitted terrestrial radiation contribute to the signal, but possible future applications to this channel are considered in Sec. 6.6.2. The central wavelengths for all 16 ABI channels are listed in Table 1 along with common nicknames and brief descriptions of each channel’s application(s).7,19–35

Table 1

List of ABI channels, their central wavelengths, common nicknames, applications, and use in the final extrapolation algorithm.7,19–35

| Channel (band) number7 | Central wavelength (μm)7 | Common nickname19 | Common application(s) | Use in algorithm |
| --- | --- | --- | --- | --- |
| 1 | 0.47 | Blue | Aerosols, dust, haze, and smoke19,20 | SW training |
| 2 | 0.64 | Red | Clouds and surface features19 | SW training |
| 3 | 0.86 | Veggie | Land surface and vegetation19 | SW training |
| 4 | 1.37 | Cirrus | Thin cirrus, high clouds19 | SW training |
| 5 | 1.6 | Snow/ice | Cloud phase and fire hotspots19,21 | SW training |
| 6 | 2.2 | Cloud particle size | Cloud phase and fire temperature19 | SW training |
| 7 | 3.9 | SW window | Fog, low clouds, and fire temperature19,22,23 | None |
| 8 | 6.2 | Upper-level water vapor | Upper tropospheric winds, jet streaks, temperature, and moisture profiles19,24–28 | None |
| 9 | 6.9 | Mid-level water vapor | Mountain/gravity waves, temperature, and moisture profiles19,24–27 | None |
| 10 | 7.3 | Lower-level water vapor | Stratospheric intrusions, lake effect snow, temperature, and moisture profiles19,24–30 | None |
| 11 | 8.4 | Cloud-top phase | Cloud phase19 | LW input |
| 12 | 9.6 | Ozone | Dynamics near the tropopause and upper jet streaks19 | None |
| 13 | 10.3 | Clean LW window | Atmospheric moisture, cloud top temperature, and particle sizes19,31 | LW input |
| 14 | 11.2 | LW window | Atmospheric moisture, cloud top temperature, particle sizes, and cloud top phase19,25,28 | LW input |
| 15 | 12.3 | Dirty LW window | Atmospheric moisture, cloud top temperature, particle sizes, and cloud top phase19,29 | LW input |
| 16 | 13.3 | CO2 LW | Tropopause and cloud top heights and derived winds19,32–35 | LW input |

Since this work is primarily motivated by inferring and visualizing cloud properties in SW data, we do not attempt to extrapolate SW reflectance in clear sky conditions. The extrapolation method presented could be applied to clear sky conditions but would likely have different optimal parameters and error characteristics than those presented herein. The operational level 2 binary clear sky mask is used to isolate cloudy pixels in both observations and extrapolations.4

The data used are mostly from CONUS scans, which are cropped to latitudes between 24°N and 50°N and longitudes between 125°W and 66°W for computational efficiency. For one case study shown in Sec. 5, mesoscale scans are used instead of CONUS scans to generate an animation because they provide data every minute, as opposed to every 5 min.

Since only LW observations are available at night and all LW channels have a 2-km resolution (at nadir), SW extrapolations also have a 2-km resolution. Four SW channels have native resolutions higher than 2 km: channels 1, 3, and 5 have 1-km resolutions and channel 2 has a 0.5-km resolution. Observations from these high-resolution SW channels are averaged onto the 2-km resolution of all the other ABI channels. The bidirectional reflectance (hereafter simply “reflectance”) from SW channels is normalized to overhead Sun by dividing by the cosine of the SZA.
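As an illustration of this preprocessing, the sketch below block-averages a high-resolution SW channel onto the 2-km LW grid and normalizes reflectance to overhead Sun. It is a minimal sketch assuming the cropped arrays tile evenly into the averaging blocks; function and variable names are illustrative, not from the operational processing.

```python
import numpy as np

def block_average(field, factor):
    # Average a 2-D array over non-overlapping factor x factor blocks,
    # e.g., factor=4 maps 0.5-km channel 2 onto the 2-km LW grid.
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0  # assumed tile-aligned crop
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def normalize_to_overhead_sun(reflectance, sza_deg):
    # Normalize bidirectional reflectance to overhead Sun by dividing by
    # the cosine of the solar zenith angle (degrees).
    return reflectance / np.cos(np.deg2rad(sza_deg))
```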

Except for one case study, all of the data used are from daytime hours so the extrapolations can be compared to SW observations. SW observations are impacted by the position of the Sun, but LW observations are not. As a result, SW extrapolations are dependent on the illumination conditions present in the training SW observations. The times of day used for training the extrapolation algorithm are chosen carefully to account for this dependency. Impacts of illumination and viewing geometry are demonstrated and discussed in Secs. 4 and 6.2, respectively.

GOES-16 ABI CONUS scans from August 6, 2018, to August 5, 2019, at 1500, 1700, 1800, and 2100 UTC are used to produce the quantitative error analysis presented in Sec. 4. The precise times of observations are all within 10 min of the top of the hour. The subsolar points at 1500, 1700, 1800, and 2100 UTC on the 2018 autumnal equinox are shown in Fig. 1 along with the GOES-16 subsatellite point (0°N, 75.2°W). 1700 UTC is used because it is approximately solar noon for the GOES-16 satellite; as a result, shadows in observed SW imagery should be minimized at this time of day. 1800 UTC is approximately solar noon over the central CONUS, therefore, providing the most consistent illumination across the domain. The 1500 and 2100 UTC times are used because they correspond to morning and afternoon, respectively, but are still daytime for most of the CONUS on the winter solstice. 2100 UTC is unique among these times because the Sun is farther from the GOES-16 satellite than during the other times (Fig. 1). This results in a high Sun-satellite relative azimuth angle (RAA) (>60  deg).

Fig. 1

Map of the Western Hemisphere with the subsatellite point for GOES-16 marked (blue diamond), along with the subsolar points for 1500, 1700, 1800, and 2100 UTC (yellow stars) on the 2018 autumnal equinox (22 September).


Because part of the CONUS is dark at 1500 and 2100 UTC in the winter, we discard SW observations where the SZA exceeds 82 deg. The threshold of 82 deg was chosen because it is used to distinguish between day/twilight and night for the GOES-16 level 2 cloud optical and microphysical products.5 For one nighttime case study, observed channel 2 reflectance is shown for SZA above 82 deg to demonstrate the transition through day, the terminator, and night. The high SZA observations in this case are used for visualization only; no such data are used to produce or evaluate the extrapolations.

2.2. VIIRS DNB Observations

Nighttime SW (0.705  μm) observed radiance from the VIIRS DNB is shown for one case study in Sec. 5.3. Nighttime SW observations from the VIIRS DNB are possible given the sensor’s high sensitivity in low-light environments and the illumination of clouds by moonlight, starlight, and/or anthropogenic sources.10,36 The radiance data used are from the 500-m resolution Daily At-sensor Top-of-Atmosphere Nighttime Lights Product (VNP46A1), which is publicly available from the Level-1 and Atmosphere Archive and Distribution System Distributed Active Archive Center.36 Radiance, as opposed to reflectance, is shown for the VIIRS DNB since it is difficult to quantify the amount of incoming radiance, and therefore the fraction that is reflected.

3. Methods

3.1. Extrapolation Algorithm

Figure 2 shows a schematic for the algorithm that extrapolates daytime SW data from some training time out to an extrapolation time when LW observations are already available. Note the algorithm is not a forecast since the LW observations from both the training and extrapolation times are required. For the first step in the algorithm, a single SW extrapolation pixel is chosen and denoted (x,y). The LW brightness temperatures at (x,y) are then compared to the LW brightness temperatures at every training pixel using a cost function. In step two, the three training pixels with the smallest cost are identified using a k-d tree, which is simply a computationally efficient way to find the minima of a large set.37 These three training pixels are denoted (x1,y1), (x2,y2), and (x3,y3). In the third step, the SW reflectance from the training pixels (x1,y1), (x2,y2), and (x3,y3) is averaged for each SW channel. Those averages then become the extrapolated SW reflectance for the extrapolation pixel (x,y). This process (steps one through three) is repeated for all extrapolation pixels.

Fig. 2

Illustration of the extrapolation algorithm.


Figure 2 shows the process using only three training pixels per extrapolation pixel, but in practice, this number, denoted N, is a tunable parameter. Also not shown in Fig. 2 are two channel 13 (10.3  μm) gradient terms, which create more realistic texture and shadows in the extrapolated images using spatial context. Horizontal gradients in channel 13 brightness temperature are used as a proxy for gradients in cloud top height, which impact cloud texture in SW observations. The channel 13 gradients at a pixel (x,y) are defined as

Eq. (1)

$$\frac{\delta B_{13}(x,y)}{\delta y} = B_{13}(x,y+1) - B_{13}(x,y-1),$$

Eq. (2)

$$\frac{\delta B_{13}(x,y)}{\delta x} = B_{13}(x+1,y) - B_{13}(x-1,y),$$
where $B_{13}$ is the brightness temperature of channel 13 at a given pixel.
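A minimal NumPy sketch of Eqs. (1) and (2) follows, assuming arrays indexed as [y, x]; the handling of domain edges (left at zero here) is our assumption, as it is not specified in the text.

```python
import numpy as np

def channel13_gradients(b13):
    # Centered differences of channel 13 brightness temperature [Eqs. (1), (2)].
    grad_y = np.zeros_like(b13)
    grad_x = np.zeros_like(b13)
    grad_y[1:-1, :] = b13[2:, :] - b13[:-2, :]  # B13(x, y+1) - B13(x, y-1)
    grad_x[:, 1:-1] = b13[:, 2:] - b13[:, :-2]  # B13(x+1, y) - B13(x-1, y)
    return grad_y, grad_x
```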

Three functional forms for the cost function were considered: the city block, Euclidean, and Chebychev forms, defined, respectively, as

$$F_{\text{city block}}(x,y,x',y') = \sum_{c=8}^{16} \left| B_{c,\text{ex}}(x,y) - B_{c,\text{tr}}(x',y') \right| + \left| \frac{\delta B_{13,\text{ex}}(x,y)}{\delta y} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta y} \right| + \left| \frac{\delta B_{13,\text{ex}}(x,y)}{\delta x} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta x} \right|,$$

$$F_{\text{Euclidean}}(x,y,x',y') = \sqrt{\sum_{c=8}^{16} \left( B_{c,\text{ex}}(x,y) - B_{c,\text{tr}}(x',y') \right)^2 + \left( \frac{\delta B_{13,\text{ex}}(x,y)}{\delta y} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta y} \right)^2 + \left( \frac{\delta B_{13,\text{ex}}(x,y)}{\delta x} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta x} \right)^2},$$

$$F_{\text{Chebychev}}(x,y,x',y') = \max\left( \max_{c=8,\ldots,16} \left| B_{c,\text{ex}}(x,y) - B_{c,\text{tr}}(x',y') \right|, \left| \frac{\delta B_{13,\text{ex}}(x,y)}{\delta y} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta y} \right|, \left| \frac{\delta B_{13,\text{ex}}(x,y)}{\delta x} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta x} \right| \right),$$
where $B_c$ is the brightness temperature of channel $c$ and subscripts tr and ex indicate the training and the extrapolation times, respectively. The indices $(x,y)$ refer to the location of the extrapolation pixel, whereas $(x',y')$ refer to the location of the training pixel being evaluated for a possible match, as illustrated in Fig. 2.
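For a single pixel pair, the three forms reduce to standard vector distances over the stacked inputs (LW brightness temperatures plus the two gradient terms); a minimal sketch with illustrative names:

```python
import numpy as np

def cost(feat_ex, feat_tr, form="city block"):
    # feat_ex, feat_tr: 1-D stacks of LW brightness temperatures and the two
    # channel 13 gradient terms for one extrapolation/training pixel pair.
    d = feat_ex - feat_tr
    if form == "city block":
        return np.sum(np.abs(d))
    if form == "Euclidean":
        return np.sqrt(np.sum(d ** 2))
    if form == "Chebychev":
        return np.max(np.abs(d))
    raise ValueError(form)
```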

Different values of N and the three cost functions above were evaluated using 24-h extrapolations from 1800 to 1800 UTC the following day. Twenty-four-hour extrapolations were used so the Sun positions at both the training and extrapolation times were nearly identical. This is necessary because LW channels are unaffected by Sun position and therefore cannot be used to extrapolate changes (e.g., shadows) in SW channels due to movement of the Sun. Extrapolations beginning and ending at 1800 UTC in particular were used because 1800 UTC is approximately solar noon over the central CONUS, therefore providing the most uniform illumination across the domain.

The root-mean-squared error (RMSE) of all pixels in all 6 SW channels was used to compare the extrapolations to observations from the extrapolation time. RMSE is defined as

Eq. (3)

$$\mathrm{RMSE} = \sqrt{\frac{1}{6\,n_p} \sum_{c=1}^{6} \left[ \sum_{\text{pixels}} \left( A_{c,\text{ex}} - A_{c,\text{obs}} \right)^2 \right]},$$
where Ac is the reflectance of SW channel c, subscripts ex and obs indicate the extrapolated and observed values valid at the extrapolation time, and np is the number of cloudy pixels in the scene (clear sky pixels are not extrapolated and are therefore omitted from the calculation).
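For concreteness, Eq. (3) can be computed as below, assuming channel-first reflectance stacks and a shared boolean cloud mask (shapes and names are ours):

```python
import numpy as np

def sw_rmse(a_ex, a_obs, cloudy):
    # a_ex, a_obs: (6, ny, nx) extrapolated and observed SW reflectance;
    # cloudy: (ny, nx) boolean mask; clear sky pixels are omitted [Eq. (3)].
    diff = a_ex[:, cloudy] - a_obs[:, cloudy]
    return np.sqrt(np.mean(diff ** 2))
```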

Algorithm parameters are optimized by systematically adjusting them one at a time and comparing RMSE from 30 randomly selected 24-h extrapolations valid at 1800 UTC. Using 30 randomly selected times reduces processing time and leaves a larger independent dataset for evaluation. All three cost functions are tested on the 30 24-h 1800 UTC extrapolations using N values of 1, 10, 20, 30, 40, 50, and 60. The city block function with N=50 produced the smallest RMSE (not shown), so this configuration was used for the next step in algorithm development: optimization of LW input channels (Bc).

Systematically removing LW input channels one at a time from the cost function reveals that some channels are detrimental to the extrapolation. Removing LW channels 8, 9, 10, and 12 (i.e., the water vapor and ozone bands) from the cost function improves results: RMSE for the 30 24-h extrapolations decreases from 12.1% to 11.4% reflectance when all four of these channels are omitted. Of the 5 remaining LW channels (11 and 13 to 16), further removal of channel 16 (i.e., the CO2 band) has the largest impact on RMSE, increasing it from 11.4% to 12.2% reflectance. Removing channel 11 (i.e., the cloud top phase band) has the second largest impact, increasing RMSE from 11.4% to 11.9% reflectance.

Omitting both channel 13 gradient terms decreases RMSE from 11.4% to 11.3% reflectance, but we believe the texture improvement to extrapolated images justifies their inclusion. For example, Fig. 3 shows channel 2 (i.e., the red visible band) observations [rescaled to 2 km (a)] and extrapolations with (b) and without (c) the channel 13 gradient terms for some clouds in the northwest CONUS at 1800 UTC on February 27, 2019 (clear sky pixels appear black because they are not extrapolated). The RMSE for the extrapolation with the channel 13 gradient terms is slightly larger, at 20.0% as opposed to 19.1% reflectance. Though the errors on a pixel-by-pixel level are slightly larger, the shadows and bubbly texture of the observations are only captured if the gradients are included, especially in the areas marked in red in Fig. 3.

Fig. 3

Observed ABI channel 2 (0.64  μm) reflectance (a) and 24-h extrapolated ABI channel 2 (0.64  μm) reflectance including (b) and omitting (c) the ABI channel 13 (10.3  μm) gradient terms. All images are valid at 1800 UTC on February 27, 2019. Clear sky pixels appear black and red outlines highlight 3 areas where the ABI channel 13 (10.3  μm) gradient terms resulted in much more realistic cloud texture.


A detailed analysis of local texture is beyond the scope of this paper, but the overall level of variability within each of the images in Fig. 3 can be quantified using the concept of entropy from information theory. To compute the entropy of the images, the reflectance is converted to an 8-bit unsigned integer (by rounding), and the distribution of the resulting integers is computed (omitting clear sky pixels). Then the entropy is defined as

$$\mathrm{entropy} = -\sum_{i=0}^{255} P_i \log_2(P_i),$$
where Pi is the proportion of pixels with a rounded value of i.38,39 Note that while the summation goes to 255, no reflectance values reach this value (though reflectance can exceed 100% due to anisotropic scattering). The entropy of the observed image [Fig. 3(a)] is 5.738 and the entropies of the extrapolated images with and without the channel 13 gradient terms are 5.735 and 5.533 (99.9% and 96.4% of the observed entropy), respectively.
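The entropy computation can be reproduced as in the sketch below; rounding to 8-bit integers and omitting clear sky pixels follow the text, while clipping to the [0, 255] range is our assumption for any out-of-range values.

```python
import numpy as np

def image_entropy(reflectance_pct, cloudy):
    # Round cloudy-pixel reflectance (%) to 8-bit unsigned integers and
    # compute the entropy of the resulting distribution.
    vals = np.clip(np.rint(reflectance_pct[cloudy]), 0, 255).astype(np.uint8)
    p = np.bincount(vals, minlength=256) / vals.size
    p = p[p > 0]  # zero-probability bins contribute nothing
    return -np.sum(p * np.log2(p))
```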

When channel 13 gradient terms are used, the similar-to-observed entropy results in more realistic texture, which could be advantageous for human forecasters and the visualization of extrapolated images. For this reason, most of the results shown in Secs. 4 and 5 use the channel 13 gradient terms. In some places, results omitting these gradient terms are also shown in case users choose to omit them in favor of smaller errors. There are other obvious differences between the observed and extrapolated imagery in Fig. 3, such as the darker-than-observed reflectance in the northeast portion of both extrapolations. The cause of this is discussed in Sec. 5.2, where a deeper analysis of this same case is presented.

The final extrapolation algorithm can be expressed mathematically as

Eq. (4)

$$A_{c,\text{ex}}(x,y) = \frac{1}{50} \sum_{k=1}^{50} A_{c,\text{tr}}(x_k, y_k); \quad c = 1, 2, \ldots, 6,$$
where $(x_k,y_k)$ are the sorted indices of $A_{c,\text{tr}}$ such that
$$F(x,y,x_1,y_1) \le F(x,y,x_2,y_2) \le \cdots \le F(x,y,x_{50},y_{50}),$$
and F is the city block cost function, defined as

Eq. (5)

$$F_{\text{city block}}(x,y,x',y') = \sum_{c} \left| B_{c,\text{ex}}(x,y) - B_{c,\text{tr}}(x',y') \right| + \left| \frac{\delta B_{13,\text{ex}}(x,y)}{\delta y} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta y} \right| + \left| \frac{\delta B_{13,\text{ex}}(x,y)}{\delta x} - \frac{\delta B_{13,\text{tr}}(x',y')}{\delta x} \right|,$$
for $c = 11, 13, 14, 15, 16$.
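Putting Eqs. (4) and (5) together, a minimal sketch of the full matching step is shown below. It uses SciPy's cKDTree with the Minkowski p=1 option for the city block metric; stacking the five LW input channels and the two channel 13 gradient terms into a seven-element feature vector per pixel follows the text, while array shapes and names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def extrapolate_sw(lw_tr, sw_tr, lw_ex, cloudy_tr, cloudy_ex, n_matches=50):
    # lw_tr, lw_ex: (7, ny, nx) stacks of B11 and B13-B16 plus the two
    # channel 13 gradient terms; sw_tr: (6, ny, nx) training SW reflectance.
    feats_tr = lw_tr[:, cloudy_tr].T   # (n_train, 7) feature vectors
    feats_ex = lw_ex[:, cloudy_ex].T   # (n_extrap, 7)
    # Querying with p=1 ranks training pixels by the city block cost [Eq. (5)].
    _, idx = cKDTree(feats_tr).query(feats_ex, k=n_matches, p=1)
    # Average the SW reflectance of the n_matches best matches [Eq. (4)].
    return sw_tr[:, cloudy_tr][:, idx].mean(axis=2)   # (6, n_extrap)
```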

Several additional algorithmic adjustments were considered but did not improve results. Such methods involved: including brightness temperature differences, including surface reflectance information, normalizing each LW brightness temperature by the range or standard deviation of the training data, and including a measure of geographic distance to favor matching training and extrapolation pixels physically near one another.

3.2. Evaluation Methods

Though the algorithm was developed using RMSE, the mean absolute error (MAE) is used when discussing error statistics and case studies in the following sections. MAE is used instead of RMSE to avoid evaluating the algorithm with the same metric on which it was developed. MAE is also more easily interpreted since it is a simple mean, rather than weighted towards higher errors like RMSE. MAE for the SW channel c is defined as

Eq. (6)

$$\mathrm{MAE}_c = \frac{1}{n_p} \sum_{\text{pixels}} \left| A_{c,\text{ex}} - A_{c,\text{obs}} \right|,$$
where np is the number of cloudy pixels in the scene (clear sky pixels are omitted). In some situations, we discuss the mean or a percentile of MAE for a given group of extrapolations (e.g., a month of the year or time of day). In such cases, MAE is computed as shown above for each individual extrapolation time and SW channel. The mean MAE of a channel is then the mean of the MAEs for each extrapolation time, and percentiles of MAE are computed analogously.

To assess the quality of the training data, we compute a 0-h MAE for each SW channel. A 0-h MAE is the result of applying the algorithm with the same extrapolation and training times so the extrapolation and training LW observations are identical. The 0-h extrapolated SW reflectance at a pixel (x,y) is, therefore, the result of averaging the original observed SW reflectance at (x,y) with 49 other SW pixels with similar LW characteristics (as determined by the cost function, F). 0-h MAE measures how much the SW reflectance for a given channel changes when averaged with 49 other pixels that are similar in the LW. If 0-h MAE is high, it means similar LW brightness temperatures and channel 13 gradients are associated with a wide variety of reflectance values for that SW channel. As demonstrated in Sec. 4, 0-h MAEs of training data are highly correlated with the MAEs of the resulting extrapolations. 0-h MAE can, therefore, be used to optimize the choice of training data.
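In code, 0-h MAE follows directly by passing the same scene as both the training and extrapolation input to the sketch from Sec. 3.1 (reusing the illustrative extrapolate_sw above):

```python
import numpy as np

def zero_hour_mae(lw, sw, cloudy, n_matches=50):
    # Run the algorithm with identical training and extrapolation scenes and
    # compare against the original SW observations [per-channel Eq. (6)].
    sw_0h = extrapolate_sw(lw, sw, lw, cloudy, cloudy, n_matches)
    return np.mean(np.abs(sw_0h - sw[:, cloudy]), axis=1)  # MAE per channel
```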

4. Quantitative Error Analysis

Figure 4 shows the mean MAE for all 24-h 1800 UTC extrapolations in a given month. These monthly MAEs display a seasonal pattern with all channels having the largest MAEs in February and lowest MAEs in June or July. Seasonal amplitudes range from 0.4% to 4.0% reflectance for channels 4 and 1, respectively. Channel 3 (4) has the highest (lowest) MAE during all months, ranging from 10.6% to 12.7% reflectance (1.0% to 1.5% reflectance) in December (July) and February (January), respectively. The difference in error magnitudes for different channels can be explained by the distribution of actual reflectance values. Channels 1, 2, and 3 (i.e., the blue, red, and veggie bands, respectively) exhibit the widest range of reflectance (not shown) and therefore have the potential for larger differences between observed and extrapolated reflectance. In contrast, channel 4 (i.e., the cirrus band) is much darker because strong water vapor absorption at this wavelength attenuates the signal above cloud top.19 The smaller variance in channel 4 reflectance compared to other channels makes it easier to extrapolate accurately. That channel 4 is most sensitive to high clouds also makes it easier to extrapolate since the height of clouds has a strong correlation with LW brightness temperatures.19

Fig. 4

Monthly MAE for each SW channel for 24-h, 1800 UTC extrapolations.


Figure 4 shows only results from 1800 UTC 24-h extrapolations, but MAE also varies by time of day. Boxplots in Fig. 5 summarize the distribution of all 24-h MAEs for extrapolations from 1500, 1700, 1800, and 2100 UTC. MAEs for 2100 UTC extrapolations are highest, with median values 0.5% to 4.0% reflectance higher than other times. Median MAEs for 1700 UTC are also up to 1.0% reflectance higher than those for 1500 and 1800 UTC, which have similar medians (within 0.3% reflectance for all channels).

Fig. 5

24-h MAE by hour of the day for each SW channel. Red lines indicate the median, blue boxes indicate the 25th and 75th percentiles, and the dashed lines extend to the minimum and maximum values.


24-h MAE varies depending on the channel, month, and time of day but is highly correlated with the 0-h MAE of the training data for the same channel. The Pearson correlation coefficient between the 24- and 0-h MAEs is 0.97 even when all channels, months, and times of day are considered. The ability to predict extrapolation MAE from 0-h MAE could give users a quantitative measure of confidence in the extrapolations and could also be used to optimize the choice of training data. To provide such a prediction, we used a least-squares linear regression and the assumption that a perfect training image (0-h MAE=0) would result in a perfect extrapolation (24-h MAE=0). The resulting formula for predicting 24-h MAE (MAE24) given the 0-h MAE (MAE0) of the same channel c is

$$\mathrm{MAE}_{c,24} = 1.29\,\mathrm{MAE}_{c,0}.$$

The 0- and 24-h MAE pairs used to obtain this relationship are shown in Fig. 6(a) along with the regression line (solid) and dashed lines indicating a deviation of 25% from the regression. Of all 24-h MAEs, 94.4% are within 25% of the predicted value and 44.9% (55.1%) lie above (below) the regression line.

Fig. 6

(a) 24-h MAE as a function of 0-h MAE for all 6 SW GOES-16 channels and valid times of 1500, 1700, 1800, and 2100 UTC. Solid line and equation show the least-squares line of best fit, and dashed lines indicate where 24-h MAE is within 25% of the regression. (b) Three-dimensional surface plot showing the generalized function relating extrapolation MAE to 0-h MAE and extrapolation length (L).


The regression for 24-h MAEs can be generalized to any extrapolation length L (in hours) as

Eq. (7)

$$\mathrm{MAE}_{c,L} = \mathrm{MAE}_{c,0}\,(1 + 0.0119\,L).$$

This general form represents a linear relationship between the 0-h and extrapolation MAE where the slope of the line increases with extrapolation length and is shown in Fig. 6(b). The increase in slope (associated with the $0.0119\,L\,\mathrm{MAE}_{c,0}$ term) with extrapolation length is the result of the relationships between LW and SW observations changing over time as clouds evolve. If the channel 13 gradient terms are omitted from the cost function (for both the extrapolation and 0-h MAE), the resulting relationship is

Eq. (8)

$$\mathrm{MAE}_{c,L} = \mathrm{MAE}_{c,0}\,(1 + 0.0154\,L).$$
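Both regressions reduce to a one-line prediction; a sketch (names are ours):

```python
def predicted_mae(mae_0h, length_h, with_gradients=True):
    # Predict extrapolation MAE from the 0-h MAE of the training data
    # [Eq. (7) with the channel 13 gradient terms, Eq. (8) without].
    slope = 0.0119 if with_gradients else 0.0154
    return mae_0h * (1.0 + slope * length_h)
```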

We use only 24-h extrapolations in these regressions to control for differences in Sun-satellite geometry in the training and verification SW observations. This is necessary because SW observations are dependent on the position of the Sun but LW observations are not. The channel 13 gradient terms help reproduce shadows in the extrapolations, but the placement of those shadows is still dependent on the Sun position at the training time. Comparisons of SW extrapolations and observations valid at times of day other than the training time contain an error term associated with the change in Sun-satellite geometry that would not apply to nighttime extrapolations. With this in mind, Fig. 7 shows the extrapolated MAE as a function of 0-h MAE for different extrapolation lengths. Red dots indicate that the training and/or extrapolation time is 2100 UTC, whereas blue dots represent all other extrapolations. Extrapolations involving 2100 UTC are highlighted because the RAA is large (>60  deg) (Fig. 1) and extrapolation errors tend to be high (Fig. 5). As in Fig. 6(a), the solid lines in Fig. 7 represent the predicted MAE as calculated using Eq. (7) and the dashed lines indicate a deviation of 25% from the prediction. Table 2 summarizes the training and extrapolation times included for each extrapolation length, the correlation between 0-h training MAE and extrapolation MAE, the percent of points above and below the solid prediction line, and the percent of points within 25% of the prediction (within the dashed lines). Values in parentheses show the same statistics for extrapolations where the channel 13 gradient terms are omitted and Eq. (8) is used.

Fig. 7

Extrapolated MAE as a function of 0-h MAE for different extrapolation lengths. Red markers indicate extrapolations where the training and/or output time is 2100 UTC, whereas blue markers show all other times. The solid black line indicates the extrapolated MAE using the linear regression model [Eq. (7)]. Dashed black lines indicate where the extrapolated MAE is within 25% of the expected value from the regression. The percentage of points above and below the solid line, and the percentage within the dashed lines are listed in Table 2 along with Pearson correlation coefficients (columns 3 to 6).


Table 2

List of training and output times for the various extrapolation lengths shown in Fig. 7 along with statistics comparing observed extrapolation MAEs with the values expected from the regression (columns 3 to 5). Column 6 lists the Pearson’s correlation coefficient between the extrapolation MAE and the 0-h MAE of the training image. Values in parentheses represent results when the two channel 13 gradient terms are omitted from the extrapolation cost function.

| Extrapolation length (h) | Times (UTC) | Errors larger than expected (%) | Errors smaller than expected (%) | Errors within 25% of expected (%) | Correlation between extrapolated MAE and 0-h training MAE |
| --- | --- | --- | --- | --- | --- |
| 1 | 1700 to 1800 | 94.7 (98.8) | 5.3 (1.2) | 99.3 (98.5) | 0.97 (0.97) |
| 3 | All | 99.6 (99.9) | 0.4 (0.1) | 30.1 (24.8) | 0.85 (0.83) |
| | 1500 to 1800 | 100.0 (99.9) | 0.0 (0.1) | 41.5 (38.8) | 0.93 (0.93) |
| | 1800 to 2100 | 99.3 (99.9) | 0.7 (0.1) | 18.9 (10.8) | 0.80 (0.81) |
| 4 | 1700 to 2100 | 99.7 (100.0) | 0.3 (0.0) | 18.4 (8.6) | 0.82 (0.83) |
| 6 | 1500 to 2100 | 100.0 (100.0) | 0.0 (0.0) | 7.5 (5.9) | 0.73 (0.75) |
| 18 | 2100 to 1500 | 96.6 (96.8) | 3.4 (3.2) | 26.3 (37.3) | 0.95 (0.94) |
| 21 | All | 85.6 (83.9) | 14.4 (16.1) | 77.5 (76.4) | 0.96 (0.96) |
| | 1800 to 1500 | 82.1 (83.1) | 17.9 (16.9) | 81.3 (77.1) | 0.96 (0.95) |
| | 2100 to 1800 | 89.1 (84.8) | 10.9 (15.2) | 73.7 (75.6) | 0.96 (0.96) |
| 22 | 1700 to 1500 | 71.1 (77.3) | 28.9 (22.7) | 82.1 (75.6) | 0.94 (0.94) |
| 24 | All | 44.9 (46.9) | 55.1 (53.1) | 94.4 (93.8) | 0.95 (0.95) |
| | 1500 to 1500 | 53.5 (53.6) | 46.5 (46.4) | 92.6 (92.9) | 0.96 (0.97) |
| | 1700 to 1700 | 45.5 (50.5) | 54.5 (49.5) | 96.1 (95.1) | 0.96 (0.96) |
| | 1800 to 1800 | 39.6 (44.1) | 60.4 (55.9) | 97.1 (96.6) | 0.98 (0.97) |
| | 2100 to 2100 | 41.2 (39.7) | 58.8 (60.3) | 91.8 (90.6) | 0.93 (0.93) |
| 48 | All | 13.6 (10.8) | 86.4 (89.2) | 83.4 (75.1) | 0.94 (0.94) |
| | 1500 to 1500 | 17.9 (12.1) | 82.1 (87.9) | 88.7 (82.3) | 0.95 (0.95) |
| | 1700 to 1700 | 11.0 (9.7) | 89.0 (90.3) | 85.7 (80.8) | 0.95 (0.95) |
| | 1800 to 1800 | 9.1 (7.6) | 90.9 (92.4) | 83.7 (75.6) | 0.97 (0.96) |
| | 2100 to 2100 | 16.5 (13.8) | 83.5 (86.2) | 75.7 (61.9) | 0.91 (0.91) |

The deviation from the predicted MAE is smallest for 1-h extrapolations—99.3% of actual MAEs are within 25% of the predicted value. This value is even larger than that for the 24-h extrapolations, on which the prediction formula is based. The observed MAEs in the 1-h extrapolations are larger than expected 94.7% of the time, however. Over 99% of observed MAEs for 3-, 4-, and 6-h extrapolations are also larger than expected and the percent of points within 25% of their predicted values decreases to 30.1%, 18.4%, and 7.5%, respectively (Table 2 and Fig. 7). These extrapolation lengths (3, 4, and 6 h) all include output times of 2100 UTC. In the case of the 3-h extrapolations, 1500 to 1800 UTC MAEs fall within 25% of the predicted MAE 41.5% of the time, more than twice as often as 1800 to 2100 UTC MAEs (Table 2 and Fig. 7). 21-h extrapolations also include two sets of times (1800 to 1500 and 2100 to 1800 UTC), and the extrapolations including 2100 UTC are again more likely to be larger than predicted (89.1% compared to 82.1%) and less likely to be within 25% of the predicted value (73.7% compared to 81.3%) (Table 2 and Fig. 7).

Of the extrapolations shown in Fig. 7, only the 48-h extrapolations (bottom right) have the same training and extrapolation times of day, therefore controlling for differences in Sun-satellite geometry. The 48-h MAEs are smaller than expected 86.4% of the time, indicating a positive bias for the regression at 48 h. 83.4% of 48-h MAEs are within 25% of the predicted value, however, which is higher than for any other extrapolation length considered except 1 and 24 h (Table 2 and Fig. 7).

The variability of extrapolation MAE by time of day and time of year (Figs. 4 and 5) is partly explained by corresponding variations in the 0-h training MAE. The top and bottom panels of Fig. 8 are similar to Figs. 4 and 5, respectively, but show the deviation from the MAE predicted by Eq. (7). Figure 8 shows some seasonal dependency, with mean MAE deviations being more negative in the summer and early autumn (minimum of −1.0% reflectance for channel 3 in September), and more positive in winter and spring (maximum of 0.6% reflectance for channel 1 in February). The seasonal amplitudes of MAE deviation have a maximum of 1.5% reflectance [channel 3, Fig. 8(a)]. This amplitude is 64.3% smaller than the seasonal amplitude of MAE itself, which reaches 4.2% reflectance (channel 1, Fig. 4). Similarly, differences in MAE deviation by channel and time of day are smaller than the differences in MAE itself. In Fig. 5, median MAEs vary from 1.1% to 13.2% reflectance (channel 4 at 1500 UTC and channel 3 at 2100 UTC, respectively), whereas the median MAE deviation varies from −1.4% to −0.1% reflectance [channel 3, 2100 UTC and channel 4, 1500 UTC, respectively, Fig. 8(b)]. This range in median MAE deviation is only 10.7% of the range in actual median MAE.

Fig. 8

(a) Mean monthly deviation from the expected 24-h MAE given the 0-h MAE of the training image for 1800 to 1800 UTC extrapolations. (b) Deviation from the expected 24-h MAE given the 0-h MAE of the training image for different channels and times of the day. Red lines indicate the median, blue boxes indicate the 25th and 75th percentiles, and the dashed lines extend to the minimum and maximum values.


Although the ability to predict MAE does not help correct pixel-level errors, it can provide guidance to users on the reliability of SW extrapolations. Forecasters and automated algorithms can give less weight to the extrapolations when expected MAE is high, perhaps relying more heavily on other sources of weather information (e.g., ground-based radar). Equation (7) [or Eq. (8)] can also be used to optimize the choice of training data by weighing the age of the data against the 0-h MAE (see Sec. 6.3 for further discussion).

5. Case Studies

Three case studies are presented in this section to provide examples of extrapolated imagery and illustrate some of the strengths and limitations of the extrapolation method that are not obvious from the quantitative error analysis. The first two cases show 24-h extrapolations valid at 1800 UTC over the CONUS alongside observed imagery from the same extrapolation time for comparison. There is a summer and a winter case which have relatively low and high errors, respectively. The third case uses 1-min ABI mesoscale scans of 2019 Hurricane Barry to provide an animation of extrapolated imagery during night. The implications of using mesoscale instead of CONUS sector data are discussed in Sec. 6.5. For all cases, reflectance for channels 4, 5, and 6 (i.e., the cirrus, snow/ice, and particle size bands, respectively) are multiplied by 1.5 to enhance contrast while keeping the color scale constant for all SW channels. Cloud-free areas are removed and appear black in both extrapolated and observed imagery, but some surface features are still visible below optically thin clouds.

5.1. Summer: August 16, 2018

Figure 9 shows the observed (a) and 24-h extrapolated (b) imagery for each SW channel over the CONUS valid at 1800 UTC on August 16, 2018. The MAEs for each channel are noted on the left-hand side for reference. Channel 4 has the smallest MAE (0.85% reflectance), whereas channel 3 has the largest (11.30% reflectance). One area of notable differences in channel 3 is off the coast of Maryland where the extrapolated imagery is brighter than observed (Fig. 9, red markers). The brighter extrapolated imagery around the red marker is due to differences in surface reflectance between ocean and land. Observations in this area show a thin cloud layer throughout with pockets of thicker, brighter clouds. The training pixels used to extrapolate the imagery in this area also have thin clouds but are mostly over land (not shown). Because the clouds are optically thin, the land surface was visible through the clouds in the training imagery. That land surface is brighter than the ocean and causes the extrapolated imagery to be brighter than observed. The impact of the land/ocean differences in this area is most pronounced in channel 3 because this channel is most sensitive to land and vegetation type,19 but the impact can be seen in other channels as well, especially channel 5. Attempts to improve the extrapolations by incorporating surface reflectance information were unsuccessful (not shown). The issue of surface reflectance beneath optically thin clouds and related future work considerations are discussed further in Sec. 6.4.

Fig. 9

(a) Observed and (b) 24-h extrapolation of SW GOES-16 ABI channels at 1800 UTC August 16, 2018. Areas of clear-sky have been blacked out and the red marker indicates the general area discussed in the text.


The extrapolation was more successful in the convection found in Iowa and southern Minnesota. Figure 10 shows the results zoomed in on this region for a subset of the ABI channels. The observations and extrapolations are still distinguishable from one another, but the shape and texture of the clouds are similar.

Fig. 10

As in Fig. 9 but zoomed in on convection in the Minnesota/Iowa area for select ABI channels.


5.2. Winter: February 27, 2019

Compared to the 16 August example, performance of the algorithm is notably worse for the 24-h extrapolations valid at 1800 UTC on February 27, 2019—both in terms of MAE and in qualitative comparisons of the imagery (Fig. 11). The impacts of surface reflectance are even more pervasive in this example. Snow-covered land and ice-free rivers beneath thin clouds are visible in the observed channels 1 through 3 over Iowa and Eastern Nebraska (Fig. 12 and red markers in Fig. 11), but the extrapolated imagery is too dark and does not show surface features. This is because many of the training pixels that went into creating the extrapolated imagery here were from thin clouds over snow-free land or ocean (not shown). The same issue is responsible for the darker-than-observed reflectance in the northeast portion of Fig. 3 in Sec. 3.1.

Fig. 11

(a) Observed and (b) 24-h extrapolation of SW GOES-16 ABI channels at 1800 UTC on February 27, 2019. Areas of clear-sky appear black. Red and green markers indicate two areas discussed in the text.


Fig. 12

As in Fig. 11 but zoomed in on the cloud system in the Missouri/Kansas/Iowa area for select ABI channels.


There are also large errors in the cloud system from Northern Illinois and Indiana down to Texas, especially in channel 5 [i.e., the snow/ice band (Fig. 12) and green markers in Fig. 11]. These errors are due to the lower liquid cloud showing through the thin upper cirrus cloud and causing errors in a similar manner to surface reflectance differences. Animations of the observed channel 5 make the separation between these cloud layers apparent because they move in different directions and at different speeds (not shown). Animations also suggest this upper cloud layer covers most of the area but is very thin in some areas, so the brighter, lower cloud below is still visible in the observations. In the extrapolated imagery, the cirrus clouds obscure more of the lower cloud because many of the training pixels had similarly thin cirrus clouds, but no lower cloud beneath (not shown).

5.3. Hurricane Barry: July 13–14, 2019

Since the intended application of this work is to provide SW extrapolations through nighttime, this example shows actual nighttime extrapolations even though only qualitative comparisons to VIIRS DNB observations are possible. Hurricane Barry from July 2019 was chosen for this case study because it also highlights a potential application of the extrapolated imagery: detecting the center of low-level circulation in hurricanes and tropical storms at night. High cirrus clouds often obscure lower clouds in LW imagery but are more transparent in the SW, so low-level circulation is best inferred from SW imagery.10 Since only LW is available at night, scientists monitoring the storms often experience “sunrise surprise” when the first morning SW imagery becomes available and the low-level circulation becomes more apparent.9

Although the previous case studies have highlighted some limitations associated with thin cirrus clouds, nighttime extrapolations may still provide value to forecasters in the absence of SW observations. Video 1 shows an animation of the observed channel 13 brightness temperature (a) and channel 2 reflectance (b), along with the extrapolated reflectance for channels 2 (c), 4 (d), 5 (e), and 6 (f) from 2151 UTC on July 13, 2019 to 1400 UTC on July 14, 2019. Figure 13 also shows imagery from 2151 UTC, the first frame of Video 1. The extrapolated imagery was trained using observations from 2150 UTC on 13 July. This training time was chosen because channel 2 (i.e., the red visible band) has the lowest 0-h MAE of all images after 1800 UTC on 13 July.

Fig. 13

Mesoscale GOES-16 imagery of Hurricane Barry at 2216 UTC: (a) observed channel 13 brightness temperature; (b) observed channel 2 reflectance; (c) extrapolated channel 2 reflectance; (d) extrapolated channel 4 reflectance; (e) extrapolated channel 5 reflectance; and (f) extrapolated channel 6 reflectance. Extrapolations were trained at 2215 UTC. Reflectance for channels 4, 5, and 6 is multiplied by 1.5 to increase contrast while maintaining the same color scale for all SW images. This is the first frame in Video 1 (Video 1, MOV, 36 MB [URL: https://doi.org/10.1117/1.JRS.15.038501.1]).


The extrapolated ABI channel 2 reflectance at 0741 UTC is also shown in Figure 14(b) beside VIIRS DNB imagery from approximately the same time (a). The brightest areas in the VIIRS image are city lights, which are visible even beneath clouds in many areas of Texas, Louisiana, and Mississippi. The bright white stripe in the convective cell south of the Texas-Louisiana border is an artifact in the VIIRS imagery. Comparing the extrapolated ABI and observed VIIRS imagery reveals very similar structure in the main convective cells along the coast and in the Gulf of Mexico. The dark areas along the Louisiana and eastern Texas coastline in the extrapolated ABI imagery are not similarly dark in the VIIRS image, but do appear to be associated with a layer of cirrus, as seen by the wispy texture in the VIIRS image. The VIIRS imagery also shows more clouds in northeast Texas and northwest Louisiana, which were missed by the level 2 ABI clear sky mask, resulting in no extrapolated imagery in the area.

Fig. 14

(a) The VIIRS DNB (0.705  μm) radiance at 0740 UTC on July 14, 2019 and (b) the extrapolated ABI channel 2 (0.64  μm) reflectance at 0741 UTC on July 14, 2019.


When animated in Video 1, the clouds in the northwest portion of the extrapolated ABI imagery also jump around due to the degraded nighttime quality of the clear sky mask. The discontinuity in the video at approximately 1100 UTC on 14 July is due to the mesoscale sector shifting northeastward. Despite these artifacts, the low-level circulation of clouds is more apparent in the extrapolated SW animations than the observed channel 13 brightness temperature. After sunset, the low-level circulation is most apparent in extrapolated channels 2 and 5 in eastern and coastal Texas. The low-level clouds in this area appear as bright clouds moving south/southeastward beneath darker upper-level clouds moving northeastward. The ability to visualize this vertical wind shear more easily than with LW imagery alone may aid in hurricane monitoring and forecasting.

6. Discussion

6.1. Relative Importance of LW Inputs

The relative importance of different LW inputs to the extrapolation algorithm is generally consistent with the established remote sensing applications of ABI channels. All LW water vapor channels (8, 9, and 10) are omitted from the algorithm because they hinder the extrapolations. These channels primarily contain information about the moisture content of the atmosphere above cloud top and are thus used for tracking water vapor and the jet stream.19,2430 The only SW channel with similar properties is channel 4 and water vapor channels do lower RMSE slightly for this channel in particular (not shown). Also omitted from the algorithm, channel 12 is sensitive to ozone absorption and used to discern upper troposphere dynamics as well as total column ozone.19 Since there is no similarly ozone-sensitive SW channel, it is unsurprising that this channel hindered the extrapolations.

Channels 11 and 16 are the most important for the algorithm in terms of their impact on overall RMSE: removal of channel 11 (16) increased RMSE from 11.4% to 11.9% (12.2%) reflectance. The importance of channel 11 likely lies in its ability to discern cloud top phase, in which channels 5 and 6 are especially sensitive.19,21 The importance of channel 16 is consistent with its importance to derived cloud products such as cloud top height due to its ability to discern upper-troposphere features.19,3235 Individually, channels 13, 14, and 15 are each less important than channels 11 and 16 likely because these three channels are all very similar, just with varying sensitivities to water vapor.19,31 Removal of any one of these channels has only a slight impact on performance, since much of the same information is contained in the remaining two channels.

6.2. Complications from Sun-Satellite Geometry

The sensitivity of the algorithm to the time of day, as seen in Figs. 5 and 7, is likely the result of differences in Sun-satellite geometry. Because LW data are not impacted by the position of the Sun, extrapolations are dependent on the Sun’s position at the training time. Comparing extrapolations to observations at times of day other than the training time therefore introduces an additional source of discrepancy associated with the illumination conditions in the observations. This is a limitation of the evaluation method rather than an error in the extrapolations. The extrapolations are not meant to extrapolate the effects of Sun position since that would defeat the purpose of producing nighttime extrapolations. We minimize the impact of changes in Sun position by doing most analysis with 24-h extrapolations. Unfortunately, we cannot change the length of the extrapolation while controlling for Sun-satellite geometry except by using extrapolation lengths that are multiples of 24 h.

Differences in Sun-satellite geometry between training and extrapolation times are likely responsible for the higher than predicted MAEs for extrapolation lengths other than 24 and 48 h. When the training and extrapolation times of day are not the same, 71.1% to 100.0% of extrapolated MAEs are larger than predicted by Eq. (7) (Table 2). Since Eq. (7) does not account for changes in Sun position, the higher than predicted MAEs are actually expected for these extrapolation lengths.

Sun position also explains the high 0-h MAEs of 2100 UTC observations and the large deviations from predicted MAE when the extrapolation time is 2100 UTC but the training time is not. At 2100 UTC, the Sun is over the Pacific Ocean, resulting in a large RAA (>60  deg) between the Sun and satellite (GOES-16 is located at 75.2°W over the East Coast of the United States). Larger RAA results in more shadows which are difficult to extrapolate. When 2100 UTC is the training time, the errors associated with the large RAA are incorporated into the 0-h MAEs, but when 2100 UTC is only the extrapolation time, the large RAA only impacts the verification imagery. This is likely why extrapolation times of 2100 UTC have larger deviations from the MAE regression than other extrapolations of the same length [e.g., 3-h MAEs for 1500 to 1800 UTC fall within 25% of predicted 41.5% of the time, but for 1800 to 2100 UTC extrapolations this value is only 18.9% (Table 2)].

6.3. Choosing Training Data

Given the complexity of Sun-satellite geometry, we believe the regressions developed using 24-h MAEs [Eqs. (7) and (8)] are the best predictors of error. Compared to 48-h extrapolations, which also control for Sun-satellite geometry, the regression has a positive bias (only 13.6% of points are above the regression line). However, 48 h is a much longer extrapolation time than would be used in practice, so it was inappropriate to include these points in the regression.

We recommend using Eq. (7) [or Eq. (8)] to optimize the choice of training time by weighing older observations with smaller 0-h MAEs versus more recent observations with higher 0-h MAEs. Using this decision-making and the CONUS scans used herein, observations from 1500, 1700, 1800, and 2100 UTC would be used 85.7%, 1.7%, 5.1%, and 7.5% of the time, respectively, for extrapolation times of 0600 UTC the following day (about midnight over the CONUS). For extrapolation times of 1500 UTC, the following day (about dawn over the West Coast in winter), 1500, 1700, 1800, and 2100 UTC training times would be used 1.9%, 16.7%, 69.6%, and 11.8% of the time, respectively. These statistics consider each channel individually, since 0-h MAE is specific to each channel. For 0600 UTC extrapolations, each channel would have used the same training time 68.5% of the time, but for 1500 UTC extrapolations, this value is only 34.1%.
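As an illustration of this decision rule, the sketch below selects, for one channel, the candidate training scene that minimizes the MAE predicted by Eq. (7); the candidate tuple structure is hypothetical.

```python
def choose_training_time(candidates):
    # candidates: iterable of (training_time, length_h, mae_0h) tuples, where
    # length_h is the extrapolation length L if that scene were used.
    return min(candidates, key=lambda c: c[2] * (1.0 + 0.0119 * c[1]))

# Example: an older scene with a low 0-h MAE can beat a fresher, noisier one:
# 3.0 * (1 + 0.0119 * 15) = 3.54 < 4.5 * (1 + 0.0119 * 9) = 4.98.
best = choose_training_time([("1500 UTC", 15, 3.0), ("2100 UTC", 9, 4.5)])
```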

Users will have to weigh the cost/benefit of using different training times for different channels and changing the training time throughout the night. It may be desirable to choose a constant training time of day so as to avoid computing 0-h MAE and predicting MAE for each time and channel throughout the day. Under these circumstances, we would recommend choosing 1800 UTC since it is about solar noon and the most popular training time for extrapolations to 1500 UTC. Note that these times apply to GOES-16 specifically—for GOES-17 (137.2°W) and Himawari-8 (140.7°E), the times should be shifted by 4 and 12 h, respectively. For example, if 1800 UTC is used for GOES-16, then 2200 UTC should be used for GOES-17 and 0600 UTC for Himawari-8 to preserve similar Sun-satellite RAAs.

Times of day when only part of the CONUS is illuminated present unique circumstances and there are several options for extrapolating the night/terminator portion of the domain. The night/terminator portion could be extrapolated using the daytime portion of the domain, or using observations from a previous time when the whole domain was illuminated. In the case of morning/dawn imagery, either of these approaches seem appropriate since 1500 UTC (morning over the CONUS) scans have relatively low 0-h MAEs. With evening/dusk imagery, we strongly caution against using the daytime portion to extrapolate the night/terminator portions since 2100 UTC observations have relatively high 0-h MAEs.

For research and analysis of past weather events, errors may be reduced by interpolating SW observations through the night using observations from both the previous and subsequent days.

6.4. Optically Thin Clouds

Case studies suggest optically thin clouds are a major source of error because such clouds are more opaque in the LW than SW.10 In these cases, the SW observations are impacted by the lower cloud or surface which is visible through the higher cloud. The LW brightness temperatures are less impacted by the underlying conditions, so those conditions are not well accounted for in the extrapolation algorithm.

Many attempts to reduce errors associated with surface reflectance beneath optically thin clouds were made and all were unsuccessful. Surface reflectance information from numerical weather prediction (NWP) was considered as an additional term in the algorithm’s cost function, but resulted in higher errors for optically thick clouds because the surface reflectance is irrelevant information in such cases. We considered using the level 2 GOES-16 cloud optical depth product to isolate areas of thin clouds where the surface was visible, but the product has decreased accuracy at night and over snow,5 precisely when and where accuracy is most needed to enhance SW extrapolations. As advancements are made in optical depth retrievals over snow and ice at night, special treatment of thin clouds in the extrapolations should be reconsidered.

Future work may also consider using convolutional neural networks (CNNs) to remove training pixels where the surface is visible beneath optically thin clouds. CNNs are a popular area of research in image processing because they can identify objects (e.g., clouds and land surface features) using texture and spatial context from the surrounding pixels. Studies have demonstrated the potential for CNNs to enhance cloud detection using high-resolution (<100  m) LEO satellites such as Landsat-8.4044 One study had overall cloud detection accuracy of 97.05% from Landsat-8 images and provided examples of successful identification of clouds over snow and ice.42 Some CNNs also distinguish between optically thin and thick clouds,40,41 which would allow for special treatment of areas where the surface is visible.

Although many studies have demonstrated the utility of CNNs on high-resolution (<100  m) imagery, the technique would need to be adjusted for the scale of GOES-16 imagery. All of the CNN studies previously mentioned used imagery from LEO satellites where each pixel is about 10 to 100 m across, but GOES-16 pixels are at least 0.5 km and LW pixels are at least 2 km across. Additional work would be needed to adapt the CNNs to the coarser resolution imagery since the patterns present at 10- to 100-m scales likely differ from those present at 0.5- to 2-km scales.

6.5. Domain Size Considerations

Low-level clouds and cloud texture may also benefit from domains smaller than the CONUS, such as in the Hurricane Barry example that used a mesoscale domain (Sec. 5.3). Adding a physical distance term to the cost function [Eq. (5)] did not improve RMSE in 24-h CONUS extrapolations, possibly because clouds move and evolve too much over 24 h. Shorter extrapolations may benefit from such a term, however. For longer extrapolations, an object-based domain may be appropriate, where extrapolation pixels in a mesoscale feature are extrapolated using training pixels from that same feature at the training time. This is essentially what was done in Sec. 5.3 for Hurricane Barry using mesoscale sector data.

Mesoscale object-based extrapolations could benefit thin clouds by reducing the probability that the surface or low-level cloud conditions differ between the training and extrapolation pixels. They would also likely improve extrapolated cloud texture and shadowing by implicitly grouping pixels with similar Sun-satellite geometry. The channel 13 gradient terms extrapolate such texture and shadows by inferring where shadows fall based on cloud-top height differences, but the locations of shadows depend on the position of the Sun and satellite relative to the clouds. For example, shadows in the morning appear to the west of tall clouds, but in the evening they appear to the east. The assumption underlying the inclusion of the channel 13 gradient terms is that shadows are cast in roughly the same direction across the domain.

The errors associated with this assumption of uniform shadowing scale with the size of the domain: the larger the domain, the more the orientation of shadows with respect to tall clouds varies. In the extreme case of full-disk sectors, which cover the entire hemisphere viewed by GOES-16, the channel 13 gradient terms are unlikely to benefit cloud texture or shadow extrapolation because the Sun-satellite viewing geometry varies so widely across the domain. In smaller mesoscale domains, the variability in Sun position is minimal, so all shadows are cast in roughly the same direction, making them easier to extrapolate using the channel 13 gradients.
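The gradient terms themselves are straightforward to compute; the sketch below is a minimal finite-difference illustration, with the sign convention and grid spacing as assumptions rather than the paper's exact formulation.

```python
import numpy as np

def channel13_gradients(bt_ch13, spacing_km=2.0):
    """East-west and north-south gradients of channel 13 brightness
    temperature on a regular grid. Strong gradients mark cloud-top
    height contrasts where shadows are cast; the 2-km spacing matches
    the nominal LW pixel size, but the exact formulation used in the
    cost function is not reproduced here."""
    dbt_dy, dbt_dx = np.gradient(bt_ch13, spacing_km)  # K/km along y, x
    return dbt_dx, dbt_dy

# Example: a single warm (low) stripe in a cold (high) cloud deck.
bt = np.full((8, 8), 220.0)
bt[:, 4] = 260.0
gx, gy = channel13_gradients(bt)  # gx highlights the cloud-edge columns
```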

6.6.

Additional Applications

The algorithm used herein to extrapolate SW GOES-16 ABI channels through the night could be applied to similar geostationary satellite imagers, such as the ABI onboard GOES-17 and the Advanced Himawari Imager on Himawari-8, as well as to derived (level 2) products from any of these imagers. With modification, the method may also be able to fill some of the outages on GOES-17 due to the loop heat pipe (LHP) issue. It may also be possible to extrapolate the reflected portion of ABI channel 7 through the night.

6.6.1.

Derived products

Many derived GOES-16 ABI level 2 products degrade at night due to the lack of SW imagery.46 These products could be derived from extrapolated nighttime reflectance, but extrapolating the daytime derived products themselves may be more accurate because errors in individually extrapolated channels could compound during the product’s derivation. Recent efforts have applied approaches similar to those described herein to extrapolate daytime satellite analyses of aircraft icing conditions and cloud optical thickness into nighttime;16,45 manuscripts describing these applications are in preparation. For more complex algorithms or data fusion approaches that merge satellite data with other sources (e.g., radar or NWP models), we recommend applying the extrapolation algorithm as late as possible in the processing chain but before the satellite data are merged with the other sources. Work is underway to quantify the value of extrapolating daytime products into nighttime relative to operational nighttime products.

6.6.2.

Applications to GOES-16 channel 7

The extrapolation algorithm could be adapted for ABI channel 7 (i.e., the SW window band), which includes both emitted terrestrial and reflected/scattered solar radiation. Channel 7 is useful for detecting fog and low stratus clouds, but interpreting the data is complicated because separating the terrestrial and solar components is not trivial.19,22,23 At night, channel 7 contains only terrestrial emission, so that emission could be extrapolated from nighttime into daytime using the algorithm presented herein. The daytime solar component could then be computed by subtracting the extrapolated emitted radiance from the observed daytime radiance, and that reflected component could in turn be extrapolated through the following night. This “leap-frog” approach, sketched below, could provide estimates of both the solar and terrestrial components of channel 7 at all times of day.
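The following schematic makes the cycle explicit; the `extrapolate` callable is a hypothetical stand-in for the LW-based extrapolation algorithm and is not defined here.

```python
# Schematic of the "leap-frog" separation of ABI channel 7 radiance.
# `extrapolate` is a hypothetical stand-in for the LW-based extrapolation
# algorithm described in the paper; radiances are 2D arrays in consistent
# units. This is a sketch of the idea, not an implementation of it.

def leapfrog_channel7(ch7_night, ch7_day_obs, lw_night, lw_day, extrapolate):
    # 1. At night, channel 7 contains only terrestrial emission;
    #    extrapolate that emission forward into the daytime hours.
    emitted_day = extrapolate(source=ch7_night, train_lw=lw_night,
                              target_lw=lw_day)
    # 2. The daytime solar component is the observed daytime radiance
    #    minus the extrapolated emitted component.
    solar_day = ch7_day_obs - emitted_day
    # 3. Extrapolate the solar component through the following night,
    #    and repeat the cycle each day.
    solar_next_night = extrapolate(source=solar_day, train_lw=lw_day,
                                   target_lw=lw_night)
    return emitted_day, solar_day, solar_next_night
```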

6.6.3.

GOES-17 Loop Heat Pipe Issues

The GOES-17 satellite carries the same ABI as GOES-16, but an issue with the instrument’s LHP causes it to overheat at night around the vernal and autumnal equinoxes.46 Mitigation efforts adjusting the instrument operations have reduced the impact so that GOES-17 still delivers 98% of its intended data, but most LW channels still lose some data at night around the equinoxes.46 This level of mitigation does not satisfy all GOES-17 data users, particularly those who rely on derived products. For example, the Clouds and the Earth’s Radiant Energy System (CERES) project relies on GOES-17 data to produce climate data records of cloud properties and radiative fluxes, and without further mitigation the LHP problem would introduce a discontinuity into the CERES climate data record. Extrapolation algorithms similar to those presented herein have therefore been developed to replace data lost or corrupted by the LHP problem with extrapolations of GOES-17 radiances from unaffected hours.16,17,47 This strategy works well for CERES and has been fully implemented in CERES operations to maintain the accuracy and continuity of its data products.18

Despite these successes, limitations remain in the most extreme cases, when the peak impact of the LHP problem leaves only channel 13 unaffected; a single channel may not be sufficient to extrapolate all of the others with the accuracy some applications require. In addition, ABI channels 8 to 10 (i.e., the water vapor channels), which are important for deriving winds and monitoring atmospheric moisture, are proving more difficult to extrapolate given their unique characteristics and their low correlation with LW window radiances under some conditions.19,24–30 Although the extrapolation approach is unlikely to completely mitigate the GOES-17 LHP problem, the work conducted to date demonstrates that valuable recovery and mitigation can be achieved. Additional research underway will provide better statistics on the quality and limitations of this application.

7.

Conclusions

We present a method to extrapolate SW GOES-16 ABI data through the night by extracting SW data from the previous day where the LW observations are most similar to those of each extrapolation pixel. The extrapolation algorithm can also be applied during the day so that the accuracy of the extrapolations can be assessed against SW observations. Comparisons of daytime extrapolations and observations show the following:

  • The 0-h MAE of the training observations can be used to predict the MAE of extrapolations.

  • 0-h MAEs are often smaller earlier in the day, so training the algorithm with evening observations is not recommended.

  • The extrapolation method struggles with thin clouds because they are more opaque to LW radiation than SW radiation. As a result, lower clouds and surface features can be visible beneath thin clouds in SW observations but are difficult to extrapolate using only LW observations.

Considerations for future work to improve the algorithm include the following:

  • Leveraging CNN-based cloud classification algorithms to identify thin clouds where surface or lower-level clouds are visible so these areas can be given special consideration.

  • Object-based extrapolations, where extrapolation pixels are matched to training pixels from the same mesoscale cloud feature at the training time.

Despite current limitations with thin clouds, the ability to extrapolate SW data through the night could aid weather product development by reducing or eliminating the need for terminator- and night-specific algorithms. Human forecasters and scientists may also find it useful to visualize SW imagery at night. Finally, extrapolation methods such as the one described herein have the potential to improve data fusion, the consistency of derived products, and the empirical use of ABI channel 7 at all times of day, as well as to mitigate the impacts of outages in GOES-17 ABI LW channels due to the LHP issue.

Acknowledgments

The authors would like to thank Katja Friedrich and Joshua Lave for helpful feedback which improved this manuscript, and Kurt Hansen for suggesting Hurricane Barry as a case study. We also would like to thank an anonymous reviewer for providing constructive comments and suggestions, particularly regarding the DNB comparison. This material is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation, under Cooperative Agreement No. 1852977. A portion of this research is in response to requirements and funding by the Federal Aviation Administration (FAA). The views expressed are those of the authors and do not necessarily represent the official policy or position of the FAA.

References

1. 

B. C. Bernstein et al., “Current icing potential: algorithm description and comparison with aircraft observations,” J. Appl. Meteor., 44 (7), 969 –986 (2005). https://doi.org/10.1175/JAM2246.1 Google Scholar

2. 

M. J. Pavolonis, J. Sieglaff and J. Cintineo, “Spectrally enhanced cloud objects—a generalized framework for automated detection of volcanic ash and dust clouds using passive satellite measurements: 1. Multispectral analysis,” J. Geophys. Res. Atmos., 120 (15), 7813 –7841 (2015). https://doi.org/10.1002/2014JD022968 JGRDE3 0148-0227 Google Scholar

3. 

J. A. Haggerty et al., “Development of a method to detect high ice water content environments using machine learning,” J. Atmos. Oceanic Technol., 37 (4), 641 –663 (2020). https://doi.org/10.1175/JTECH-D-19-0179.1 JAOTES 0739-0572 Google Scholar

4. 

A. Heidinger and W. C. Straka, “ABI cloud mask algorithm theoretical basis document,” Version 3 (2012). Google Scholar

5. 

A. Walther, W. Straka and A. K. Heidinger, “ABI algorithm theoretical basis document for daytime cloud optical and microphysical properties (DCOMP),” Version 3 (2013). Google Scholar

6. 

P. Minnis and P. W. Heck, “GOES-R advanced baseline imager (ABI) algorithm theoretical basis document for nighttime cloud optical depth, cloud particle size, cloud ice water path, and cloud liquid water path,” Version 3 (2012). Google Scholar

7. 

T. J. Schmit et al., “Introducing the next-generation advanced baseline imager on GOES-R,” Bull. Am. Meteorol. Soc., 86 (8), 1079 –1096 (2005). https://doi.org/10.1175/BAMS-86-8-1079 BAMIAT 0003-0007 Google Scholar

8. 

K. Bessho et al., “An introduction to Himawari-8/9—Japan’s new-generation geostationary meteorological satellites,” J. Meteorolog. Soc. Jpn., 94 (2), 151 –183 (2016). https://doi.org/10.2151/jmsj.2016-009 Google Scholar

9. 

C. W. Landsea and J. L. Franklin, “Atlantic hurricane database uncertainty and presentation of a new database format,” Mon. Weather Rev., 141 (10), 3576 –3592 (2013). https://doi.org/10.1175/MWR-D-12-00254.1 MWREAB 0027-0644 Google Scholar

10. 

S. D. Miller et al., “The dark side of Hurricane Matthew: unique perspectives from the VIIRS Day/Night Band,” Bull. Am. Meteorol. Soc., 99 (12), 2561–2574 (2018). https://doi.org/10.1175/BAMS-D-17-0097.1 BAMIAT 0003-0007 Google Scholar

11. 

NOAA NESDIS, “Geostationary Extended Observations (GeoXO),” https://www.nesdis.noaa.gov/GeoXO Google Scholar

12. 

J. H. Cross et al., “Statistical estimation of a 13.3  μm visible infrared imaging radiometer suite channel using multisensor data fusion,” J. Appl. Remote Sens., 7 (1), 073473 (2013). https://doi.org/10.1117/1.JRS.7.073473 Google Scholar

13. 

E. Weisz, B. A. Baum and W. P. Menzel, “Fusion of satellite-based imager and sounder data to construct supplementary high spatial resolution narrowband IR radiances,” J. Appl. Remote Sens., 11 (3), 036022 (2017). https://doi.org/10.1117/1.JRS.11.036022 Google Scholar

14. 

E. Weisz and W. P. Menzel, “Imager and sounder data fusion to generate sounder retrieval products at an improved spatial and temporal resolution,” J. Appl. Remote Sens., 13 (3), 1 (2019). https://doi.org/10.1117/1.JRS.13.034506 Google Scholar

15. 

W. L. Smith et al., “Improved severe weather forecasts using LEO and GEO satellite soundings,” J. Atmos. Oceanic Technol., 37 1203 –1218 (2020). https://doi.org/10.1175/JTECH-D-19-0158.1 JAOTES 0739-0572 Google Scholar

16. 

W. L. Smith, Jr., et al., “CERES cloud working group report,” (2019). https://ceres.larc.nasa.gov/documents/STM/2019-10/3-TUE_1045am_Smith.pdf Google Scholar

17. 

W. L. Smith, Jr., et al., “CERES cloud working group report,” (2020). https://ceres.larc.nasa.gov/documents/STM/2020-04/5_Clouds.CERES.STM.04.20.pdf Google Scholar

19. 

T. J. Schmit et al., “Applications of the 16 spectral bands on the advanced baseline imager (ABI),” J. Operational Meteor., 6 (4), 33 –46 (2018). https://doi.org/10.15191/nwajom.2018.0604 Google Scholar

20. 

L. Y. She et al., “Joint retrieval of aerosol optical depth and surface reflectance over land using geostationary satellite data,” IEEE Trans. Geosci. Remote Sens., 57 (3), 1489 –1501 (2019). https://doi.org/10.1109/TGRS.2018.2867000 IGRSD2 0196-2892 Google Scholar

21. 

C. B. Elsenheimer and C. M. Gravelle, “Introducing lightning threat messaging using GOES-16 day cloud phase distinction RGB composite,” Weather Forecasting, 34 (5), 1587 –1600 (2019). https://doi.org/10.1175/WAF-D-19-0049.1 Google Scholar

22. 

T. F. Lee, F. J. Turk and K. Richardson, “Stratus and fog products using GOES-8-9 3.9-μm data,” Weather Forecasting, 12 (3), 664–677 (1997). https://doi.org/10.1175/1520-0434(1997)012<0664:SAFPUG>2.0.CO;2 Google Scholar

23. 

M. Amani et al., “Automatic nighttime sea fog detection using GOES-16 imagery,” Atmos. Res., 238 104712 (2020). https://doi.org/10.1016/j.atmosres.2019.104712 ATREEW 0169-8095 Google Scholar

24. 

J.-R. Lee et al., “ABI water vapor radiance assimilation in a regional NWP model by accounting for the surface impact,” Earth Space Sci., 6 (9), 1652 –1666 (2019). https://doi.org/10.1029/2019EA000711 Google Scholar

25. 

J. Daniels et al., “GOES-R advanced baseline imager (ABI) algorithm theoretical basis document for derived motion winds,” (2012) https://www.star.nesdis.noaa.gov/goesr/docs/ATBD/DMW.pdf Google Scholar

26. 

T. J. Schmit et al., “Legacy atmospheric profiles and derived products from GOES-16: validation and applications,” Earth Space Sci., 6 1730 –1748 (2019). https://doi.org/10.1029/2019EA000729 Google Scholar

27. 

J. A. Otkin, “Assimilation of water vapor sensitive infrared brightness temperature observations during a high impact weather event,” J. Geophys. Res., 117 D19203 (2012). https://doi.org/10.1029/2012JD017568 JGREA2 0148-0227 Google Scholar

28. 

Y. Zhang, D. J. Stensrud and E. E. Clothiaux, “Benefits of the advanced baseline imager (ABI) for ensemble-based analysis and prediction of severe thunderstorms,” Mon. Weather Rev., 149 313–332 (2021). https://doi.org/10.1175/MWR-D-20-0254.1 MWREAB 0027-0644 Google Scholar

29. 

L. Grasso et al., “Application of the GOES-16 advanced baseline imager: morphology of a preconvective environment on 17 April 2019,” Electron. J. Severe Storms Meteorol., 15 (2), 1 –24 (2020). Google Scholar

30. 

C. M. Gitro et al., “A demonstration of modern geostationary and polar-orbiting products for the identification and tracking of elevated mixed layers,” J. Operational Meteor., 7 (13), 180 –192 (2019). https://doi.org/10.15191/nwajom.2019.0713 Google Scholar

31. 

D. Lindsey et al., “10.35 μm: atmospheric window on the GOES-R advanced baseline imager with less moisture attenuation,” J. Appl. Remote Sens., 6 (1), 063598 (2012). https://doi.org/10.1117/1.JRS.6.063598 Google Scholar

32. 

A. J. Schreiner et al., “A comparison of ground and satellite observations of cloud cover,” Bull. Am. Meteorol. Soc., 74 (10), 1851–1861 (1993). https://doi.org/10.1175/1520-0477(1993)074<1851:ACOGAS>2.0.CO;2 BAMIAT 0003-0007 Google Scholar

33. 

D. P. Wylie and W. P. Menzel, “Eight years of high cloud statistics using HIRS,” J. Clim., 12 (1), 170 –184 (1999). https://doi.org/10.1175/1520-0442-12.1.170 JLCLEL 0894-8755 Google Scholar

34. 

A. K. Heidinger et al., “Using CALIPSO to explore the sensitivity to cirrus height in the infrared observations from NPOESS/VIIRS and GOES-R/ABI,” J. Geophys. Res., 115 (D4), 1 –13 (2010). https://doi.org/10.1029/2009JD012152 JGREA2 0148-0227 Google Scholar

35. 

H. Iwabuchi et al., “Cloud property retrieval from multiband infrared measurements by himawari-8,” J. Meteorolog. Soc. Jpn., 96B 27 –42 (2018). https://doi.org/10.2151/jmsj.2018-001 Google Scholar

36. 

M. O. Román et al., “NASA’s Black Marble nighttime lights product suite,” Remote Sens. Environ., 210 113 –143 (2018). https://doi.org/10.1016/j.rse.2018.03.017 Google Scholar

37. 

J. H. Friedman, J. L. Bentley and R. A. Finkel, “An algorithm for finding best matches in logarithmic expected time,” ACM Trans. Math. Softw., 3 (3), 209 –226 (1977). https://doi.org/10.1145/355744.355745 ACMSCU 0098-3500 Google Scholar

38. 

C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J., 27 (3), 379 –423 (1948). https://doi.org/10.1002/j.1538-7305.1948.tb01338.x BSTJAN 0005-8580 Google Scholar

39. 

S. F. Gull and J. Skilling, “The entropy of an image,” in Maximum-Entropy and Bayesian Methods in Inverse Problems, Fundamental Theories of Physics, 287–302, Springer, Dordrecht (1985). Google Scholar

40. 

Z. Shao et al., “Cloud detection in remote sensing images based on multiscale features-convolutional neural network,” IEEE Trans. Geosci. Remote Sens., 57 (6), 4062 –4076 (2019). https://doi.org/10.1109/TGRS.2018.2889677 IGRSD2 0196-2892 Google Scholar

41. 

Y. Chen et al., “Multilevel cloud detection for high-resolution remote sensing imagery using multiple convolutional neural networks,” Int. J. Geo-Inf., 7 (5), 181 (2018). https://doi.org/10.3390/ijgi7050181 Google Scholar

42. 

Y. Guo et al., “Cloud detection for satellite imagery using attention-based u-net convolutional neural network,” Symmetry, 12 (6), 1056 (2020). https://doi.org/10.3390/sym12061056 SYMMAM 2073-8994 Google Scholar

43. 

M. Segal-Rozenhaimer et al., “Cloud detection algorithm for multi-modal satellite imagery using convolutional neural networks (CNN),” Remote Sens. Environ., 237 111446 (2020). https://doi.org/10.1016/j.rse.2019.111446 Google Scholar

44. 

M. Wieland, Y. Li and S. Martinis, “Multi-sensor cloud and cloud shadow segmentation with a convolutional neural network,” Remote Sens. Environ., 230 111203 (2019). https://doi.org/10.1016/j.rse.2019.05.022 Google Scholar

45. 

W. L. Smith, Jr., et al., “Exploring GOES-16 data to improve aircraft icing diagnoses,” https://ams.confex.com/ams/JOINTSATMET/meetingapp.cgi/Paper/360882 Google Scholar

47. 

S. Lindstrom, “Data fusion to mitigate loop heat pipe data dropouts with GOES-17,” (2019) https://cimss.ssec.wisc.edu/satellite-blog/archives/31740 Google Scholar

Biography

Allyson Rugg received her MS degree in atmospheric and oceanic sciences from the University of Colorado Boulder in 2020 and is currently a PhD candidate at the same university. She also works at the Research Applications Laboratory of the National Center for Atmospheric Research, where her areas of research include satellite remote sensing, cloud microphysics, and aviation weather safety.

Julie Haggerty is a project scientist at the National Center for Atmospheric Research. Her research involves the acquisition and analysis of remotely sensed data from airborne and satellite platforms. She received her BS and MS degrees in atmospheric science from the University of California, Davis, and her PhD in atmospheric and oceanic sciences from the University of Colorado. She has authored numerous scientific articles on remote sensing retrieval methods and applications to aviation icing.

Daniel Adriaansen received his MS degree in atmospheric sciences from the University of North Dakota in 2010. After graduating, he began his current position as an associate scientist at the National Center for Atmospheric Research in Boulder, Colorado. His primary research area focuses on the development of automated algorithms for the diagnosis and prediction of supercooled liquid water conditions for aviation interests.

William L. Smith, Jr., is a senior scientist in the Science Directorate at NASA Langley Research Center where he specializes in satellite remote sensing of clouds and radiation and develops weather and climate applications for the use of these data.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Allyson Rugg, Julie Haggerty, Daniel Adriaansen, and William L. Smith "Extrapolating shortwave geostationary satellite imagery of clouds into nighttime using longwave observations," Journal of Applied Remote Sensing 15(3), 038501 (8 July 2021). https://doi.org/10.1117/1.JRS.15.038501
Received: 14 December 2020; Accepted: 22 June 2021; Published: 8 July 2021
KEYWORDS
Clouds, Reflectivity, Satellites, Satellite imaging, Earth observing sensors, Shortwaves, Sun
