1. Introduction

Weather conditions can severely limit visibility in outdoor scenes. Atmospheric phenomena such as fog and haze significantly degrade the visibility of a captured scene, because visibility depends on the amount of particles suspended in the air. Haze is generally composed of water droplets or dry particles, and its effect cannot be ignored. Both the absorption and the scattering of light by particles and gases in the atmosphere reduce visibility, and scattering by particulates degrades visibility more severely than absorption does. As a result, distant objects and parts of the scene become invisible: the image loses contrast and color fidelity, and the visual quality of the scene is reduced. In a visual sense, the quality of the degraded image is unacceptable. Therefore, a simple and effective method for recovering the image scene is essential. Image dehazing is a challenging problem, and image recovery technology has attracted the attention of many researchers. The low visibility in hazy images affects the accuracy of computer vision techniques, such as object detection, face tracking, license plate recognition, and satellite imaging, as well as multimedia devices, such as surveillance systems and advanced driver assistance systems. Hence, haze removal techniques are important for improving the visibility of images. Restoring hazy images is a particularly challenging case that requires specific strategies, and widely varying methods have emerged to solve this problem. Enhancing degraded images is a fundamental task in many image processing and vision applications. Proposed strategies for enhancing the visibility of a degraded image include the following. The first type is the nonmodel method, such as histogram equalization,1 Retinex theory,2 wavelet transform,3 and the gamma correction curve.4 The shortcoming of these methods is that they seriously affect clear regions and preserve color fidelity less effectively. The second type is the model-based method, which depends on a physical model. Compared to nonmodel methods, these methods achieve better dehazing results by modeling scattering and absorption and by using additional information about the atmospheric conditions in the input images, such as scene depth,5,6 multiple images,7–9 polarization angles,10,11 and geometry models.12,13 Narasimhan and Nayar7,8 developed an interactive depth map for removing weather effects, but their method had limited effectiveness. Kopf et al.13 presented a novel deep photo system that uses prior knowledge of the scene geometry when browsing and enhancing photos. However, the method required multiple images or additional information to obtain a good estimate of scattering and absorption, which limited its applications. Hautière et al.12 designed a method that uses weather conditions and the a priori structure of a scene to restore image contrast for vehicle vision systems. A technique developed in Refs. 10, 14, and 15 exploits the partially polarized properties of airlight: the haze effect is estimated by analyzing images of the same scene taken through polarizing filters at different angles. In other words, calculating the difference among these images enables the magnitude of polarization to be used to estimate the airlight component. Because polarized light is not the major degradation factor, these methods are less robust for scenes with dense haze.
Another recently developed strategy uses a model and a single hazy image as the input. This approach has become a popular way of eliminating image haze by different strategies.16–20 Roughly, these methods can be categorized as contrast-based and statistical approaches. An example of a contrast-based approach is the method of Tan,17 in which the image restoration maximizes the local contrast while limiting the image intensity to be less than the global atmospheric light value. Tarel and Hautière19 combined a computationally efficient technique with a contrast-based technique. Their method assumes that the depth map must be smooth except along edges with large depth jumps. The second category, statistical approaches, includes the technique presented by Fattal,16 which employs a graphical model to resolve the ambiguous atmospheric light color and assumes that the image shading and scene transmission are partially uncorrelated. Under this assumption, mathematical statistics are used to estimate the albedo of the scene and infer the transmission medium. The method provides a physically consistent estimation. However, because the variation of the two functions in Ref. 16 is not obvious, this method requires substantial fluctuation of color information and luminance in the hazy scene. He et al.18 developed a statistical approach that observes the dark channel to roughly estimate the transmission map and then refines the final depth map with a relatively expensive matting strategy.21 In this approach, pixels must be processed across the entire image, which requires a long computation time. Nishino et al.20 used a Bayesian probabilistic formulation that fully leverages latent statistical structures to estimate the scene albedo and depth from a single degraded image. A recent study by Gibson and Nguyen22 proposed a new image dehazing method based on the dark channel concept. Unlike the previous dark channel method, their method finds the average of the darkest pixels in each ellipsoid. However, this assumption in Ref. 22 may select inaccurate pixels corresponding to bright objects. Fattal23 derived a local formation model that explains color lines in the context of hazy scenes and used the offsets of these lines to recover the scene transmission. In addition, Ancuti and Ancuti24 proposed a fusion-based strategy that derives two inputs from the original hazy image by white balancing and contrast enhancement; to keep the most significant detected features, the inputs in the fusion process are weighted by specific calculated maps. Recently, artificial neural networks (ANNs) have been widely used in many different fields and have proved suitable for many areas, such as control,25,26 identification,27,28 pattern recognition,29,30 equalization,31,32 and image processing.33,34 The cerebellar model articulation controller (CMAC) proposed by Albus35,36 is a widely applied ANN model. The CMAC imitates the structure and function of the human cerebellum and behaves as a local network. It can be viewed as a basis function network that uses plateau basis functions to compute the output for a given input data point; therefore, only the basis functions assigned to the hypercubes covering the input data are needed.
In other words, for a given input vector, only a few of the network nodes (or hypercube cells) are active and effectively contribute to the corresponding network output. Thus, the CMAC has good learning and generalization capabilities. However, the CMAC requires a large amount of memory in high-dimensional problems,37,38 is ineffective for online learning systems,39 and has relatively poor function approximation ability.40,41 Another problem is that it is difficult to determine the memory structure, e.g., to adaptively select structural parameters, in the CMAC model.42,43 Several researchers have proposed solutions for the above problems, including fuzzy membership functions,44 selection of learning parameters,45 topology structure,46 spline functions,47 and fuzzy C-means.48 Embedding fuzzy theory in the CMAC model has been widely discussed, leading to a fuzzy CMAC called FCMAC,49 which combines the concepts of fuzzy theory with the local generalization feature of the CMAC model.49,50 A recurrent network can also be embedded in the CMAC by adding feedback connections with a receptive field cell,51 which provides dynamic characteristics (the network considers past output information). However, the above-mentioned methods have several drawbacks. For example, the mapping capability of local approximation by hyperplanes is not good enough, and more hypercube cells (rules) are required. Therefore, this study develops a recurrent fuzzy cerebellar model articulation controller (RFCMAC) model to solve the above problems and to enable applications in widely varying fields. A hybrid of the recurrent fuzzy CMAC and a weighted strategy is used to process the image dehazing problem. The proposed method provides high-quality images and effectively suppresses halo artifacts. The advantages of the proposed method are as follows: (1) the RFCMAC model estimates the transmission map accurately; (2) an adaptive weighted strategy refines the transmission map and removes halo artifacts; (3) the recovered images exhibit good color contrast and balanced color saturation; and (4) the computation time is short compared with other state-of-the-art methods.
The rest of this paper is structured as follows. Section 2 discusses the theoretical background of light propagation in such environments. Section 3 introduces the proposed RFCMAC model and weighted strategy for image dehazing. Section 4 presents the experimental results and compares the proposed approach with other state-of-the-art methods. Finally, conclusions are drawn in Sec. 5.

2. Theory of Light Propagation

Generally, a camera taking an outdoor photograph obtains an image from the light of the surrounding environment, such as sunlight reflected from surfaces, as shown in Fig. 1. Due to absorption and scattering, light crossing the atmosphere is attenuated and dispersed. In physical terms, the number of suspended particles is low in sunny weather, so the captured image is clear. In contrast, dust and water particles in the air during volatile weather scatter light, which severely degrades image quality. In such degraded circumstances, only 1% of the reflected light reaches the observer, causing poor visibility.54 McCartney55 also noted that haze is an atmospheric phenomenon in which the clear sky is obscured by dust, smoke, and other dry particles. In the captured image, haze generates a distinctive gray hue that reduces visibility. Based on the above, the physical hazy-image model can be expressed as

I(x) = J(x) t(x) + A [1 − t(x)],  (1)

where I(x) is the observed hazy image and x denotes the pixel coordinate in the observed RGB color image. In Eq. (1), the hazy model consists of two main components: a direct attenuation term and a veiling light (i.e., airlight) term. J(x) is the light reflected from the surfaces, i.e., the haze-free image; t(x) represents the transmission of the reflected light; and A is the atmospheric light. The first component, J(x)t(x), represents the direct attenuation, or the direct transmission of the scene radiance. That is, attenuation results from the interaction between scene radiance and particles during transmission; it corresponds to the light reflected by surfaces in the scene that reaches the camera directly without being scattered. The other component, A[1 − t(x)], expresses the color cast of the scene due to the scattering of atmospheric light. The transmission t(x) denotes the fraction of light that travels between the surface and the observer without being scattered. Assuming a homogeneous medium, the transmission is t(x) = e^(−β d(x)), where β is the medium attenuation coefficient and d(x) represents the distance between the observer and the considered surface. Since the transmission decreases exponentially with depth, this relation yields image depth information without additional sensing devices. Therefore, only the transmission map and the color vector of the atmospheric light are needed to eliminate the haze effect in the image.

3. Proposed Method

This section presents the proposed method in detail; it uses the RFCMAC model and a weighted strategy to recover the scene from a hazy image. Figure 2 shows the flowchart of the proposed method, and the details are presented in the following sections.

3.1. Estimation of Transmission Map Features Using the RFCMAC Model

The transmission map and the atmospheric light play important roles in haze removal: a good dehazing method must estimate both appropriately to recover a hazy image. Haze, which is generated by light attenuation, depends on the distribution of particles in the air. According to Eq. (1), both the transmission map and the atmospheric light are important factors.
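To make the roles of t(x) and A in Eq. (1) concrete, the following minimal NumPy sketch synthesizes a hazy observation under the homogeneous-medium assumption t(x) = e^(−β d(x)). The function name, the toy scene, and all parameter values are illustrative assumptions, not part of the original experiments.

```python
import numpy as np

def hazy_image(J, depth, A, beta=1.0):
    """Synthesize a hazy observation via Eq. (1): I = J*t + A*(1 - t),
    with transmission t = exp(-beta * depth) for a homogeneous medium."""
    t = np.exp(-beta * depth)                  # transmission decays with depth
    return J * t[..., None] + A * (1.0 - t[..., None])

# Toy scene: a constant-albedo surface whose depth grows from left to right.
J = np.full((4, 4, 3), 0.5)                    # haze-free radiance in [0, 1]
depth = np.tile(np.linspace(0.1, 3.0, 4), (4, 1))
A = np.array([0.9, 0.9, 0.9])                  # atmospheric light color
I = hazy_image(J, depth, A)
print(I[0, 0], I[0, -1])                       # far pixels approach A
```

As the depth grows, the observed color converges to the atmospheric light A, which is why distant objects appear washed out and gray.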
Thus, the transmission map and the atmospheric light must be estimated accurately. This study proposes an RFCMAC model for estimating the transmission map more accurately. The RFCMAC model combines the traditional CMAC model, an interactive feedback mechanism, and a Takagi–Sugeno–Kang (TSK)-type linear function to obtain better solutions. The interactive feedback mechanism has the ability to capture critical information from other hypercube cells. The structure of the RFCMAC and the associated learning algorithm are presented as follows.

3.1.1. Structure of the RFCMAC model

The performance of the proposed RFCMAC model is enhanced by using an interactive feedback mechanism in the temporal layer and a TSK-type linear function in the consequent layer. Figure 3 shows the six-layered structure of the RFCMAC model. The structure realizes fuzzy IF–THEN rules (hypercube cells) of the form

Rule j: IF x_1 is A_1j and ... and x_n is A_nj and h_j is G_j, THEN y'_j = f_j(x_1, ..., x_n),

where x_i represents the i'th input variable, y'_j denotes the local output variable, A_ij is the linguistic term using the Gaussian membership function in the antecedent part, h_j is the output of the interactive feedback, and f_j is the basis TSK-type linear function of the input variables. The operation of the nodes in each layer of the RFCMAC model is described as follows; u^(l) represents the output of a node in the l'th layer.

Layer 1 (input layer): This layer accepts the input feature vector x = (x_1, ..., x_n), whose components are crisp values. The layer does not require adjustable weight parameters; each node directly transmits its input value to the next layer. The corresponding outputs are calculated as

u_i^(1) = x_i.  (2)

Layer 2 (fuzzification layer): This layer performs a fuzzification operation and uses a Gaussian membership function to calculate the firing degree of each dimension. The Gaussian membership function is defined as

u_ij^(2) = exp[−(u_i^(1) − m_ij)^2 / σ_ij^2],  (3)

where m_ij and σ_ij denote the mean and variance of the Gaussian membership function, respectively.

Layer 3 (spatial firing layer): Each node of this layer receives the firing strength of the associated hypercube cell from the fuzzy-set nodes in layer 2. All layer-2 outputs are collected in layer 3. Specifically, each node performs an algebraic product operation on its inputs to generate the spatial firing strength u_j^(3). This layer determines the number of hypercube cells in the current iteration. For each inference node, the output is computed as

u_j^(3) = Π_i u_ij^(2),  (4)

where Π denotes the product operation.

Layer 4 (temporal firing layer): Each node is a recurrent hypercube cell node that includes an internal feedback (self-loop) and an external interactive feedback loop. The output of the recurrent hypercube cell node depends on both the current spatial and the previous temporal firing strengths; that is, each node refers to information from itself and from other nodes. Because self-feedback alone is not sufficient to represent all the necessary information, the proposed model refers to relative information not only from the local source (the node's feedback from itself) but also from the global source (feedback from other nodes). The linear combination forming the temporal firing strength is

u_j^(4)(t) = (1 − θ_j) u_j^(3)(t) + θ_j h_j(t),  h_j(t) = Σ_{k=1}^{R} r_jk u_k^(4)(t − 1),  (5)

where θ_j represents the recurrent weight and determines the compromise ratio between the current and previous inputs to the network outputs; r_jj and r_jk denote the interactive weights of the hypercube cells from the node itself and from other nodes; r_jk is the connection weight from the k'th node to the j'th node and is initialized to a random value between 0 and 1; and R is the number of hypercube cells. Because θ_j lies between 0 and 1, the compromise ratio between the current and previous inputs is between 0 and 1.

Layer 5 (consequent layer): Each node computes a linear combination of the input variables, weighted by the temporal firing strength:

u_j^(5) = u_j^(4) f_j = u_j^(4) (a_0j + Σ_{i=1}^{n} a_ij x_i),  (6)

where a_ij are the TSK coefficients.

Layer 6 (output layer): This layer uses the centroid of area (COA) approach to defuzzify the fuzzy output into a scalar output. The actual output is derived as

y = Σ_{j=1}^{R} u_j^(5) / Σ_{j=1}^{R} u_j^(4).  (7)
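The forward pass through layers 1 to 6 can be summarized compactly. The following NumPy sketch follows Eqs. (2)–(7) for a single output; the class name, the initialization ranges, and the random interactive weights are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

class RFCMAC:
    """Minimal forward-pass sketch of the six-layer RFCMAC (illustrative)."""
    def __init__(self, n_inputs, n_cells, seed=0):
        rng = np.random.default_rng(seed)
        self.m = rng.uniform(0, 1, (n_inputs, n_cells))       # Gaussian means
        self.sigma = rng.uniform(0.3, 0.7, (n_inputs, n_cells))  # Gaussian widths
        self.theta = rng.uniform(0, 1, n_cells)               # recurrent weights
        self.r = rng.uniform(0, 1, (n_cells, n_cells))        # interactive weights
        self.a = rng.uniform(0, 1, (n_inputs + 1, n_cells))   # TSK coefficients
        self.prev = np.zeros(n_cells)                         # temporal firing at t-1

    def forward(self, x):
        # Layer 2: Gaussian membership degrees per input dimension and cell
        mu = np.exp(-((x[:, None] - self.m) ** 2) / self.sigma ** 2)
        # Layer 3: spatial firing strength (product over input dimensions)
        spatial = mu.prod(axis=0)
        # Layer 4: mix current spatial strength with interactive feedback
        # from all cells at the previous time step, per Eq. (5)
        h = self.r @ self.prev
        temporal = (1 - self.theta) * spatial + self.theta * h
        self.prev = temporal
        # Layer 5: TSK-type linear consequent weighted by temporal firing
        f = self.a[0] + x @ self.a[1:]
        local = temporal * f
        # Layer 6: centroid-of-area defuzzification, Eq. (7)
        return local.sum() / (temporal.sum() + 1e-12)

net = RFCMAC(n_inputs=3, n_cells=4)
print(net.forward(np.array([0.2, 0.5, 0.7])))
```

Only cells with nonnegligible spatial firing contribute to the sums in Eq. (7), which reflects the local-activation property of the CMAC family.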
3.1.2. Learning algorithm of the RFCMAC model

The proposed learning algorithm combines structure learning and parameter learning when constructing the RFCMAC model. Figure 4 shows a flowchart of the proposed learning algorithm. First, the self-constructing input-space partition in structure learning is based on a degree measure that appropriately captures the distribution of the input training data. In other words, the firing strength is used in structure learning to determine whether a new fuzzy hypercube cell (rule) should be added to satisfy the fuzzy partitioning of the input variables. Second, the parameter learning procedure performs the backpropagation algorithm, minimizing a given cost function to adjust the parameters. The RFCMAC model initially has no hypercube cell nodes except the input–output nodes; as online training data arrive, the structure and parameter learning processes create the nodes of layers 2 to 5 automatically. The parameters θ_j and r_jk of the initial model are randomly generated between 0 and 1.

Structure learning algorithm

The main purpose of structure learning is to determine whether a new hypercube cell should be extracted from the training data. For each incoming pattern x, the firing strength of the spatial firing layer can be regarded as the degree to which the incoming pattern belongs to the corresponding cluster. An entropy measure is used to estimate the distance between each data point and each membership function; entropy values between data points and the current membership functions are calculated to determine whether to add a new hypercube cell. The entropy measure is calculated from the spatial firing strength u_j^(3) as

EM_j = −D_j log2 D_j,  (8)

where D_j = exp(−1/u_j^(3)) and u_j^(3) is the spatial firing strength of the j'th cell. The maximum entropy measure over the existing cells is

EM_max = max_{1 ≤ j ≤ R} EM_j,  (9)

where R is the number of hypercube cells. Based on Eq. (9), the criterion of the degree measure decides whether a new hypercube cell should be generated for the new incoming data x: if EM_max ≤ k̄, where k̄ is a prespecified threshold, then a new hypercube cell is generated; otherwise, no cell is added. In order to limit the number of hypercube cells in the proposed RFCMAC model, the threshold value decays during the learning process. A low threshold leads to the learning of coarse clusters (i.e., few hypercube cells are generated), whereas a high threshold leads to the learning of fine clusters (i.e., many hypercube cells are generated). Therefore, the selection of the threshold value critically affects the simulation results; it determines whether a proper new hypercube cell is generated.
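A minimal sketch of the structure-learning step follows, assuming the entropy form reconstructed in Eqs. (8) and (9); the initial variance of a new cell and the handling of the first pattern are illustrative choices, and the threshold decay schedule is omitted.

```python
import numpy as np

def entropy_measure(firing, eps=1e-12):
    """Entropy-based degree measure of Eq. (8): cells that fire weakly on the
    incoming pattern yield a low value (the pattern is poorly covered)."""
    D = np.exp(-1.0 / np.maximum(firing, eps))
    return -D * np.log2(np.maximum(D, eps))

def maybe_add_cell(x, means, sigmas, k_bar, init_sigma=0.5):
    """Grow the rule base per Eq. (9): add a cell centred on x when even the
    best-matching hypercube cell covers the pattern insufficiently."""
    mu = np.exp(-((x[:, None] - means) ** 2) / sigmas ** 2)   # layer-2 degrees
    firing = mu.prod(axis=0)                                   # layer-3 strengths
    covered = firing.size > 0 and entropy_measure(firing).max() > k_bar
    if not covered:                                            # EM_max <= threshold
        means = np.hstack([means, x[:, None]])                 # new cell at x
        sigmas = np.hstack([sigmas, np.full((x.size, 1), init_sigma)])
    return means, sigmas
```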
Parameter learning algorithm

The parameters of the model are optimized from the training data, and parameter learning occurs concurrently with structure learning. For each piece of incoming data, five parameters (i.e., the TSK coefficients a_ij, the recurrent weights θ_j, the interactive weights r_jk, and the means m_ij and variances σ_ij of the receptive field functions) are tuned in the RFCMAC model, whether the hypercube cells are newly generated or already exist. The gradient descent method is used to adjust the parameters of the receptive field functions and the TSK-type function. To clarify, consider the single-output case. The goal is to minimize the cost function

E(t) = (1/2) [y_d(t) − y(t)]^2,  (10)

where y_d(t) denotes the desired output and y(t) is the model output at each discrete time t. In each training cycle, the activities from the input variables to the model output are calculated by a feed-forward pass. According to Eq. (10), the error is used to regulate the weight vector of the proposed RFCMAC model over a given number of training cycles. The well-known backpropagation learning method can be summarized as

w(t + 1) = w(t) + η (−∂E/∂w),  (11)

where η and w represent the learning rate and a free parameter, respectively. The learning rate η determines the pace of the search in the parameter space: a low value may lead to a locally optimal solution, whereas a high value leads to premature convergence that cannot obtain a better optimal solution. Therefore, the initial settings of η and the free parameters are based on empirical estimation. According to Eq. (10), the gradient with respect to an arbitrary weight is ∂E/∂w = −e(t) ∂y/∂w, where e(t) = y_d(t) − y(t) denotes the error between the desired and actual outputs. The corresponding antecedent and consequent parameters of the RFCMAC model are then adjusted by applying the chain rule recursively to this error term. With the cost function defined in Eq. (10), the update rule for the TSK coefficients can be derived as

a_ij(t + 1) = a_ij(t) + η_a e(t) [u_j^(4) / Σ_k u_k^(4)] x_i,  (12)

where x_0 = 1 for the constant term a_0j. The update rule for the recurrent weight of each hypercube cell is

θ_j(t + 1) = θ_j(t) + η_θ e(t) [(f_j − y) / Σ_k u_k^(4)] [h_j(t) − u_j^(3)(t)],  (13)

where η_θ represents the learning rate of the recurrent fuzzy weights and is set between 0 and 1; the interactive weights r_jk are updated in the same manner through ∂u_j^(4)(t)/∂r_jk = θ_j u_k^(4)(t − 1). The means m_ij and variances σ_ij of the receptive field functions are adjusted by

m_ij(t + 1) = m_ij(t) + η_m e(t) [(f_j − y) / Σ_k u_k^(4)] (1 − θ_j) u_j^(3) [2(x_i − m_ij) / σ_ij^2],  (14)

σ_ij(t + 1) = σ_ij(t) + η_σ e(t) [(f_j − y) / Σ_k u_k^(4)] (1 − θ_j) u_j^(3) [2(x_i − m_ij)^2 / σ_ij^3],  (15)

where i denotes the i'th input dimension and j denotes the j'th hypercube cell.

3.2. Weighted Strategy for Adaptively Refining the Transmission Map

In the real world, the transmission is not always constant within a window, especially around the contour of an object. In these inconstant regions, the recovered scene exhibits halos and block artifacts. The proposed solution is to use a pixel-window ratio (PWR) to detect the possible halo-artifact regions in the recovered scene and to use an adaptive weighting technique to mitigate the artifacts. The PWR is defined as the ratio between the pixel itself and the window mask,

PWR(x) = [min_{c∈{r,g,b}} I^c(x)] / [min_{y∈Ω(x)} min_{c∈{r,g,b}} I^c(y)],  (16)

where the numerator is the minimum color channel of the RGB pixel, from which the pixel transmission map (PTM) is derived, and the denominator is the minimum over the window mask Ω(x), from which the window transmission map (WTM) is derived. A PWR value very close to 1 means that the transmission within the WTM is nearly constant; halos cannot occur there, although the relative color saturation in the image is very high. In contrast, a PWR value far from 1 means that the transmission within the window is inconstant and a halo artifact will occur, whereas excessive color saturation is not a problem. Although the halo-artifact regions can be found from the PWR value, the main problem is how to mitigate the artifacts in these regions. The proposed solution is a weighted strategy that refines the transmission map:

t̂(x) = ω1 WTM(x) + (1 − ω1) PTM(x), if PWR(x) > T1;
t̂(x) = ω2 WTM(x) + (1 − ω2) PTM(x), if T2 < PWR(x) ≤ T1;
t̂(x) = WTM(x), otherwise,  (17)

where ω1 and ω2 are the weighting factors for mitigating the artifacts, and T1 > T2 are thresholds on the PWR. The range of ω1 and ω2 is [0, 1]. In Eq. (17), a PWR value greater than T1 means that the transmission differs greatly from the WTM, so the weight on the WTM is decreased and the weight on the PTM is increased; this situation requires a very small weighting factor ω1 to adjust the transmission rapidly so that the halo artifact can be eliminated. A PWR value between T2 and T1 means that the transmission differs only slightly from the WTM; for this situation, the weighting factor ω2 is greater than ω1 and is applied to adjust the transmission smoothly. Otherwise, the WTM value is used directly as the estimate.
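The refinement logic of Eqs. (16) and (17) can be sketched as follows; the thresholds T1 > T2, the weights w1 < w2 (standing in for ω1 and ω2), and the window size are illustrative placeholders for the empirically tuned values discussed next. The PTM and WTM arguments are assumed to be precomputed pixel- and window-based transmission estimates.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def refine_transmission(I, ptm, wtm, T1=1.5, T2=1.1, w1=0.1, w2=0.6, win=15):
    """Blend window-based (WTM) and pixel-based (PTM) transmission estimates
    according to the pixel-window ratio, Eqs. (16)-(17)."""
    pixel_min = I.min(axis=2)                           # min color channel per pixel
    window_min = minimum_filter(pixel_min, size=win)    # min over the local window
    pwr = pixel_min / np.maximum(window_min, 1e-6)      # Eq. (16); PWR >= 1
    out = wtm.copy()                                    # default: keep the WTM
    mid = (pwr > T2) & (pwr <= T1)                      # mildly inconstant: blend softly
    out[mid] = w2 * wtm[mid] + (1 - w2) * ptm[mid]
    high = pwr > T1                                     # likely halo region: favor PTM
    out[high] = w1 * wtm[high] + (1 - w1) * ptm[high]
    return out
```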
Parameter values ω1 and ω2 are based on a computational analysis of the intensity values associated with the halos. Figure 5(a) shows the original hazy image, and Figs. 5(b)–5(j) show the results obtained with different values of ω1 and ω2. Based on this analysis, the weighting factors ω1 and ω2 are set appropriately to improve the quality of the image dehazing.

3.3. Atmospheric Light Estimation

The atmospheric light must be carefully selected for effective image dehazing: an incorrectly selected atmospheric light yields very poor dehazing results. In some situations, bright objects are mistaken for atmospheric light, which results in erroneous image restoration. To solve this problem, the proposed solution averages the observed colors over the brightest 1% of pixels in the transmission map to refine the atmospheric light:

A^c = (1/|S|) Σ_{x∈S} I^c(x),  (18)

where A^c is the atmospheric light, c is the color channel, and S denotes the set containing the brightest 1% of pixels in the transmission map. Figure 6 shows the results of scene radiance recovery.

3.4. Image Recovery

This section describes how the atmospheric light and transmission features obtained in Secs. 3.2 and 3.3 are used as inputs to scene recovery. The scene radiance recovery step inverts Eq. (1) to obtain the dehazed image; the scene J(x) is recovered as

J(x) = [I(x) − A] / max(t̂(x), t_0) + A,  (19)

where t_0 is the lower bound of the transmission and is set to 0.15. Keeping a small amount of haze in the recovered image makes it look more natural.
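A compact sketch of the recovery stage follows, combining the atmospheric light estimate of Eq. (18) with the inversion of Eq. (19); the function names are illustrative.

```python
import numpy as np

def estimate_atmospheric_light(I, t, percent=0.01):
    """Average the observed colors over the brightest 1% of the transmission
    map to estimate the atmospheric light, per Eq. (18)."""
    n = max(1, int(percent * t.size))
    idx = np.argsort(t.ravel())[-n:]          # brightest 1% of the map
    return I.reshape(-1, 3)[idx].mean(axis=0)

def recover_scene(I, t, A, t0=0.15):
    """Invert the haze model, Eq. (19): J = (I - A) / max(t, t0) + A."""
    t = np.maximum(t, t0)                     # lower-bound t to avoid noise blow-up
    return (I - A) / t[..., None] + A
```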
4. Results and Discussion

The experiments were implemented in the C language on a Pentium(R) i7-3770 CPU at 3.20 GHz. The effectiveness and robustness of the proposed method were verified by testing several hazy images, namely, "New York," "ny12," "ny17," "y01," and "y16." The proposed approach was also compared with other well-known haze removal methods.13,16,17–20,24 Performance testing was divided into three parts: (1) results of removing the halo, (2) assessment of the visual images, and (3) analysis of the quantitative measurements.

4.1. Results of Removing the Halo

Figure 7 shows the results of removing the halo for different images. In Fig. 7(a), the transmission map is estimated from the input hazy image using a fixed patch size. Although the dehazing results are good, some block effects (halo artifacts) remain in the blue blocks of Fig. 7(a); this occurs because the transmission is not always constant within a patch. In Fig. 7(b), the halo artifacts in the red blocks are suppressed by the proposed method. Therefore, the halo artifacts do not appear when the proposed method is used.

4.2. Assessment of the Visual Images

Figure 8 shows the comparison results: the dehazing results obtained by the proposed method are better than those of Fattal,16 Tarel and Hautière,19 and Ancuti and Ancuti.24 Additionally, Schechner and Averbuch14 adopted a multi-image polarization-based dehazing method that employs the worst and the best polarization states among the existing image versions. For comparison with the method of Schechner and Averbuch,14 we processed only one of the inputs used in that study. The dehazing results obtained by the proposed method are superior to those of Schechner and Averbuch.14 Figures 9 and 10 also show comparison results for the proposed approach and other state-of-the-art methods. Figure 9 shows that, compared with the techniques developed by Tan17 and by Tarel and Hautière,19 the proposed method preserves the fine transitions in the hazy regions and does not generate unpleasant artifacts, whereas the techniques of Tan17 and Tarel and Hautière19 produce oversaturated colors. Although the technique developed by Fattal16 obtains good dehazing results, its applicability is limited in dense-haze situations; the poor performance mainly results from the use of a statistical analysis that must estimate the variance of the depth map. The technique of Kopf et al.13 obtains good color contrast, but only a little detailed texture is presented in the image. The technique of He et al.18 shows an obvious color difference in some regions. The technique developed by Nishino et al.20 yields aesthetically pleasing results, but some artifacts are introduced in regions considered to be at infinite depth. The method developed by Ancuti and Ancuti24 obtains a natural image, but color differences are visible in some regions, such as around objects. The proposed method effectively handles haze, halation, and color cast. An image of a mountain was also used for comparison with other state-of-the-art methods. Figure 10 shows the dehazing results of the various methods. The comparisons show that the Tan17 method produces oversaturation and causes color differences and halo artifacts. Good color contrast is obtained by the Fattal16 method, but some differences in detailed texture and some color differences are visible. The results of Kopf et al.13 are similar to those of Fattal.16 Although the method of Tarel and Hautière19 reproduces detailed texture well, it generates color differences. Because of color differences caused by oversaturation, the results obtained by the He et al.18 method look unnatural. The technique developed by Nishino et al.20 obtains a good overall image, but an unnatural appearance is visible in the clouds in the sky. The technique of Ancuti and Ancuti24 performs well in terms of true color contrast; however, a slight unnaturalness still occurs around the sky. Overall, the results obtained by the proposed method are superior to those of the other methods.
4.3. Quantitative Measurement Results

A real-world quantitative analysis of image restoration is not easy to implement because no validated standard reference image exists. Therefore, to demonstrate the effectiveness of the proposed algorithm relative to other image dehazing methods, i.e., those of Tan,17 Fattal,16 Kopf et al.,13 Tarel and Hautière,19 He et al.,18 Nishino et al.,20 and Ancuti and Ancuti,24 this study employs two well-known quantitative metrics: the S-CIELAB assessment of Zhang and Wandell56 and the blind measure of Hautière et al.57 The S-CIELAB metric56 is used to estimate color fidelity in visual images because it incorporates the spatial color sensitivity of the eye and evaluates the color contrast between the restored image and the original image; it therefore yields accurate predictions. The color contrast is proportional to the S-CIELAB metric: a small S-CIELAB value indicates low color contrast, and a large value indicates high color contrast. Table 1 shows the estimated color contrast of the various methods.

Table 1. Estimation results of color contrast using various methods.
The blind measure methodology57 calculates the ratio between the image gradients before and after restoration. This calculation is based on the concept of visibility, which is commonly used in lighting engineering. This study considers four images, named ny12, ny17, y01, and y16. The indicator e represents the edges newly visible after restoration, and the indicator r̄ represents the mean ratio of the gradients at visible edges. The blind measures are calculated as

e = (n_r − n_o) / n_o,  (20)

where n_r and n_o are the numbers of visible edges in the restored image and the original image, respectively, and

r̄ = exp[(1/n_r) Σ_{P_i∈℘_r} log r_i],  (21)

where ℘_r is the set of visible edges in the restored image, P_i is the i'th element of the set ℘_r, and r_i denotes the ratio between the gradients of the restored image and the original image at the i'th visible edge. Table 2 shows the performance of the different algorithms in terms of e and r̄. In Table 2, the number of edges newly visible after restoration (i.e., the e value) of the proposed method is larger than those of the other methods,13,16–18,20 whereas the r̄ value of the proposed method is smaller than those of the Tan17 and the Tarel and Hautière19 methods. However, the comparisons of the visual images show that both of these methods (i.e., Refs. 17 and 19) exhibit oversaturation and exaggerated color contrast.

Table 2. Performance of different algorithms with e and r̄.
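The two indicators of Eqs. (20) and (21) can be sketched as follows. Ref. 57 detects visible edges with a contrast-based segmentation; the plain gradient threshold used here is a simplified stand-in, and all names and values are illustrative.

```python
import numpy as np

def blind_measure(grad_orig, grad_rest, thresh=0.1):
    """Compute the visible-edge indicators e and r-bar, Eqs. (20)-(21), from
    gradient-magnitude maps of the original and restored images."""
    vis_o = grad_orig > thresh                 # edges visible before restoration
    vis_r = grad_rest > thresh                 # edges visible after restoration
    n_o, n_r = vis_o.sum(), vis_r.sum()
    e = (n_r - n_o) / max(n_o, 1)              # Eq. (20): newly visible edges
    r = grad_rest[vis_r] / np.maximum(grad_orig[vis_r], 1e-6)
    r_bar = np.exp(np.log(np.maximum(r, 1e-6)).mean())  # Eq. (21): geometric mean
    return e, r_bar
```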
The computation time of the proposed method was also compared with that of other state-of-the-art techniques using test images of the same average size. The comparisons showed that the proposed method requires 4.5 s, Fattal16 requires 35 s, the technique of Tarel and Hautière19 needs 8 s, and He et al.18 requires 20 s; the method of Tan17 is also slower than the proposed method. Therefore, the proposed method has the shortest computation time. Based on the above analyses and comparisons in Secs. 4.1–4.3, the proposed hybrid of the RFCMAC model and the weighted strategy efficiently addresses halo removal, color contrast enhancement, and computation time reduction.

5. Conclusions

The hybrid RFCMAC model and weighted strategy developed in this study effectively restore hazy and foggy images. The proposed RFCMAC model estimates the transmission map, and the atmospheric light is accurately selected as the average over the brightest 1% of pixels. An adaptively weighted strategy is applied to generate a refined transmission map that removes the halo effect. Experimental results demonstrate the superiority of the proposed method in enhancing color contrast, balancing color saturation, removing halo artifacts, and reducing computation time.

References
1. J. A. Stark, "Adaptive image contrast enhancement using generalizations of histogram equalization," IEEE Trans. Image Process. 9(5), 889–896 (2000). http://dx.doi.org/10.1109/83.841534
2. Z. Rahman, D. J. Jobson, and G. A. Woodell, "Retinex processing for automatic image enhancement," J. Electron. Imaging 13(1), 100–110 (2004). http://dx.doi.org/10.1117/1.1636183
3. P. Scheunders, "A multivalued image wavelet representation based on multiscale fundamental forms," IEEE Trans. Image Process. 11(5), 568–575 (2002). http://dx.doi.org/10.1109/TIP.2002.1006403
4. C. O. Ancuti et al., "A fast semi-inverse approach to detect and remove the haze from a single image," in Proc. of the Asian Conf. on Computer Vision, 501–514 (2010).
5. J. P. Oakley and B. L. Satherley, "Improving image quality in poor visibility conditions using a physical model for contrast degradation," IEEE Trans. Image Process. 7(2), 167–179 (1998). http://dx.doi.org/10.1109/83.660994
6. K. K. Tan and J. P. Oakley, "Physics-based approach to color image enhancement in poor visibility conditions," J. Opt. Soc. Am. A 18(10), 2460–2467 (2001). http://dx.doi.org/10.1364/JOSAA.18.002460
7. S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). http://dx.doi.org/10.1109/TPAMI.2003.1201821
8. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization based vision through haze," Appl. Opt. 42(3), 511–525 (2003). http://dx.doi.org/10.1364/AO.42.000511
9. P. S. Pandian, M. Kumaravel, and M. Singh, "Multilayer imaging and compositional analysis of human male breast by laser reflectometry and Monte Carlo simulation," Med. Biol. Eng. Comput. 47(11), 1197–1206 (2009). http://dx.doi.org/10.1007/s11517-009-0531-3
10. S. Shwartz, E. Namer, and Y. Schechner, "Blind haze separation," in 2006 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR '06), 1984–1991 (2006). http://dx.doi.org/10.1109/CVPR.2006.71
11. Y. Schechner, S. Narasimhan, and S. Nayar, "Instant dehazing of images using polarization," in Proc. of the 2001 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR '01), 325–332 (2001). http://dx.doi.org/10.1109/CVPR.2001.990493
12. N. Hautière, J. P. Tarel, and D. Aubert, "Towards fog-free in-vehicle vision systems through contrast restoration," in IEEE Conf. on Computer Vision and Pattern Recognition, 1–8 (2007). http://dx.doi.org/10.1109/CVPR.2007.383259
13. J. Kopf et al., "Deep photo: model-based photograph enhancement and viewing," ACM Trans. Graph. 27(5), 1–10 (2008). http://dx.doi.org/10.1145/1409060
14. Y. Schechner and Y. Averbuch, "Regularized image recovery in scattering media," IEEE Trans. Pattern Anal. Mach. Intell. 29(9), 1655–1660 (2007). http://dx.doi.org/10.1109/TPAMI.2007.1141
15. E. Namer, S. Shwartz, and Y. Schechner, "Skyless polarimetric calibration and visibility enhancement," Opt. Express 17(2), 472–493 (2009). http://dx.doi.org/10.1364/OE.17.000472
16. R. Fattal, "Single image dehazing," ACM Trans. Graph. 27(3) (2008). http://dx.doi.org/10.1145/1360612.1360671
17. R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1–8 (2008). http://dx.doi.org/10.1109/CVPR.2008.4587643
18. K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1956–1963 (2009). http://dx.doi.org/10.1109/CVPR.2009.5206515
19. J. P. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE Int. Conf. Computer Vision, 2201–2208 (2009). http://dx.doi.org/10.1109/ICCV.2009.5459251
20. K. Nishino, L. Kratz, and S. Lombardi, "Bayesian defogging," Int. J. Comput. Vision 98(3), 263–278 (2012). http://dx.doi.org/10.1007/s11263-011-0508-1
21. A. Levin, D. Lischinski, and Y. Weiss, "A closed form solution to natural image matting," IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 228–242 (2008). http://dx.doi.org/10.1109/TPAMI.2007.1177
22. K. Gibson and T. Nguyen, "An analysis of single image defogging methods using a color ellipsoid framework," EURASIP J. Image Video Process. 2013(37) (2013). http://dx.doi.org/10.1186/1687-5281-2013-37
23. R. Fattal, "Dehazing using color-lines," ACM Trans. Graph. 34(1), 13 (2014). http://dx.doi.org/10.1145/2651362
24. C. O. Ancuti and C. Ancuti, "Single image dehazing by multi-scale fusion," IEEE Trans. Image Process. 22(8), 3271–3282 (2013). http://dx.doi.org/10.1109/TIP.2013.2262284
25. C. Xianzhong and K. G. Shin, "Direct control and coordination using neural networks," IEEE Trans. Syst., Man, Cybern. 23(3), 686–697 (1993). http://dx.doi.org/10.1109/21.256542
26. S. Wu and K. Y. M. Wong, "Dynamic overload control for distributed call processors using the neural network method," IEEE Trans. Neural Networks 9(6), 1377–1387 (1998). http://dx.doi.org/10.1109/72.728389
27. T. Yamada and T. Yabuta, "Dynamic system identification using neural networks," IEEE Trans. Syst., Man, Cybern. 23(1), 204–211 (1993). http://dx.doi.org/10.1109/21.214778
28. S. Lu and T. Basar, "Robust nonlinear system identification using neural-network models," IEEE Trans. Neural Networks 9(3), 407–429 (1998). http://dx.doi.org/10.1109/72.668883
29. C. A. Perez et al., "Linear versus nonlinear neural modeling for 2-D pattern recognition," IEEE Trans. Syst., Man, Cybern. A 35(6), 955–964 (2005). http://dx.doi.org/10.1109/TSMCA.2005.851268
30. T. H. Oong and N. A. M. Isa, "Adaptive evolutionary artificial neural networks for pattern classification," IEEE Trans. Neural Networks 22(11), 1823–1836 (2011). http://dx.doi.org/10.1109/TNN.2011.2169426
31. S. K. Nair and J. Moon, "Data storage channel equalization using neural networks," IEEE Trans. Neural Networks 8(5), 1037–1048 (1997). http://dx.doi.org/10.1109/72.623206
32. C. You and D. Hong, "Nonlinear blind equalization schemes using complex-valued multilayer feedforward neural networks," IEEE Trans. Neural Networks 9(6), 1442–1455 (1998). http://dx.doi.org/10.1109/72.728394
33. Y. S. Yang et al., "Automatic identification of human helminth eggs on microscopic fecal specimens using digital image processing and an artificial neural network," IEEE Trans. Biomed. Eng. 48(6), 718–730 (2001). http://dx.doi.org/10.1109/10.923789
34. L. Ma and K. Khorasani, "Facial expression recognition using constructive feedforward neural networks," IEEE Trans. Syst., Man, Cybern. B 34(3), 1588–1595 (2004). http://dx.doi.org/10.1109/TSMCB.2004.825930
35. J. S. Albus, "A new approach to manipulator control: the cerebellar model articulation controller (CMAC)," J. Dyn. Syst., Meas., Contr. 97(3), 220–227 (1975). http://dx.doi.org/10.1115/1.3426922
36. J. S. Albus, "Data storage in the cerebellar model articulation controller (CMAC)," J. Dyn. Syst., Meas., Contr. 97, 228–233 (1975). http://dx.doi.org/10.1115/1.3426923
37. Z. J. Lee, Y. P. Wang, and S. F. Su, "A genetic algorithm based robust learning credit assignment cerebellar model articulation controller," Appl. Soft Comput. 4(4), 357–367 (2004). http://dx.doi.org/10.1016/j.asoc.2004.01.007
38. Y. G. Leu et al., "Compact cerebellar model articulation controller for ultrasonic motors," Int. J. Innovative Comput., Inf. Control 6(12), 5539–5552 (2010).
39. S. F. Su, T. Ted, and T. H. Huang, "Credit assigned CMAC and its application to online learning robust controllers," IEEE Trans. Syst., Man, Cybern. B 33(2), 202–213 (2003). http://dx.doi.org/10.1109/TSMCB.2003.810447
40. J. Wu and F. Pratt, "Self-organizing CMAC neural networks and adaptive dynamic control," in Proc. of the 1999 IEEE Int. Symp. on Intelligent Control/Intelligent Systems and Semiotics, 259–265 (1999). http://dx.doi.org/10.1109/ISIC.1999.796665
41. S. Commuri and F. L. Lewis, "CMAC neural networks for control of nonlinear dynamical systems: structure, stability, and passivity," Automatica 33(4), 635–641 (1997). http://dx.doi.org/10.1016/S0005-1098(96)00180-X
42. K. S. Hwang and C. S. Lin, "Smooth trajectory tracking of three-link robot: a self-organizing CMAC approach," IEEE Trans. Syst., Man, Cybern. B 28(5), 680–692 (1998). http://dx.doi.org/10.1109/3477.718518
43. H. M. Lee, C. M. Chen, and Y. F. Lu, "A self-organizing HCMAC neural-network classifier," IEEE Trans. Neural Networks 14(1), 15–27 (2003). http://dx.doi.org/10.1109/TNN.2002.806607
44. C. C. Jou, "A fuzzy cerebellar model articulation controller," in Proc. IEEE Int. Conf. Fuzzy Systems, 1171–1178 (1992). http://dx.doi.org/10.1109/FUZZY.1992.258722
45. S. H. Lane and J. Militzer, "A comparison of five algorithms for the training of CMAC memories for learning control systems," Automatica 28(5), 1027–1035 (1992). http://dx.doi.org/10.1016/0005-1098(92)90158-C
46. C. S. Lin and C. K. Li, "A new neural network structure composed of small CMACs," in Proc. IEEE Conf. Neural Systems, 1777–1783 (1996). http://dx.doi.org/10.1109/ICNN.1996.549170
47. D. S. Reay, "CMAC and B-spline neural networks applied to switched reluctance motor torque estimation and control," in The 29th Annual Conf. of the IEEE Industrial Electronics Society, 323–328 (2003). http://dx.doi.org/10.1109/IECON.2003.1280001
48. S. Chen and D. Zhang, "Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure," IEEE Trans. Syst., Man, Cybern. B 34(4), 1907–1916 (2004). http://dx.doi.org/10.1109/TSMCB.2004.831165
49. S. F. Su, Z. J. Lee, and Y. P. Wang, "Robust and fast learning for fuzzy cerebellar model articulation controllers," IEEE Trans. Syst., Man, Cybern. B 36(1), 203–208 (2006). http://dx.doi.org/10.1109/TSMCB.2005.855570
50. T. F. Wu, P. S. Tsai, and L. S. Wang, "Adaptive fuzzy CMAC control for a class of nonlinear systems with smooth compensation," IEE Proc. Control Theory Appl. 153(6), 647–657 (2006). http://dx.doi.org/10.1049/ip-cta:20050362
51. Y. F. Peng and C. M. Lin, "Intelligent hybrid control for uncertain nonlinear systems using a recurrent cerebellar model articulation controller," IEE Proc. Control Theory Appl. 151(5), 589–600 (2004). http://dx.doi.org/10.1049/ip-cta:20040903
52. J. B. Theocharis, "A high-order recurrent neuro-fuzzy system with internal dynamics: application to the adaptive noise cancellation," Fuzzy Sets Syst. 157(4), 471–500 (2006). http://dx.doi.org/10.1016/j.fss.2005.07.008
53. D. G. Stavrakoudis and J. B. Theocharis, "A recurrent fuzzy neural network for adaptive speech prediction," in Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, 2056–2061 (2007). http://dx.doi.org/10.1109/ICSMC.2007.4414191
54. H. Koschmieder, "Theorie der horizontalen Sichtweite," Beiträge zur Physik der freien Atmosphäre, Keim & Nemnich, Munich, Germany (1924).
55. E. J. McCartney, Optics of the Atmosphere: Scattering by Molecules and Particles, Wiley, New York (1976).
56. X. Zhang and B. A. Wandell, "Color image fidelity metrics evaluated using image distortion maps," Signal Process. 70(3), 201–214 (1998). http://dx.doi.org/10.1016/S0165-1684(98)00125-X
57. N. Hautière et al., "Blind contrast restoration assessment by gradient ratioing at visible edges," Image Anal. Stereol. 27(2), 87–95 (2008). http://dx.doi.org/10.5566/ias.v27.p87-95
Biography

Jyun-Guo Wang received his MS degree in computer science and information engineering from Chaoyang University of Technology, Taichung, Taiwan, in 2007. He is currently a PhD candidate in the Institute of Computer and Communication Engineering, Department of Electrical Engineering, National Cheng Kung University. His research interests are in the areas of neural networks, fuzzy systems, and image processing.

Shen-Chuan Tai received his BS and MS degrees in electrical engineering from National Taiwan University, Taipei, Taiwan, in 1982 and 1986, respectively, and his PhD in computer science from National Tsing Hua University, Hsinchu, Taiwan, in 1989. He is currently a professor in the Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan. His teaching and research interests include data compression, DSP, VLSI array processors, computerized electrocardiogram processing, and multimedia systems.

Cheng-Jian Lin received his PhD in electrical and control engineering from National Chiao-Tung University, Hsinchu, Taiwan, in 1996. Currently, he is a distinguished professor in the Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung, Taiwan. His current research interests include soft computing, pattern recognition, intelligent control, image processing, bioinformatics, and Android/iPhone program design.