Open Access
26 September 2023

Review and evaluation of recent advancements in image dehazing techniques for vision improvement and visualization
Sahadeb Shit, Dip Narayan Ray
Abstract

Vision gets obscured in adverse weather conditions, such as heavy downpours, dense fog, haze, snowfall, etc., which increase the number of road accidents every year. Modern methodologies are being developed at various academic institutions and laboratories to enhance visibility in such adverse weather with the help of technology. We review different dehazing techniques across many applications, such as outdoor surveillance, underwater navigation, intelligent transportation systems, object detection, etc. Dehazing is achieved in four primary steps: capture of hazy images, estimation of atmospheric light with a transmission map, image enhancement, and restoration. These four procedures provide a step-by-step method for resolving the complicated haze-removal problem. Furthermore, this review also explores the limitations of existing deep learning-based methods with the available datasets and the challenges of the algorithms for enhancing visibility in adverse weather. The reviewed techniques reveal gaps in the case of remote sensing, satellite, and telescopic imaging. Through the experimental analysis of various image dehazing approaches, one can assess the effectiveness of each phase of the image dehazing process and design more effective dehazing techniques.

1.

Introduction

Vision through adverse weather is the most important issue to be resolved for any vision-based application, be it a transportation system, navigation, or surveillance. During winter, land transport faces a tremendous problem of low visibility due to fog, and it affects the daily lives of people.1 During low visibility, drivers rely mainly on the headlights of the vehicle. But during unfavorable weather (rain, haze, snowfall, and fog), the headlights fail to enhance the clarity of visibility because the light is scattered by the precipitation. Precipitation scatters the light across a wide range of angles, and this disturbs the vision of the driver.13 As a result, accidents happen, sometimes causing loss of life. Hence, there is a compelling demand for vision algorithms capable of maintaining robust performance in challenging real-world scenarios characterized by adverse weather and varying lighting conditions.

1.1.

Constituents for Adverse Weather

Knowledge about the constituents of adverse weather, their formation, and their concentrations is essential for carrying out research in the domain.2–5 The major constituents are air, haze, fog, cloud, and rain. The weather conditions differ mainly due to particle size, type, and concentration in the atmosphere. Particle size and concentration are the two most important parameters governing the variation in weather conditions; particles of larger size at lower concentration may produce conditions similar to those caused by smaller particles at higher concentration. The weather conditions can degrade or fluctuate because of the larger particles present in the atmosphere, as presented in Table 1.

Table 1

Adverse weather conditions and the types and concentrations of the associated particles.

Condition | Concentration (cm^−3) | Radius (μm) | Particle type
Rain | 10^−2 to 10^−5 | 10^2 to 10^4 | Water drop
Cloud | 300 to 10 | 1 to 10 | Water droplet
Fog | 100 to 10 | 1 to 10 | Water droplet
Haze | 10^3 to 10 | 10^−2 to 1 | Aerosol

Haze is responsible for producing a dark or pale blue tint, which has an impact on visibility. It is a dispersed mixture of smoke and road dust from varied sources, including plant exudation, volcanic dust, sea salt, and combustion products.2 Fog forms when water vapor condenses into tiny water droplets that remain suspended in the air.3,4 Clouds are distinguished from fog mainly in that they form at greater elevations rather than at ground level. The majority of clouds are composed of haze-like water crystals, whereas others are formed of lengthy snow chunks and glacier dust particles.5 Rain consists of water droplets that condense from atmospheric water vapor and then become heavy enough to fall to the earth’s surface under gravity. Optical features of the weather particles cause irregular spatial and temporal changes in the images.6 This change is especially challenging to analyze in the case of heavy rain and snowfall.

1.2.

Mathematical Expression of Atmospheric Scattering for the Creation of Haze

Clear vision depends on two important factors: contrast and brightness. Airlight and attenuation are the phenomena resulting from the scattering of light by atmospheric particles. Airlight affects the brightness of the scene, whereas attenuation diminishes the color contrast of the region of interest. The inverse relationship between airlight and attenuation provides a theoretical basis for the degradation mechanism of hazy images.2 That is why vision in adverse weather should be described using the airlight model [Fig. 1(a)] and the light attenuation model. Scattering is the redirection of electromagnetic energy by small particles suspended in a medium of a different refractive index.2,4,5 There are three types of scattering, as described in Table 2. The ratio of particle diameter (D) to light wavelength (λ) determines the type and efficiency of scattering.4

Fig. 1

(a) Light scattered from a source toward a camera by atmospheric particles. (b) An illuminated and measured unit area of randomly oriented colloidal matter.


Table 2

Different types of scattering.

Visible light scattering (λ = wavelength of light, D = particle diameter)
Type | D (μm) | Particles | D/λ
Rayleigh | 0.0001 to 0.001 | Air molecules | <1
Mie | 0.01 to 1.0 | Aerosols | ≈1
Geometric | 10 to 100 | Cloud droplets | >1

Figure 1(b) gives an overview of a simple illumination and detection geometry. A unit volume of a scattering medium containing suspended particles is illuminated with spectral irradiance E(λ) per unit cross-sectional area. In the direction θ of the observer, the radiant intensity I(θ,λ) of the unit volume equals

Eq. (1)

I(θ,λ)=β(θ,λ)E(λ).

The radiant intensity I(θ,λ) is the flux radiated per unit solid angle, per unit volume of the medium, where β(θ,λ) is the angular scattering coefficient2 and E(λ) denotes the spectral irradiance per cross-sectional area.2 The total flux scattered in all directions, obtained by integrating over the entire sphere, can be described as

Eq. (2)

ϕ(λ) = β(λ)E(λ).

Airlight is a phenomenon caused by the scattering of environmental illumination by particles suspended in the atmosphere, whereby the atmosphere acts as a light source.7 Environmental illumination can come from a variety of sources, including diffuse skylight, direct sunlight, and light reflected from the ground. Airlight increases the apparent brightness of a scene point with depth. Attenuation refers to the decrease in the brightness of a light source as the distance through the transmission medium increases.4,6,8 Consider a collimated beam of light incident on the atmospheric medium and passing through an infinitesimally thin sheet of thickness dx. The fractional change in irradiance at location x is

Eq. (3)

dE(x,λ)/E(x,λ) = −β(λ)dx.

Integrating both sides between the boundaries x = 0 and x = d gives

Eq. (4)

E(d,λ) = E0(λ)e^(−∫0^d β(λ)dx),
where E0(λ) is the irradiance at the source x = 0. This is Bouguer’s exponential attenuation law.2 Attenuation owing to scattering is sometimes described in terms of the optical thickness T = ∫0^d β(λ)dx. It is commonly assumed that the coefficient β(λ) is constant for horizontal paths. The scattering coefficient is then independent of distance, and the attenuation law for a point source can be written as

Eq. (5)

E(d,λ) = I0(λ)e^(−β(λ)d)/d^2.

Here I0(λ) is the radiant intensity of the point source.4,5 All scattered flux is treated as removed from the beam, with absorption folded into the attenuation coefficient. Equation (5) therefore describes the fraction of energy transmitted over the path length d.
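As a quick numerical illustration of Eqs. (4) and (5), the following minimal Python sketch evaluates Bouguer's law for a collimated beam and the point-source form with the 1/d² falloff; the value of β and the unit irradiance/intensity are illustrative assumptions, not values from the text.

```python
import numpy as np

# Minimal numeric illustration of Eqs. (4) and (5), assuming a constant
# scattering coefficient beta along a horizontal path (values are illustrative).
beta = 0.05          # attenuation coefficient per meter (assumed)
E0   = 1.0           # irradiance at the source, x = 0
I0   = 1.0           # radiant intensity of a point source

d = np.linspace(1.0, 200.0, 5)                   # path lengths in meters

E_collimated = E0 * np.exp(-beta * d)            # Eq. (4): Bouguer's law
E_point_src  = I0 * np.exp(-beta * d) / d**2     # Eq. (5): point source with 1/d^2 falloff

for dist, ec, ep in zip(d, E_collimated, E_point_src):
    print(f"d = {dist:6.1f} m   collimated: {ec:.4f}   point source: {ep:.6f}")
```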

Hence, the mitigation of haze in images requires the estimation of an airlight map, as highlighted above.2–5,8 Airlight estimation is essential for determining the depth information within hazy images, relying on scene-specific characteristics. The initial stage of any haze removal method typically involves contrast enhancement and image restoration.9,10 The second category of image dehazing methods leverages conventional, non-learning-based models of the image degradation process to obtain depth information.9,11 These approaches capture multiple real-time images of the same scene under different weather conditions and enhance the actual image by reversing the degradation process.9–13

This paper presents a review of the prior research on various techniques of image enhancement and restoration employed to ensure visual clarity under hazy conditions. Apart from this, the article also reviews various sensor-based atmospheric scattering models of a hazy image. In addition, different learning-based methodologies have been assessed based on their outcomes and thoroughly examined by different haze-removal datasets.

The significant contributions of this review paper are the following:

  • 1. The article provides an in-depth review of over 150 recent papers, highlighting their contributions. This comprehensive review is valuable for new researchers in the allied field, as it will allow them to gain a rapid understanding of the historical context, current advancements, and potential scope of future research in the domain of real-time image dehazing.

  • 2. The paper reviews conventional methods with advantages and limitations based on the transmission parameter estimation for single and multiple-image dehazing.

  • 3. A comprehensive review of the different learning-based image dehazing techniques is also conducted toward proper visualization of non-homogeneous real-time images.

  • 4. Experimental evaluations have been conducted on the different dehazing datasets to compare the different state-of-the-art models on real-time foggy images with their advantages and limitations.

The current article is organized as follows: Sec. 2 elaborates on the different aspects of image dehazing and a basic framework of haze removal. The complete review and mathematical models of haze removal techniques, with their limitations, are discussed in Sec. 3. An experimental evaluation of different image dehazing techniques on benchmark datasets is provided in Sec. 4. Finally, challenges and future research trends are discussed in Secs. 5 and 6, respectively.

2.

Aspects of Image Dehazing

Outdoor images are generally susceptible to different atmospheric conditions, especially haze, fog, and heavy rainfall. The images captured under these atmospheric conditions exhibit low contrast, distorted colors, and loss of scene detail. Image enhancement with depth map estimation9 is a very active research area and provides the basic framework for haze removal, as shown in Fig. 2. The depth map can be derived in terms of an airlight map and a transmission map. Some methods estimate the depth information from scene properties, as shown in Fig. 2; the scene properties can be a shading function or a contrast-based cost function. Once depth information is estimated, it is easier to restore the image using the fog model.9

Fig. 2

A basic framework for haze removal.


The first stage in any haze removal process is to capture a picture of the real world. A camera and image sensors are used to capture this picture, and the acquisition procedure takes advantage of the sensor plane.1 Several sensors have been used along with camera modules to improve visibility, as shown in Fig. 3, and to build environmental models under adverse weather conditions.5 Reputed automotive navigation systems mainly depend on a large number of different sensors to increase visibility. In adverse weather conditions, sensors usually provide reliable scanned data that can be fed to vision-based algorithms for object detection, depth estimation, or semantic scene understanding in order to improve safety and avoid accidents. Dannheim et al.7 used LiDAR technology to enhance visibility in different weather conditions. This technology is highly sensitive to light and offers a robust solution for providing reliable information to the command station for controlling an autonomous vehicle in adverse weather. LiDAR and IR cameras first acquire data from the environment in adverse weather; sensor fusion is then applied to obtain clear vision in such conditions. Dong et al.6 proposed a methodology that has a huge positive impact on advanced driver-assistance systems and autonomous navigation systems; the method uses the extended Kalman filter for detecting and tracking obstacles through sensor fusion. Rablau et al.14 introduced an image-processing technique to detect vehicles in a foggy environment for collision avoidance. LiDAR and IR cameras have been used to increase the performance ratio, and image frames captured by the camera have been retrieved from adverse weather.

Fig. 3

Visual quality of the basic framework of image restoration methods: (a) real-time hazy image, (b) estimated transmission, (c) depth map, and (d) dehazed image.


Image enhancement has been performed on hazy images using an adaptive Gaussian filtering technique, which yields a clearer image by changing the threshold values. Loce et al.15 proposed a sensor fusion technology that can improve sensor performance and efficiency by minimizing the mathematical error of the sensor readings. The fusion of multiple-sensor data always provides better accuracy than single-sensor data when controlling an autonomous vehicle. Rasshofer et al.16 proposed a multisensory mechanism for autonomous driving assistance systems. Laser, radar, and lidar have been used in this mechanism to obtain more accurate distances. These types of long-range finders operate on RF-based signal transmission and reception, and transmission models are applied for signal transmission in the model-based sensor system. Pinchon et al.17 presented a comparative study on vision systems, traffic signal control, and lane detection for safe autonomous driving. The authors draw attention to visualization through cameras and distance measurement using sensors (such as lidar, radar, etc.) for experiments in adverse weather conditions. The sensors listed in Table 3 are used to obtain the standard parameters for constructing an environmental model that enhances visibility in degraded weather, where climate factors such as fog, rain, and snow have a significant impact. In adverse weather conditions, the sensor data provide drivers with a quick interpretation of the environment so that they can respond in a variety of ways, including adjusting speed and keeping to the proper lane, among other things. The challenge lies in the interpretation of sensor data, the minimization of driver response time, and providing a precise view of the road rather than simply installing the sensors.

Table 3

Application of some environment monitoring sensors.

Type of sensor (non-intrusive) | Application
Radar | Estimation of vehicle volume, speed, and identification of vehicle heading17
Lidar | Negative obstacle detection, quick identification, and avoidance of high-risk terrains and obstacles6,8
Infrared sensor | Speed and vehicle distance measurement7
CCD/CMOS camera | Vehicle detection over multiple lanes, with the ability to classify vehicles by length, occupancy, and velocity for each class7
RFID (radio-frequency identification) | Used to track vehicles, mainly for object identification8

After the image acquisition, the atmospheric scattering model is commonly applied in image processing and computer vision to characterize the creation of a hazy image, as shown in Fig. 1(a). The mathematical model of the haze component is given as

Eq. (6)

xA(B) = ZA(B)T(B) + KA(1 − T(B)),
where A is the color index of the RGB channel, KA indicates the atmospheric light, ZA is the color image without haze, T is the medium transmission, ZA(B)T(B) is the attenuation term, KA(1 − T(B)) is the airlight, and d(B) is the unknown depth

Eq. (7)

T(B) = e^(−αd(B)),

Eq. (8)

ZA(B) = xA(B) + (1/T(B) − 1)(xA(B) − KA),
where (1/T(B) − 1) refers to the transmission factor and (xA(B) − KA) refers to the detail layer.
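The scattering model of Eqs. (6)–(8) can be inverted directly once the transmission map and atmospheric light are available. The following minimal NumPy sketch assumes T and K have already been estimated by one of the methods reviewed below; the lower clamp on T is a common practical safeguard and not part of the equations.

```python
import numpy as np

def recover_scene(hazy, T, K, t_min=0.1):
    """Invert Eq. (6): x = Z*T + K*(1 - T)  ->  Z = (x - K)/T + K.

    hazy : H x W x 3 hazy image in [0, 1]
    T    : H x W transmission map
    K    : length-3 atmospheric light (one value per RGB channel)
    t_min: lower clamp on T to avoid amplifying noise (practical safeguard)
    """
    T = np.clip(T, t_min, 1.0)[..., None]   # broadcast over color channels
    Z = (hazy - K) / T + K                  # algebraically equivalent to Eq. (8)
    return np.clip(Z, 0.0, 1.0)

# toy usage with synthetic data
hazy = np.random.rand(4, 4, 3)
T = np.full((4, 4), 0.6)
K = np.array([0.9, 0.9, 0.9])
print(recover_scene(hazy, T, K).shape)      # (4, 4, 3)
```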

3.

Haze Removal Methods

This section provides a thorough analysis of the most effective haze reduction methods. The classification of the haze removal methods is shown in Fig. 4. Different approaches have been utilized for image dehazing, where the dehazing algorithm is employed to estimate the scene depth and quantify haze thickness.18–20 Single and multiple image dehazing are the two main categories into which image dehazing is divided, as shown in Fig. 4. Two or more frames are utilized in multiple image dehazing to estimate scene depth and other parameters of images taken in various environmental conditions.15 The most difficult task is to recover the depth from a single image, which is the core challenge of single image dehazing.

Fig. 4

Classification of different image dehazing techniques.


3.1.

Conventional Single and Multiple-Image Dehazing Methods

Dehazing techniques are designed to deal with the problems faced by the transport industries so that accidents can be avoided; in this sense, a system can be designed to serve various kinds of traffic management and reduce accidents. The correlation between airlight and direct attenuation models has yielded numerous conventional techniques; nevertheless, the primary problem with most of them is the requirement for multiple images of the same scene. It is necessary to identify recent research trends on which new improvements can be built, as shown in Tables 4–6. Nayar et al.2 described the development of vision systems for outdoor applications. The knowledge of atmospheric optics helped to form ideas about the size of atmospheric particles and their scattering model. Light scattering, absorption, and radiation are the three basic categories into which these phenomena can be classified, and based on that classification, the properties of the atmospheric conditions (such as color, intensity, and brightness) are measured. The algorithm introduced a model for detecting objects in the scene in adverse weather without making assumptions about different atmospheric conditions. The proposed model is based on the dichromatic atmospheric scattering model. It produced fog removal and depth estimation techniques by determining the spectral distribution: the final spectral distribution received by the observer has been estimated as the summation of the airlight and the directly transmitted light.

Table 4

Comparison of various restoration-based methods and extracting techniques for haze removal.

Methodology/technique | Results/outcomes and limitations | Extracting techniques
Image decomposition technique based on the low-frequency component, dictionary learning with sparse coding, and the high-frequency component via a bilateral filter21 | Removal of some image details and time complexity | High frequency, low frequency, sparse coding, dictionary learning, bilateral filter
Refined guidance image22 | Detection and removal of some of the raindrops and blurring of edges | Guided filter
Framework based on MCA applying a conventional image technique; a bilateral filter and dictionary learning with sparse coding are used to decompose a scene into low- and high-frequency components23 | Long execution time and pixel loss of the original image | Sparse coding, dictionary learning, dictionary partitioning process
Dictionary for the high-frequency component using sparse coding in a learning-based framework24 | Still loses details; time complexity | Dictionary learning
A guided filter without pixel-based statistical information for haze removal25 | Removes raindrops but blurs images and edges, has lower performance, and loses color information | Guided filter
Guided image filter, dictionary learning, and sparse coding26 | Loses some image details and has color distortion problems | High frequency, low frequency, sparse coding, dictionary learning, guided filter
Adaptive non-local means filter27 | Does not eliminate all droplets, and some image features are lost | Non-local filter
Neural network28 | Incapable of dealing with huge droplets or rainfall | High-performance, low-power neural network
The guided filter uses the low frequency of a single picture and the high frequency as the input image29 | Loses some particulars and does not preserve the edges | High frequency, low frequency, guided filter
Method based on incremental dictionary learning30 | Time-consuming, and some pixel information is omitted | Dictionary learning
Blending saturation and visibility, a direction filter, and a high-pass filter are all part of the framework31 | Removes some of the information and uses the majority of processing time | High frequency
High- and low-frequency dictionary learning with sparse coding using a conceptual model image filter32 | Extensive execution time and the omission of some aspects | High frequency, low frequency, sparse coding, and dictionary learning

Table 5

Comparison of various restoration-based non-learning methods for fog removal.

Number of input images | Method | Outcomes and limitations
Single images | Polarization based18 | The radiance from the landscapes is generally unpolarized
 | Retinex theory19 | These methods do not always maintain good color fidelity
 | Restoration algorithm33 | When there are discontinuities in the scene depth, the restored image quality is not as good
 | Interactive11 | These interactive methods are practically not applicable for images where there is no depth information available
 | A cost function based on the human visual model34 | Regions are segmented uniformly to estimate the regional contribution of airlight. This algorithm gives better results but fails to cover a wide range of scene depth
 | Based on blackbody theory and graph-based image segmentation35 | Segmentation of the control parameter is difficult for foggy images
 | DCP and soft matting36 | The algorithm’s shortcoming is that when the scene objects are luminous, similar to the ambient light, the method’s core precepts are no longer true
 | Interactive37 | The algorithm is based on the use of a 3D model
 | Anisotropic diffusion38 | The proposed method can be applied to color as well as gray foggy images, hence does not require intervention
 | Independent component analysis39 | This method is based on color information but cannot be applied to a gray image because the foggy image is colorless
 | Linear operation40 | This method requires many parameters for adjustment
 | Iterative bilateral filter41 | This method gives good results, but the iterative process is slow
Multiple images | Polarization based42 | Based on the effects of scattering on light polarization; however, in scenes with intense fog, this strategy does not provide an adequate improvement
 | Knowledge of scene depth20 | These enhancement-based methods can be used locally, but low spatial frequencies will be lost
 | Polarization43 | This method works instantly without relying on changes in weather conditions
 | Uniform bad weather conditions44 | Weather conditions provide simple constraints for detecting scene depth discontinuities and computing scene structure

Table 6

Techniques adopted for non-learning based different haze removal with applications.

Methods | Advantages/limitations | Applications
Multiscale tone method45 | For excessively fuzzy images, the approach does not produce reliable results | Outdoor images
Wavelet transform46 | Efficient noise suppression | Natural images
Fast wavelet transform technique47 | The fog was completely erased, and the image was sharpened at the same time | Natural images
Median filter and gamma correction48 | This method takes less time to compute and improves visual quality, but it does not maintain the corners in the reconstructed images | Outdoor images
Guided joint bilateral filter49 | This method also suffers from halo artifacts | Underwater image enhancement
Weighted guided image filtering50 | Compared to previous techniques, this process takes the least computation time and improves the brightness of the acquired images | Underwater images
Image fusion technique with boost filtering51 | This procedure improves the visual clarity of the scene while also improving execution-time efficiency | Outdoor images
Variational method based on DCP52 | The handling of the sky region is not good enough | Outdoor images
Enhanced variational image dehazing53 | Over very vast uniform areas, chromatic artifacts can still be seen | Natural images
Meta-heuristic method based on genetic algorithm54 | Although the evolutionary algorithm finds the best dehazing parameters, it does not assure a perfect result | Natural images

Narasimhan et al.3 developed a system to enhance vision in bad weather. The system is made robust by incorporating knowledge of the atmospheric scattering model and the size of the atmospheric particles (haze, dust, rain, fog, etc.). A color model has been used to account for changes in the scattered light in the atmosphere, and the authors improved the dichromatic model to keep it applicable even when scene color varies under different but unknown atmospheric conditions.2 The color of the airlight has been calculated by averaging over a dark object on a hazy or foggy day, or from scene sites with a black direct-transmission color.3 The proposed methods have been applied to photos of a site taken under extreme weather conditions, and the structure of an arbitrary scene has been reconstructed from two unknown adverse weather conditions with two different horizon brightness values.

Cozman et al.4 proposed to use depth from scattering methods to make it applicable for both indoor and outdoor environments. Attenuation of power and sky intensity are the two most dependable phenomena for atmospheric scattering. Due to the linear characteristics of the light propagation, the combined effects of scattering are used to measure the object intensity. This method provides better results but fails to cover the outdoor environment of larger dimensions.

Sun et al.55 tried to draw the researcher’s attention to a survey of vision-based vehicle detection for transportation systems using optical sensors. According to this survey, the use of optical sensors for on-road vehicle detection can reduce injury and death statistics in cases of vehicle crashes. Singh et al.56 used the image-restoring technique for a vision system for different adverse weather conditions. This is applicable to image extraction of outdoor transportation systems, object tracking, and detection. Samer et al.57 described detecting rain streaks and restoring an image from the camera using a bilateral filter, guided filter, and morphological component analysis (MCA),23 as summarized in Table 4.

Zheng et al.10 pointed out that clear weather is an important condition for navigation and tracking applications; their paper focuses on removing fog from hazy scenes by applying image enhancement followed by image restoration. Rakovic et al.42 proposed a polarization method based on the effects of scattering on light polarization, which has been utilized as a cue to remove fog from images. Coulson et al.18 mentioned that photographers often employ polarizing filters to eliminate haziness in landscape photographs because the illumination from the landscape is usually unpolarized. Jobson et al.19 incorporated Retinex theory, the most common basis for histogram equalization and its variants; however, this method does not always maintain good color fidelity. Oakley et al.20 described a contrast enhancement-based model where the scene geometry is known. The main cause of contrast degradation is atmospheric particles, such as haze and fog. To address these problems, a temporal filter structure is presented. Low spatial frequencies will be lost if an enhancement-based method is applied locally, and a considerable improvement in image quality is achieved using the contrast enhancement approach with a temporal filter.

Walker et al.58 focused on a polarization-based method to reduce haze in images, with underwater imaging as the main area of interest. In most underwater imaging systems, the object is illuminated by a light source, which reduces visibility. An image-subtraction approach has been adopted here, under the assumption that backscattered light is the primary cause of degradation. Bu et al.59 proposed an approach that uses a statistical model to detect the presence of airlight in any image. This method can be applied to both gray and color images and can correct the contrast loss by estimating the airlight level. Monte Carlo simulation with a synthetic image model has been used to validate the accuracy of the method. The algorithm produces an accurate result under the assumption that the airlight is constant throughout the image but fails in the case of non-uniform airlight over the image. Hautiere et al.33 developed a fast visibility restoration algorithm that addresses the ill-posed depth-map problem. The optimization problem is formalized under the constraint 0 ≤ A(x,y) ≤ W(x,y), where A(x,y) is the airlight and W(x,y) is the minimal intensity component for each pixel. The speed of this algorithm is its primary benefit: its complexity is on the order of the number of image pixels, which makes it suitable for real-time implementation of fog removal. However, the recovered image quality is insufficient when there are discontinuities in the scene depth.

Narasimhan et al.11 proposed an interactive method using user-defined depth and sky intensity information. Two types of user input have to be provided to interpolate scene point distances: the approximate location of the most distant point together with the direction in which distance increases, and the approximate minimum and maximum distances. The distances of other scene points can then be interpolated as

Eq. (9)

d = dmin + α(dmax − dmin),
where α ∈ (0,1) is the normalized image distance of a pixel from the vanishing point; α = 0 gives d = dmin and α = 1 gives d = dmax. These interactive methods are practically not applicable for images where no depth information is available.

Kumar et al.34 improved the method proposed by Oakley and Bu59 for the case of non-uniform airlight over the image. As the airlight contribution can vary from region to region, this method uses region segmentation to estimate the airlight for each region. The RGB color is essentially required during airlight estimation: the three color components are fused to generate a luminance image, and a cost function based on a human visual model is then applied to the luminance image to estimate the airlight. This estimation can reflect the depth variation within the image. Linear regression is used to generate the airlight map, which is subtracted from the foggy image to restore it. The algorithm produces better results but does not cover a wide range of scene depth. Xuan et al.35 used graph-based image segmentation to obtain segments of an underwater color image. According to blackbody theory, an initial transmission map has been derived and then refined using a bilateral filter. The choice of control parameters for segmentation becomes difficult in the case of foggy images. Tables 4 and 5 summarize the comparison of different conventional restoration-based methods with their outcomes and extracting techniques for image dehazing. Table 6 presents the other techniques adopted for non-learning-based haze removal and their applications.

Zhang et al.43 applied polarization as a fog removal technique. Two or more pictures with varying degrees of polarization have been selected through which a fog removal technique is applied. There are no major or minor effects of any weather conditions on this method.

He et al.36 proposed an approach based on the dark channel prior (DCP) and soft matting, as summarized in Table 5. The DCP is based on the observation that, in haze-free outdoor images, most local patches contain pixels with low intensity in at least one color channel. Accordingly (Fig. 5), the dark channel of an image is defined as

Eq. (10)

jdark(x) = min(c∈{r,g,b}) min(y∈Ω(x)) jc(y),
where Ω(x) is a local patch centered at x and c ∈ {r,g,b}. The assumptions of this algorithm become invalid if the intensity of the scene objects is similar to the ambient light.
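A minimal sketch of Eq. (10) follows, together with the commonly used DCP transmission estimate t = 1 − ω·dark(I/A). The patch size, the parameter ω, and the use of scipy's minimum filter are conventional illustrative choices, not values prescribed by the text.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Eq. (10): per-pixel minimum over RGB, then minimum over a local patch."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

def estimate_transmission(img, A, omega=0.95, patch=15):
    """Standard DCP-style transmission estimate: t = 1 - omega * dark(I / A)."""
    normalized = img / A[None, None, :]
    return 1.0 - omega * dark_channel(normalized, patch)

# toy usage
hazy = np.random.rand(64, 64, 3)
A = np.array([0.9, 0.92, 0.95])            # assumed atmospheric light
t = estimate_transmission(hazy, A)
print(t.shape, float(t.min()), float(t.max()))
```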

Fig. 5

Results obtained by He et al.36 using DCP technique:36 (a) hazy image, (b) recovered depth-map, and (c) dehazed image.


The color attenuation prior60 technique estimates the transmission map and removes haze through the atmospheric scattering model. Color shifts and sky regions in the dehazed image appear significantly noisy when using DCP. Zhang et al.61 proposed an improved DCP-based technique that resolves this problem by identifying the sky regions of the hazy image and computing the variability of atmospheric light together with the DCP. Finally, the brightness and contrast of the reconstructed images are increased, and the transmission of non-sky and sky regions is evaluated separately, as shown in Fig. 6.

Fig. 6

Results obtained by Zhang et al.61 using the improved DCP technique:61 (a) hazy image, (b) before enhancement, and (c) after enhancement.


Xu et al.37 proposed a strategy based on the utilization of a three-dimensional (3D) model of the scene, as summarized in Table 5. After estimating the depth at each pixel, reducing the influence of haze is as simple as applying the following model:

Eq. (11)

Ik = I0 f(z) + A(1 − f(z)).

The initial intensity reflected toward the camera from the corresponding scene point is I0, A is the airlight, and f(z) = e^(−βz) is the depth-dependent attenuation, expressed as a function of distance owing to scattering.

González et al.44 developed an approach that involves taking many photographs of the same scene in various weather conditions. Changes in scene pixel intensities across various weather conditions provide simple constraints for detecting scene depth discontinuities and computing the scene structure.

Tripathi et al.38 developed a fog removal algorithm with two major steps: airlight map estimation followed by map refinement, as summarized in Table 5. The DCP has been selected for estimating the airlight, and refinement has been carried out using anisotropic diffusion, as presented in Fig. 7. Different objects may be at various angles and distances from the camera, and the airlight should vary with those distances. The requirements of distance-dependent variation and inter-region smoothing can be fulfilled by anisotropic diffusion. The algorithm requires histogram equalization and stretching as pre-processing and post-processing, respectively. In the case of an extreme color image (presence of pixel intensities 0 and 255), histogram stretching fails to produce a processed image.

Fig. 7

Results obtained by Tripathi et al.38 using anisotropic diffusion technique:38 (a) hazy image and (b) haze-free image.


Tan et al.62 suggested a single-image visibility enhancement method based on spatial regularization, as shown in Fig. 8. The author removed the fog by maximizing the contrast of the direct transmission while assuming a locally smooth layer of airlight. Here, the fog model is assumed as follows:

Eq. (12)

I(x,y) = I0(x,y)t(x,y) + I∞(1 − t(x,y)),
where I is the observed intensity of the foggy image, I0 is the scene radiance, I∞ is the atmospheric light, and t is the medium transmission. Based on this model, the author noted that for a patch with uniform transmission t, visibility is reduced by the fog since t < 1:

Eq. (13)

∑(x,y) ‖∇I(x,y)‖ = t(x,y) ∑(x,y) ‖∇I0(x,y)‖ < ∑(x,y) ‖∇I0(x,y)‖.

Fig. 8

Results obtained by Tan et al.62 using the contrast maximization technique:62 (a) input hazy image, (b) direct attenuation model, and (c) airlight.


The result is regularized using a Markov random field model. Here, the restored image looks oversaturated and exhibits some halos at depth discontinuities in the scene.
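A minimal sketch of the contrast measure underlying Eqs. (12) and (13), taken as the sum of gradient magnitudes over a patch, is given below; the use of np.gradient and the toy airlight value are illustrative assumptions.

```python
import numpy as np

def patch_contrast(patch):
    """Contrast as the sum of gradient magnitudes over a (grayscale) patch,
    the quantity compared on both sides of Eq. (13)."""
    gy, gx = np.gradient(patch.astype(float))
    return np.sum(np.hypot(gx, gy))

# A patch attenuated by uniform transmission t < 1 has proportionally lower contrast.
clear = np.random.rand(16, 16)
t = 0.5
hazy = t * clear + (1 - t) * 0.8          # Eq. (12) with a constant airlight of 0.8
print(patch_contrast(hazy) / patch_contrast(clear))   # ratio is approximately t
```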

3.2.

Learning-Based Methods for Single-Image Dehazing

Learning-based image dehazing is a crucial technique that involves the elimination of fog or haze from images. It plays a significant role in various applications, including visualization, surveillance, outdoor photography, and autonomous driving. The article defines some potential future directions for the advancement of image dehazing.

3.2.1.

Deep learning-based methods

The field of image dehazing has seen significant advancements with the emergence of deep learning techniques. The results achieved so far have paved the way for developing even more advanced deep learning models for this task in the future.

The transmission map can be estimated using a learning-based algorithm without prior knowledge. DehazeNet uses a convolutional neural network (CNN) to estimate the transmission map.63 In the hybrid multi-scale convolutional neural network (MSCNN),64 transmission maps predicted by a coarse-scale network are refined locally by a fine-scale network. The single-image dehazing algorithm creates hazy images and associated transmission maps from a depth image dataset to train the multi-scale network.64 In the test stage, the trained model estimates the transmission map of the input hazy image, and this transmission map is then used to produce the dehazed image. MSCNN64 uses a coarse-scale network to predict a holistic transmission map from a hazy image and passes it to the fine-scale network to produce a refined transmission map. The coarse-scale network consists of four major parts: convolution, max-pooling, up-sampling, and linear combination. The convolutional module with a feature map can be defined as

Eq. (14)

fr^(k+1) = σ(∑s (fs^k ∗ l(r,s)^(k+1)) + cs^(k+1)),

Eq. (15)

fr^(k+1)(2m−1:2m, 2n−1:2n) = fr^k(m,n),
where fs^k and fs^(k+1) denote the feature maps of layer k and the next layer k+1, l(r,s)^(k+1) is the kernel of size (r,s), and cs^(k+1) denotes the bias, with σ the rectified linear unit (ReLU)65 activation function. fr^k(m,n) is the feature-map value at location (m,n), and a max-pooling size of 2 × 2 is used before the upsampling layer. This is a CNN architecture for deep learning, as opposed to end-to-end mapping.66 The all-in-one dehazing network (AOD-Net) employs a linear mapping to integrate the transmission map T(x) and atmospheric light A and uses a CNN to learn its parameters.67 The mathematical formulation of the AOD-Net67 method is

Eq. (16)

J(x) = C(x)I(x) − C(x) + b,

Eq. (17)

C(x) = [(1/T(x))(I(x) − A) + (A − b)] / (I(x) − 1),
where 1/T(x) and A are combined into the new module C(x), which depends on the input image I(x); b is a constant bias, and J(x) is the output dehazed image.
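The following PyTorch-style sketch illustrates the AOD-Net formulation of Eqs. (16) and (17): a small CNN estimates the unified module C(x), and the dehazed image is recovered as J = C·I − C + b. The layer sizes are illustrative and do not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class AODLikeDehazer(nn.Module):
    """Sketch of the AOD-Net idea: predict the unified module C(x) with a small
    CNN, then recover J(x) = C(x) * I(x) - C(x) + b (Eq. 16). The layer sizes
    below are illustrative, not the published architecture."""
    def __init__(self, b=1.0):
        super().__init__()
        self.b = b
        self.estimator = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, hazy):
        C = self.estimator(hazy)                               # plays the role of C(x) in Eq. (17)
        return torch.clamp(C * hazy - C + self.b, 0.0, 1.0)    # Eq. (16)

# toy usage
model = AODLikeDehazer()
out = model(torch.rand(1, 3, 64, 64))
print(out.shape)    # torch.Size([1, 3, 64, 64])
```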

A multiclass CNN68 is employed to select an optimal window range. Subsequently, a vectored minimum mean value-based detection technique is applied to the pixel currently under operation within a specific image kernel to identify noise and choose the best window size around the pixel. The affected pixel is processed using an adaptive vector median filter69 integrated with particle swarm optimization (PSO)70 if the pixel is determined to be corrupted after differentiating between haze and haze-free pixels. A novel fusion method directly restores a clean image from a foggy input image.71 Frants et al.72 developed a novel quaternion neural network (QCNN-H) that demonstrated improved performance for single image dehazing. The method provides a novel quaternion encoder–decoder structure with multilevel feature fusion and quaternion instance normalization. Quaternion operations73 enable modeling spatial relations involving rotation for real-time computer vision and deep learning applications. The advantages of QCNNs74 make them a desirable option for improving efficiency on different computational image processing and visualization tasks, especially when combined with a newly developed efficient quaternion convolution technique75,76 built around matrix decompositions. Quaternion convolution is characterized by the Hamilton product of a quaternion input, denoted as Q^, and a quaternion convolutional kernel, represented by W^. Real-valued convolution is used to operate on the quaternion feature map’s constituent parts. The real component of the input quaternion feature maps is captured by the first group, which consists of K/4 feature maps. There are then three more groups of K/4 feature maps each, which represent the imaginary components corresponding to the i, j, and k quaternion elements, respectively.

The method performs quaternion-valued convolution of the input quaternion Q^, written as Q^ = Q0 + Q1 i + Q2 j + Q3 k, with the kernel W^ = W0 + W1 i + W2 j + W3 k, defined as

Eq. (18)

Q^′ = W^ ⊗ Q^ = (W0Q0 − W1Q1 − W2Q2 − W3Q3) + (W0Q1 + W1Q0 + W2Q3 − W3Q2)i + (W0Q2 − W1Q3 + W2Q0 + W3Q1)j + (W0Q3 + W1Q2 − W2Q1 + W3Q0)k.

The method reduces training time and the risk of vanishing gradients by properly initializing the network weights (W). The technique enhances efficiency in both uniform and nonuniform hazy weather conditions.
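A minimal PyTorch sketch of the quaternion convolution in Eq. (18) follows: the input channels are split into four groups (real, i, j, k) and recombined with four real-valued convolutions using the Hamilton product sign pattern. Channel counts and kernel size are assumptions for illustration.

```python
import torch
import torch.nn as nn

class QuaternionConv2d(nn.Module):
    """Quaternion convolution as in Eq. (18): the input is split into four
    groups (real, i, j, k) and combined with four real convolutions using
    the Hamilton product sign pattern. Channel counts are illustrative."""
    def __init__(self, in_q, out_q, kernel_size=3, padding=1):
        super().__init__()
        conv = lambda: nn.Conv2d(in_q, out_q, kernel_size, padding=padding, bias=False)
        self.w0, self.w1, self.w2, self.w3 = conv(), conv(), conv(), conv()

    def forward(self, x):
        q0, q1, q2, q3 = torch.chunk(x, 4, dim=1)   # real, i, j, k parts
        r = self.w0(q0) - self.w1(q1) - self.w2(q2) - self.w3(q3)
        i = self.w0(q1) + self.w1(q0) + self.w2(q3) - self.w3(q2)
        j = self.w0(q2) - self.w1(q3) + self.w2(q0) + self.w3(q1)
        k = self.w0(q3) + self.w1(q2) - self.w2(q1) + self.w3(q0)
        return torch.cat([r, i, j, k], dim=1)

# toy usage: 16 input channels = 4 quaternion components of 4 channels each
layer = QuaternionConv2d(in_q=4, out_q=8)
print(layer(torch.rand(1, 16, 32, 32)).shape)   # torch.Size([1, 32, 32, 32])
```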

A Retinex-based13 method, the robust Retinex decomposition network (RRDNet),77 has been implemented for the image restoration process. The reflectance, illumination, and haze are each estimated by one of the three branches of RRDNet. A gamma transformation78,79 is computed to adjust the brightness map and the haze-free reflectance. The restored output is produced when the adjusted illumination and restored reflectance are recombined.

The mathematical expression of the robust Retinex model has been defined as

Eq. (19)

Ir(X)=R(X)·L(X)+N(X),
where Ir, R, L, and N denote the underexposed image, reflectance, illumination, and haze, respectively. The illumination component is adjusted during the restoration process using a gamma transformation L̂(X) = L(X)^γ, where γ denotes the adjustable parameter. The haze-free reflectance can be defined as R̂(X) = (Ir(X) − N(X))/L(X); after recombining with the adjusted illumination, the outcome can be calculated as

Eq. (20)

Ir^(X)=R^(X)·L^(X).
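A minimal NumPy sketch of the recomposition in Eqs. (19) and (20) is shown below, assuming the three components R, L, and N have already been produced by the network; the gamma value and the small epsilon that guards the division are illustrative choices.

```python
import numpy as np

def retinex_restore(I_r, L, N, gamma=0.6, eps=1e-6):
    """Given an underexposed/hazy image I_r, estimated illumination L and
    haze/noise N (Eq. 19), adjust the illumination with a gamma transform and
    recombine as in Eq. (20). gamma and eps are illustrative choices."""
    R_hat = (I_r - N) / (L + eps)      # haze-free reflectance
    L_hat = np.power(L, gamma)         # brightness-adjusted illumination
    return np.clip(R_hat * L_hat, 0.0, 1.0)

# toy usage
I_r = np.random.rand(8, 8, 3) * 0.4
L   = np.full((8, 8, 3), 0.4)
N   = np.full((8, 8, 3), 0.05)
print(retinex_restore(I_r, L, N).shape)   # (8, 8, 3)
```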

The hazy images are decomposed into detail and base components, which are further improved; hazy and haze-free base parts are then mapped to each other.80 These models are expected to incorporate more advanced architectures, such as attention mechanisms and generative adversarial networks, to enhance their performance further.

Attention mechanisms

Incorporating attention mechanisms into dehazing models can prove highly beneficial since they enable the model to concentrate selectively on specific regions of the input image. Different approaches, such as spatial and channel attention, can be employed to integrate attention mechanisms into dehazing models to minimize the feature loss between encoder and decoder modules.8183

Illustrations of attention mechanism-based dehazing methods include AOD-Net67 and AdaFM-Net.12 AOD-Net67 adopts a multi-scale CNN architecture64 that incorporates a spatial attention module, allowing the model to selectively attend to significant regions of the input image. AdaFM-Net, on the other hand, works as an adaptive feature-based modulation technique that amplifies or suppresses features based on their relevance, providing the model with the flexibility to adjust to changing levels of importance in different regions of the image.12 The methodology proposes a continuous modulation technique by incorporating an adaptive feature modification layer, and it also adds a module for adjusting the statistics of the filters to a different restoration level. Following each convolution layer, before applying the ReLU65 activation function, a depth-wise convolution layer is included, which is formulated as

Eq. (21)

AdaF(Xi) = Gi ∗ Xi + bi,  0 < i ≤ N,
where Xi denotes the input feature map of the image, Gi and bi are the filter and bias, and N denotes the number of image features. The batch normalization (BN) layer is also incorporated in the AdaFM-Net module, which is formulated as follows:

Eq. (22)

AdaF(Xi) = Gi ∗ Xi + bi,  BN(Xi) = α((Xi − μ)/δ) + β,
where μ and δ are the mean and standard deviation of the batch, and α and β are the affine parameters.
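The adaptive feature modification of Eq. (21) can be sketched as a depth-wise convolution (one filter Gi and bias bi per channel) inserted between a convolution layer and its ReLU, as in the following PyTorch-style example; the channel count and kernel size are assumptions.

```python
import torch
import torch.nn as nn

class AdaFMLayer(nn.Module):
    """Depth-wise modulation of Eq. (21): AdaF(X_i) = G_i * X_i + b_i, applied
    per feature channel (groups=channels makes the convolution depth-wise)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.modulate = nn.Conv2d(channels, channels, kernel_size,
                                  padding=kernel_size // 2,
                                  groups=channels, bias=True)

    def forward(self, x):
        return self.modulate(x)

# typical placement: conv -> AdaFM -> ReLU (illustrative block)
block = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    AdaFMLayer(32),
    nn.ReLU(inplace=True),
)
print(block(torch.rand(1, 3, 64, 64)).shape)   # torch.Size([1, 32, 64, 64])
```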

Zhou et al.84 proposed an attention-based feature fusion network (AFF-Net) for low-light image dehazing. The AFF-Net comprises a feature extraction module, an attention-based residual dense block (ABRDB), and a reconstruction module. The ABRDB includes a spatial attention mechanism that focuses on selected regions of the input image while suppressing the others. The attention mechanism is learned through trainable weights, which are updated during training. The light invariant dehazing network (LIDN)85 has been introduced for end-to-end real-time image dehazing. To train the LIDN, a quadruplet loss86 is employed, which results in a sharper dehazed image with fewer artifacts. The method has a faster inference time and produces accurate results.

Xiao et al.87 developed a blind image dehazing quality assessment technique based on a deep CNN. The network consists of three modules: perceptual enhancement, feature extraction, and a regression network. The technique can estimate quality scores for real-time image dehazing and effectively learns visual feature representations. The perceptual section, with an attention module and a multiscale convolution module, is used to extract the perceptual features for image dehazing prediction. The perceptual enhancement module output is Ie for an input image feature I of size X/2 × (H × W). The channel-wise statistics Is obtained with average pooling can be defined as

Eq. (23)

Is = F(I) = (1/(H × W)) ∑(m=1..H) ∑(n=1..W) I(m,n),

Eq. (24)

Ie = W1 ⊙ I,
where W1 denotes the weights of the attention module and ⊙ is an elementwise multiplication operator. The final score (S) of the feature fusion module for image dehazing is calculated as

Eq. (25)

S = ∑(j=1..Ni) sj wj / ∑(j=1..Ni) wj,
where sj, Ni, and wj denote the quality score, the number of sample image features, and the weight of the image feature module, respectively.
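A minimal NumPy sketch of the channel-wise statistics of Eq. (23), the reweighting of Eq. (24), and the weighted score fusion of Eq. (25) is given below; the attention weights and quality scores are placeholders rather than learned values.

```python
import numpy as np

def channel_statistics(feat):
    """Eq. (23): global average pooling over the spatial dimensions.
    feat has shape (C, H, W); the result has shape (C,)."""
    return feat.mean(axis=(1, 2))

def reweight(feat, attention):
    """Eq. (24): elementwise (channel-wise) multiplication by attention weights."""
    return feat * attention[:, None, None]

def fused_score(scores, weights):
    """Eq. (25): weighted average of per-feature quality scores."""
    return np.sum(scores * weights) / np.sum(weights)

# toy usage with placeholder attention weights and scores
feat = np.random.rand(8, 16, 16)
att  = channel_statistics(feat)             # stand-in for learned weights W1
print(reweight(feat, att).shape)            # (8, 16, 16)
print(fused_score(np.array([0.7, 0.8, 0.9]), np.array([0.2, 0.3, 0.5])))
```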

Yin et al.88 proposed a spatial and channel-wise feature fusion model based on Adam’s hierarchy for image dehazing. The network consists of a lightweight spatial attention module followed by an Adams module and a combining hierarchical feature fusion module. The Adams module (multi-step optimal control method) uses a gating mechanism to selectively filter out the haze in the input image, whereas the channel-wise attention module is designed to enhance the features in the input image by selectively weighting the feature channels.

Liu et al.89 introduced an attention-based local multi-scale feature aggregation network (LMFAN) to solve image dehazing. The LMFAN consists of three main components: a multi-scale feature extraction module, a multi-scale attention module, and a reconstruction module. The multi-scale attention module comprises a global attention mechanism that detects the overall haze distribution in the input image and a local attention mechanism that identifies the local texture information. For each channel, distinctive horizontal and vertical encodings are applied to a specific input I, utilizing spatial ranges defined by the pooling kernel as (H,1) or (1,W). Consequently, the output of channel o at height r can be expressed as

Eq. (26)

zo^r(r) = (1/W) ∑(0≤p<W) Io(r,p),

Eq. (27)

zo^s(s) = (1/H) ∑(0≤q<H) Io(q,s),
where p and q are pixel positions and s is the width coordinate of channel o. The feature maps are concatenated and passed through a 1 × 1 convolution, and the resulting feature map f is defined as

Eq. (28)

f = α(F(Z^r, Z^s)),
where α is the ReLU activation function, and the intermediate feature map satisfies f ∈ R^(C/n × (H+W)).
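The directional pooling of Eqs. (26)–(28) can be sketched as follows in PyTorch: average pooling along the width and height directions, concatenation, and a 1 × 1 convolution with ReLU. The channel count and reduction factor n are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinatePooling(nn.Module):
    """Directional pooling of Eqs. (26)-(27): average along the width (kernel (1, W))
    and along the height (kernel (H, 1)), concatenate, then apply a 1x1 convolution
    with ReLU as in Eq. (28). The reduction factor n is an illustrative choice."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fuse = nn.Conv2d(channels, channels // reduction, kernel_size=1)

    def forward(self, x):
        _, _, H, W = x.shape
        z_r = F.adaptive_avg_pool2d(x, (H, 1))                       # Eq. (26): (N, C, H, 1)
        z_s = F.adaptive_avg_pool2d(x, (1, W)).permute(0, 1, 3, 2)   # Eq. (27): -> (N, C, W, 1)
        f = torch.cat([z_r, z_s], dim=2)                             # (N, C, H+W, 1)
        return F.relu(self.fuse(f))                                  # Eq. (28): (N, C/n, H+W, 1)

# toy usage
m = CoordinatePooling(channels=16)
print(m(torch.rand(2, 16, 32, 24)).shape)   # torch.Size([2, 4, 56, 1])
```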

Attention mechanism-based methods have exhibited promising results, and continued research in this field has the potential to enhance real-time image dehazing techniques for numerous applications. The significant advantages and limitations are listed as shown in Table 7.

Table 7

Advantages and limitations of different attention-based networks for image enhancement and restoration.

Attention-based network | Advantages | Limitations
Convolutional block attention module90 | Learning to attend to spatial and channel information improves performance on tasks such as object detection and image enhancement | The methods are computationally expensive, mainly when applied to large images, and not adequate for long-range dependencies
Multi-path attention block91 | It can improve accuracy and reduce overfitting in various vision tasks; it can be applied to different network architectures, including convolutional and recurrent networks | It may require significant hyperparameter tuning for real-time application
Non-local neural networks92 | The neural network’s performance is significantly enhanced by the attention-free non-local method, which also ensures a fast and lightweight approach | The methods are not suitable for real-time dense foggy conditions
Spatial channel attention residual network (SCR-Net)93 | The SCR-Net utilizes extensive feature data to blend input images into high-quality outputs in a more expressive and scientifically rigorous way, adapting to the characteristics of each input image | SCR-Net models are typically more intricate than traditional attention models, which could potentially increase the challenges in fitting and interpreting them
Deep layer aggregation94 | The network is applicable for multi-scale contextual information in a dense foggy image | The method is computationally expensive, especially for rainy images

In real-time applications where the fog is excessively dense, attention-based mechanisms may not perform well since they depend on visible features in the input image. The lack of adequate information in real-time input images can hinder the ability of the attention mechanism to dehaze the images effectively. This is particularly challenging in extreme all-weather conditions.

Generative adversarial networks based image dehazing

Generative adversarial networks (GANs) have several uses, including text-to-image and image-to-image translation.66,95,96 A U-Net architecture97 has been suggested for the generator, which directly maps the input to the output image and aids in making the restored signal independent of noise. WaterGAN is a technique that produces an accurate depth map from an underwater image.98 Hierarchically nested GANs enhance both picture fidelity and visual constancy.99 Dehaze-GAN, a reformulation of GANs around the atmospheric scattering model, has been developed by Zhu et al.100 Dehaze-GAN comprises a generator, denoted as G, and a discriminator, denoted as D, which undergo alternating training in a competitive process. D is refined to effectively discern synthesized images from genuine ones, and G is trained to fool D by generating counterfeit images. More specifically, the optimal states of G and D are achieved through the following two-player minimax game:

Eq. (29)

G*, D* = arg min_G max_D γ(D, G, X, Z),
where X and Z ∼ N(0,1) denote the input image and random noise, and γ is the GAN objective, typically expressed as E_X[log D(X)] + E_Z[log(1 − D(G(Z)))].
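A minimal PyTorch sketch of the two-player objective in Eq. (29) follows, with alternating discriminator and generator updates using the standard adversarial losses; the tiny fully connected G and D are placeholders, since real dehazing GANs use much deeper image-to-image networks.

```python
import torch
import torch.nn as nn

# Placeholder generator/discriminator; real dehazing GANs use much deeper networks.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
D = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

def train_step(real_x):
    z = torch.randn_like(real_x)                       # Z ~ N(0, 1)
    fake_x = G(z)

    # Discriminator step: maximize E[log D(X)] + E[log(1 - D(G(Z)))]
    opt_d.zero_grad()
    loss_d = bce(D(real_x), torch.ones(real_x.size(0), 1)) + \
             bce(D(fake_x.detach()), torch.zeros(real_x.size(0), 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool D (non-saturating form of the minimax game)
    opt_g.zero_grad()
    loss_g = bce(D(fake_x), torch.ones(real_x.size(0), 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.rand(8, 64)))
```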

CycleGAN, which enhances visual quality, has been created by combining perceptual loss and cycle consistency.101 An integrated GAN boosted by an enhanced pix2pix dehazing network (EPDN) has been proposed by Qu et al.102 An information-theoretic extension of GAN that can learn disentangled representations in an unsupervised manner has also been proposed.103 Disentangled representation modeling with GANs allows the learning of discriminative and generative representations and is frequently employed in face and emotion recognition. Due to the limitations of one-to-one image translation, a multimodal unsupervised image-to-image translation architecture has been developed that decomposes images into style and content codes. A novel strategy that leverages adversarial training for physical prototype translation has been presented to address the shortcomings of image-to-image translation.104

Yang et al.105 proposed a unique deep-learning-based technique using dark channel and transmission map priors in the haze model. This is achieved by creating an energy model in a proximal DehazeNet, as shown in Fig. 9.

Fig. 9

Results obtained by Yang et al.105 using the proximal DehazeNet technique:105 (a) input hazy image and (b) dehazed image.


Cai et al.63 described an end-to-end system, DehazeNet, for transmission estimation. The CNN layers in DehazeNet are designed to embody established image dehazing priors. A BReLU (bilateral rectified linear unit) based non-linear activation function has also been used to enhance the quality of recovered images. Medium transmission map estimation is introduced to achieve haze removal in dense haze conditions, as presented in Fig. 10.

Fig. 10

Results obtained by Cai et al.63 using the DehazeNet technique:63 (a) input hazy image and (b) dehazed image.


The GAN-based techniques have shown great potential in generating high-quality dehazed images for simple scenes. GANs have demonstrated encouraging outcomes in addressing image dehazing tasks. However, GAN-powered methods for image dehazing possess certain limitations. Their performance may not be as impressive for more complex scenes that contain multiple objects or structures. This is because the generator employed in such cases may not capture all the intricate details in the input image.

4.

Experimental Evaluation and Dehazing Dataset

The study of different dehazing techniques calls for experimental measurement of the effectiveness of state-of-the-art methods. Quantitative and qualitative measures are evaluated to compare the effectiveness of the dehazing methods. Among the different haze removal approaches, the most frequently used methods include Fattal,39 Tarel,40 He,36 MSCNN,64 AOD-Net,67 DehazeNet,63 Dehaze-GAN,100 SCR-Net,93 QCNN-H,72 Deep CNN,87 LIDN,85 and RRANet,13 which have been reviewed, with an experimental evaluation carried out on the different available dehazing datasets.

4.1.

Dehazing Datasets

Dehazing datasets comprise a set of indoor and outdoor images utilized to train and evaluate algorithms to eliminate fog or haze from real-time images. Usually, these datasets contain a series of paired images, consisting of hazy or foggy versions and corresponding ground truth clear images, which are employed for evaluation and training purposes. However, obtaining a substantial real-world dataset for dehazing purposes is challenging due to the difficulty in collecting dehazed images, thereby limiting the size of the available dataset. Table 8 presents information on the commonly employed dehazing datasets, with specifics on their respective details.

Table 8

Summary of most relevant dehazing datasets.

Dataset | Size | Resolution | Remarks
NYU267,105,106 | 2.4 GB | RGB images: 640 × 480 pixels; depth images: 320 × 240 pixels | 1449 labeled pairs of RGB and depth images; each pair comes with corresponding camera calibration data
Make3D107 | 685 MB | 1224 × 368 | 22,600 training images and 697 test images
RESIDE71,108–113 | 5.00 GB | 640 × 480 | 14,520 training images and 850 test images
HazeRD114–116 | — | 640 × 480 | 75 synthesized hazy images; a valuable resource for evaluating dehazing algorithms in outdoor environments that better simulate real-world conditions
SOTS117,118 | 435 MB | 480 × 640 | The RESIDE-standard subset contains 6000 RGB-D images; subsets include scenes with various indoor layouts
O-Haze113,119–121 | 547 MB | 960 × 1280 | 45 outdoor images, relatively small compared with other image datasets (homogeneous haze)
D-Hazy113 | 2 GB | 1000 × 1500 | 55 outdoor and indoor images, relatively larger than the O-Haze dataset (homogeneous haze)
I-Haze113,120,121 | 312 MB | 5616 × 3744 | 30 images, relatively smaller than the O-Haze and D-Hazy datasets (homogeneous haze)
NH-Haze120–122 | 330 MB | 5456 × 3632 | 55 pairs of outdoor dense-haze images; a pioneering collection of realistic non-uniformly hazy images with corresponding haze-free ground truth
Haze4K123 | — | 2833 × 4657 | 4000 hazy images, each paired with its ground truths: a latent clean image, a transmission map, and an atmospheric light
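To illustrate how such paired datasets are typically consumed during training, the sketch below loads hazy/ground-truth pairs from two parallel folders. The folder layout, file naming, and resize resolution are assumptions made for illustration and do not correspond to any benchmark's official structure.

```python
import os
from PIL import Image
import torch
from torch.utils.data import Dataset
from torchvision import transforms

class PairedHazeDataset(Dataset):
    """Loads (hazy, clear) image pairs stored with matching file names
    in two parallel directories (hypothetical layout)."""
    def __init__(self, hazy_dir, clear_dir, size=(480, 640)):
        self.hazy_dir, self.clear_dir = hazy_dir, clear_dir
        self.names = sorted(os.listdir(hazy_dir))
        self.to_tensor = transforms.Compose([
            transforms.Resize(size),
            transforms.ToTensor(),   # scales pixel values to [0, 1]
        ])
    def __len__(self):
        return len(self.names)
    def __getitem__(self, idx):
        name = self.names[idx]
        hazy = Image.open(os.path.join(self.hazy_dir, name)).convert("RGB")
        clear = Image.open(os.path.join(self.clear_dir, name)).convert("RGB")
        return self.to_tensor(hazy), self.to_tensor(clear)

# Usage sketch (paths are placeholders):
# train_set = PairedHazeDataset("data/hazy", "data/gt")
# loader = torch.utils.data.DataLoader(train_set, batch_size=20, shuffle=True)
```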

4.2.

Implementation Details

The different state-of-the-art methods are compared on a Win10 operating system, utilizing an Intel® Core™ i5-8265U CPU, 16 GB RAM, and an NVIDIA GeForce MX250 GPU with a 32 GB memory capacity. The PyTorch 1.8.1 deep learning library is used, with Adam serving as the model optimizer. We set the initial learning rate to 0.0001 and the total number of training epochs to 60. To balance GPU memory and computational efficiency, we set the batch size to 20. Upon completion of the training phase, we conduct model inference in half-precision to conserve GPU memory and enhance processing efficiency.
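A minimal training-loop skeleton reflecting the stated configuration (Adam, initial learning rate 0.0001, 60 epochs, batch size 20, half-precision inference) could look as follows; the model, the data loader, and the L1 reconstruction loss are illustrative assumptions rather than the settings of any particular reviewed method.

```python
import torch
import torch.nn as nn

def train_and_infer(model, loader, device="cuda", epochs=60, lr=1e-4):
    """Train with Adam at the stated hyperparameters, then run
    half-precision inference to save GPU memory (illustrative sketch)."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()                      # assumed reconstruction loss
    model.train()
    for epoch in range(epochs):
        for hazy, clear in loader:               # batch_size=20 set in the DataLoader
            hazy, clear = hazy.to(device), clear.to(device)
            optimizer.zero_grad()
            loss = criterion(model(hazy), clear)
            loss.backward()
            optimizer.step()
    # Half-precision inference after training.
    model.eval().half()
    with torch.no_grad():
        sample_hazy, _ = next(iter(loader))
        output = model(sample_hazy.to(device).half())
    return output.float()
```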

4.3.

Quantitative Measurements

Adverse weather causes a number of road and rail accidents each year. For example, seven people were killed in a collision caused by heavy fog in Haryana, where two cars coming from Chandigarh were hit by another car under severely limited visibility.124 Figure 11 presents the number of accidents caused by adverse weather in India from 2017 to 2021.125–128 Image dehazing is a challenging problem, requiring refined algorithms that can effectively estimate the depth of the scene and remove the scattering and absorption effects of the haze. These algorithms must operate in real time with limited computational resources and be robust to variations in lighting, weather, and other environmental factors. The quality of haze removal algorithms is evaluated using the quantitative measurements shown in Table 9.

Fig. 11

Accident statistics due to different adverse weather conditions in India.125–128


Table 9

Comparison of the quantitative experiment of different dehazing methods for single image dehazing.

Dataset | Metric | Fattal39 | Tarel40 | He36 | MSCNN64 | AOD Net67 | dehazeNet63 | Dehaze-GAN100 | SCR-Net93 | QCNN-H72 | Deep CNN87 | LIDN85 | RRANet13
NYU267,105,106 | PSNR | 0.38 | 0.34 | 0.39 | 0.34 | 0.31 | 0.28 | 0.32 | 0.31 | 0.36 | 0.34 | 0.39 | 0.37
NYU267,105,106 | SSIM | 0.89 | 0.65 | 0.82 | 0.94 | 0.85 | 0.79 | 0.95 | 0.94 | 0.91 | 0.92 | 0.94 | 0.93
NYU267,105,106 | NIQE | 3.04 | 2.84 | 2.81 | 2.87 | 2.56 | 3.09 | 2.67 | 2.54 | 2.14 | 2.11 | 1.24 | 1.29
NYU267,105,106 | VI | 0.85 | 0.91 | 0.94 | 0.92 | 0.91 | 0.94 | 0.95 | 0.96 | 0.94 | 0.92 | 0.91 | 0.90
NYU267,105,106 | RI | 0.81 | 0.87 | 0.86 | 0.90 | 0.92 | 0.97 | 0.95 | 0.98 | 0.91 | 0.93 | 0.94 | 0.96
NYU267,105,106 | Elapsed time (s), 640 × 480 | 21.32 | 16.34 | 18.45 | 14.59 | 18.78 | 5.89 | 3.74 | 2.80 | 2.81 | 2.57 | 2.34 | 2.12
Make3D107 | PSNR | 0.31 | 0.35 | 0.32 | 0.37 | 0.31 | 0.35 | 0.34 | 0.32 | 0.29 | 0.36 | 0.37 | 0.38
Make3D107 | SSIM | 0.82 | 0.69 | 0.88 | 0.93 | 0.88 | 0.85 | 0.96 | 0.91 | 0.93 | 0.94 | 0.96 | 0.95
Make3D107 | NIQE | 2.14 | 2.87 | 2.61 | 3.24 | 2.06 | 2.99 | 1.97 | 2.24 | 1.99 | 2.34 | 2.23 | 2.16
Make3D107 | VI | 0.87 | 0.92 | 0.95 | 0.84 | 0.86 | 0.87 | 0.91 | 0.97 | 0.92 | 0.96 | 0.94 | 0.93
Make3D107 | RI | 0.80 | 0.91 | 0.84 | 0.92 | 0.93 | 0.95 | 0.97 | 0.96 | 0.93 | 0.92 | 0.93 | 0.95
Make3D107 | Elapsed time (s), 640 × 480 | 24.15 | 17.89 | 15.34 | 12.16 | 11.65 | 8.97 | 4.59 | 3.58 | 1.32 | 1.64 | 1.87 | 2.61
RESIDE71,108–113 | PSNR | 0.25 | 0.31 | 0.34 | 0.31 | 0.33 | 0.36 | 0.33 | 0.30 | 0.34 | 0.32 | 0.37 | 0.35
RESIDE71,108–113 | SSIM | 0.87 | 0.75 | 0.81 | 0.91 | 0.84 | 0.89 | 0.97 | 0.94 | 0.96 | 0.95 | 0.93 | 0.95
RESIDE71,108–113 | NIQE | 3.04 | 3.19 | 3.27 | 3.10 | 2.86 | 1.08 | 2.88 | 2.12 | 1.23 | 1.34 | 1.11 | 1.32
RESIDE71,108–113 | VI | 0.88 | 0.87 | 0.91 | 0.93 | 0.92 | 0.94 | 0.97 | 0.94 | 0.96 | 0.95 | 0.94 | 0.98
RESIDE71,108–113 | RI | 0.87 | 0.82 | 0.92 | 0.90 | 0.93 | 0.96 | 0.96 | 0.92 | 0.97 | 0.92 | 0.93 | 0.98
RESIDE71,108–113 | Elapsed time (s), 640 × 480 | 11.02 | 14.59 | 3.56 | 17.85 | 18.90 | 11.69 | 5.88 | 1.47 | 1.87 | 1.69 | 2.05 | 1.88
HazeRD114–116 | PSNR | 0.24 | 0.39 | 0.33 | 0.37 | 0.31 | 0.37 | 0.33 | 0.35 | 0.38 | 0.37 | 0.37 | 0.39
HazeRD114–116 | SSIM | 0.81 | 0.84 | 0.82 | 0.94 | 0.86 | 0.88 | 0.91 | 0.97 | 0.98 | 0.97 | 0.95 | 0.96
HazeRD114–116 | NIQE | 3.36 | 3.14 | 3.78 | 2.94 | 2.66 | 2.54 | 2.11 | 2.04 | 2.01 | 2.21 | 2.13 | 2.12
HazeRD114–116 | VI | 0.85 | 0.82 | 0.84 | 0.88 | 0.89 | 0.91 | 0.95 | 0.97 | 0.98 | 0.97 | 0.96 | 0.95
HazeRD114–116 | RI | 0.82 | 0.83 | 0.85 | 0.89 | 0.90 | 0.93 | 0.96 | 0.98 | 0.97 | 0.96 | 0.94 | 0.95
HazeRD114–116 | Elapsed time (s), 640 × 480 | 8.51 | 7.11 | 7.54 | 7.36 | 8.47 | 5.47 | 2.56 | 1.97 | 2.16 | 2.04 | 2.36 | 2.45
SOTS117,118 | PSNR | 0.23 | 0.31 | 0.32 | 0.36 | 0.37 | 0.36 | 0.31 | 0.36 | 0.37 | 0.38 | 0.38 | 0.39
SOTS117,118 | SSIM | 0.84 | 0.81 | 0.89 | 0.91 | 0.89 | 0.94 | 0.97 | 0.98 | 0.98 | 0.97 | 0.95 | 0.96
SOTS117,118 | NIQE | 4.25 | 3.97 | 3.88 | 3.64 | 3.01 | 2.97 | 2.65 | 2.14 | 2.03 | 2.08 | 2.15 | 2.19
SOTS117,118 | VI | 0.85 | 0.90 | 0.94 | 0.90 | 0.92 | 0.96 | 0.97 | 0.96 | 0.94 | 0.95 | 0.98 | 0.97
SOTS117,118 | RI | 0.81 | 0.86 | 0.93 | 0.91 | 0.93 | 0.97 | 0.97 | 0.97 | 0.92 | 0.93 | 0.98 | 0.96
SOTS117,118 | Elapsed time (s), 640 × 480 | 9.78 | 8.58 | 4.57 | 5.87 | 6.14 | 4.88 | 5.31 | 1.36 | 1.30 | 1.59 | 1.87 | 1.46
O-Haze113,119–121 | PSNR | 0.32 | 0.33 | 0.37 | 0.37 | 0.34 | 0.37 | 0.30 | 0.36 | 0.35 | 0.39 | 0.37 | 0.38
O-Haze113,119–121 | SSIM | 0.86 | 0.84 | 0.88 | 0.94 | 0.92 | 0.95 | 0.96 | 0.93 | 0.98 | 0.99 | 0.96 | 0.97
O-Haze113,119–121 | NIQE | 3.12 | 3.02 | 3.61 | 3.19 | 3.09 | 2.26 | 2.31 | 2.23 | 2.12 | 2.09 | 2.62 | 2.39
O-Haze113,119–121 | VI | 0.81 | 0.92 | 0.94 | 0.93 | 0.95 | 0.94 | 0.96 | 0.96 | 0.99 | 0.98 | 0.96 | 0.97
O-Haze113,119–121 | RI | 0.85 | 0.89 | 0.89 | 0.91 | 0.93 | 0.92 | 0.96 | 0.97 | 0.97 | 0.98 | 0.97 | 0.95
O-Haze113,119–121 | Elapsed time (s), 640 × 480 | 8.14 | 8.21 | 9.11 | 10.23 | 9.48 | 9.02 | 6.21 | 4.78 | 3.24 | 3.01 | 3.29 | 3.33
D-Hazy113 | PSNR | 0.30 | 0.27 | 0.34 | 0.35 | 0.37 | 0.31 | 0.37 | 0.37 | 0.36 | 0.38 | 0.39 | 0.41
D-Hazy113 | SSIM | 0.81 | 0.82 | 0.89 | 0.91 | 0.91 | 0.92 | 0.94 | 0.95 | 0.96 | 0.97 | 0.98 | 0.98
D-Hazy113 | NIQE | 4.19 | 3.85 | 3.60 | 3.12 | 3.20 | 2.64 | 2.10 | 2.19 | 2.18 | 2.11 | 1.87 | 1.94
D-Hazy113 | VI | 0.85 | 0.93 | 0.95 | 0.94 | 0.91 | 0.94 | 0.93 | 0.96 | 0.96 | 0.98 | 0.98 | 0.96
D-Hazy113 | RI | 0.82 | 0.89 | 0.96 | 0.93 | 0.92 | 0.97 | 0.95 | 0.97 | 0.93 | 0.98 | 0.97 | 0.96
D-Hazy113 | Elapsed time (s), 640 × 480 | 10.12 | 16.11 | 17.41 | 12.36 | 11.67 | 15.21 | 9.45 | 3.80 | 3.87 | 3.56 | 3.15 | 3.02
I-Haze113,120,121 | PSNR | 0.27 | 0.24 | 0.26 | 0.35 | 0.38 | 0.30 | 0.34 | 0.36 | 0.35 | 0.37 | 0.39 | 0.38
I-Haze113,120,121 | SSIM | 0.87 | 0.81 | 0.87 | 0.92 | 0.91 | 0.97 | 0.95 | 0.94 | 0.97 | 0.97 | 0.98 | 0.97
I-Haze113,120,121 | NIQE | 3.65 | 3.03 | 3.09 | 3.18 | 3.79 | 2.01 | 1.89 | 2.06 | 2.03 | 1.99 | 2.07 | 1.82
I-Haze113,120,121 | VI | 0.84 | 0.92 | 0.94 | 0.93 | 0.90 | 0.92 | 0.95 | 0.97 | 0.98 | 0.99 | 0.97 | 0.98
I-Haze113,120,121 | RI | 0.83 | 0.90 | 0.85 | 0.91 | 0.94 | 0.91 | 0.96 | 0.97 | 0.97 | 0.98 | 0.97 | 0.97
I-Haze113,120,121 | Elapsed time (s), 640 × 480 | 6.98 | 7.82 | 6.57 | 11.87 | 10.51 | 8.77 | 6.81 | 3.45 | 3.55 | 3.15 | 3.27 | 2.86
NH-Haze120–122 | PSNR | 0.26 | 0.24 | 0.25 | 0.31 | 0.37 | 0.30 | 0.36 | 0.34 | 0.37 | 0.38 | 0.40 | 0.39
NH-Haze120–122 | SSIM | 0.88 | 0.89 | 0.91 | 0.93 | 0.91 | 0.97 | 0.95 | 0.98 | 0.98 | 0.97 | 0.98 | 0.97
NH-Haze120–122 | NIQE | 2.34 | 2.49 | 2.68 | 2.31 | 2.26 | 2.19 | 2.07 | 2.10 | 2.14 | 2.01 | 2.06 | 2.09
NH-Haze120–122 | VI | 0.86 | 0.94 | 0.96 | 0.94 | 0.95 | 0.95 | 0.97 | 0.96 | 0.97 | 0.98 | 0.97 | 0.98
NH-Haze120–122 | RI | 0.85 | 0.93 | 0.96 | 0.93 | 0.95 | 0.96 | 0.96 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98
NH-Haze120–122 | Elapsed time (s), 640 × 480 | 7.97 | 8.45 | 9.25 | 10.56 | 7.32 | 8.78 | 9.12 | 2.87 | 2.47 | 3.00 | 2.84 | 2.14

In haze removal methods, quantitative measurements fall into two types: full-reference metrics, for which a ground truth image is available, and no-reference metrics, for which it is not. The quality measurements include the peak signal-to-noise ratio (PSNR),129,130 mean squared error (MSE),130 structural similarity index metric (SSIM),61,129 natural image quality evaluator (NIQE),131 visibility index (VI),132–137 and realness index (RI).72,85,87 The ground truth image is a haze-free image of the same scene as the original hazy image, whereas the dehazed image is produced by applying a haze removal technique to the hazy input. MSE130 measures the error between the ground truth and the dehazed image. The mathematical expression of MSE is as follows:

Eq. (30)

\mathrm{MSE}=\frac{1}{M\times N}\sum_{p=1}^{M}\sum_{q=1}^{N}\left[\mathrm{GTI}(p,q)-\mathrm{DI}(p,q)\right]^{2}.

GTI(p,q) and DI(p,q) represent the pixel intensities of the ground truth image and the dehazed image, respectively, where p and q denote the pixel coordinates and M × N is the image size.

PSNR129,130 is computed from the MSE after applying a dehazing method. A higher PSNR value indicates better visibility restoration, and PSNR can be represented as follows:

Eq. (31)

\mathrm{PSNR}=10\log_{10}\!\left(255^{2}/\mathrm{MSE}\right).
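Equations (30) and (31) translate directly into a few lines of NumPy; the sketch below assumes 8-bit images stored as arrays with pixel values in [0, 255].

```python
import numpy as np

def mse(ground_truth, dehazed):
    """Eq. (30): mean squared error between ground truth and dehazed image."""
    gt = ground_truth.astype(np.float64)
    dh = dehazed.astype(np.float64)
    return np.mean((gt - dh) ** 2)

def psnr(ground_truth, dehazed, peak=255.0):
    """Eq. (31): PSNR in dB for 8-bit images; higher is better."""
    err = mse(ground_truth, dehazed)
    if err == 0:                      # identical images
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)
```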

The SSIM129 measures the similarity between the images with and without haze. SSIM always lies between 0 and 1; when the SSIM value is close to 1, the two images are nearly identical. The SSIM score around pixel p can be calculated as follows:

Eq. (32)

\mathrm{SSIM}(p)=\frac{2\mu_{x}(p)\mu_{y}(p)+C_{1}}{\mu_{x}^{2}(p)+\mu_{y}^{2}(p)+C_{1}}\cdot\frac{2\sigma_{xy}(p)+C_{2}}{\sigma_{x}^{2}(p)+\sigma_{y}^{2}(p)+C_{2}},
where μx(p) and σx(p) represent the mean and standard deviation of the dehazed image Iout, respectively; μy(p) and σy(p) represent the mean and standard deviation of the hazy image Iin, respectively; and σxy(p) represents the covariance between the two images.
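In practice SSIM is rarely re-implemented from scratch; a common route is scikit-image's structural_similarity function, used below under the assumption of 8-bit RGB inputs (the channel_axis argument requires scikit-image 0.19 or later).

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_rgb(reference, dehazed):
    """Mean SSIM over an RGB pair; values close to 1 mean the dehazed
    image is structurally close to the reference."""
    return structural_similarity(
        reference, dehazed,
        data_range=255,        # 8-bit pixel range
        channel_axis=-1,       # treat the last axis as color channels
    )

# Usage sketch with random stand-in images:
# ref = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
# out = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
# print(ssim_rgb(ref, out))
```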

The NIQE131 is built on a quality-aware collection of statistical features derived from a simple but effective space-domain model of typical natural scenes. It evaluates an image's naturalness using a model of the human visual system's response to natural scene images: the NIQE score is obtained by comparing the statistical features of the image under test with those of natural scene images. The NIQE can be expressed numerically as

Eq. (33)

\mathrm{NIQE}=W_{1}\,\mathrm{Feature}_{1}+W_{2}\,\mathrm{Feature}_{2}+W_{3}\,\mathrm{Feature}_{3}+\cdots+W_{n}\,\mathrm{Feature}_{n},
where W1, W2, W3, …, Wn are the weights associated with each image feature:

Eq. (34)

\mathrm{Feature}_{n}=E[r_{n}]+\alpha\,\sigma_{n},
where E[rn], α, and σn denote the expected value of the image feature, the scaling factor, and the standard deviation, respectively.
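The simplified formulation of Eqs. (33) and (34) amounts to a weighted sum of per-feature statistics. The sketch below mirrors only this simplified form; it is not the full NIQE of Mittal et al.,131 which fits a multivariate Gaussian model to natural-scene statistics, and the weights and feature responses here are illustrative placeholders.

```python
import numpy as np

def feature_statistic(responses, alpha=1.0):
    """Eq. (34): Feature_n = E[r_n] + alpha * sigma_n for one set of
    local feature responses r_n (illustrative)."""
    return np.mean(responses) + alpha * np.std(responses)

def weighted_niqe_score(feature_responses, weights, alpha=1.0):
    """Eq. (33): weighted combination of the per-feature statistics.
    `feature_responses` is a list of response arrays, one per feature."""
    feats = np.array([feature_statistic(r, alpha) for r in feature_responses])
    w = np.asarray(weights, dtype=np.float64)
    return float(np.dot(w, feats))
```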

The VI132–138 evaluates the quality of a hazy or dehazed image by comparing its visibility with that of a clear reference image. This resemblance is calculated by analyzing transmissions and gradients. Koschmieder's law133,134 relates the degree of haze directly to the transmission, so the similarity between the transmission maps of the hazy image and the reference can be used to estimate the amount of haze present. With T1(R) and T2(R) denoting the transmission135 of the dehazed and hazy images at pixel R, the transmission similarity St(R) is defined as

Eq. (35)

S_{t}(R)=\frac{2T_{1}(R)\,T_{2}(R)+C_{1}}{T_{1}^{2}(R)+T_{2}^{2}(R)+C_{1}},
where C1 is a positive constant chosen to improve stability. The transmission map is defined as T(R) = e^{−βd(R)}, where β is the extinction coefficient and d(R) is the observation distance at pixel R.

Eq. (36)

\mathrm{VI}=\frac{\sum_{R\in\Omega}S_{GM}(R)\,[S_{t}(R)]^{\alpha}\,T_{p}(R)}{\sum_{R\in\Omega}T_{p}(R)},
where SGM(R) is the gradient module, which captures the gradient features of the images. Mathematically, it is defined as

Eq. (37)

S_{GM}(R)=\frac{2G_{1}(R)\,G_{2}(R)+C_{2}}{G_{1}^{2}(R)+G_{2}^{2}(R)+C_{2}},
where G(R) = \sqrt{G_{a}^{2}(R)+G_{b}^{2}(R)}, and Ga(R) and Gb(R) are the partial derivatives of the image at pixel R. G1(R) and G2(R) are the gradient modules of the dehazed and hazy images, and α is an adjustable parameter balancing the gradient module and the transmission map. Several dehazing techniques13,72,85,87,132–137,139–147 introduce artifacts or distort the image, which reduces visibility. Because the initial hazy images are free of such degradations, the realism of the dehazing outcome must also be considered when evaluating a dehazing technique. Thus, the RI72,85,87,138 evaluates how realistic the dehazed image is by measuring the similarity between the dehazed image and the haze-free reference in feature space. The RI is defined as

Eq. (38)

\mathrm{RI}=\frac{\sum_{R\in\Omega}S_{PCon}(R)\,[S_{cf}(R)]^{\beta}\,W_{m}(R)}{\sum_{R\in\Omega}W_{m}(R)},
where SPCon(R) is the phase congruency module145 of the feature similarity index (FSIM),146 which is calculated as

Eq. (39)

S_{PCon}(R)=\frac{2P_{con1}(R)\,P_{con2}(R)+C_{3}}{P_{con1}^{2}(R)+P_{con2}^{2}(R)+C_{3}},
where Pcon1 and Pcon2 are the phase congruency maps of the two images, and Scf(R) is the total similarity of the chrominance features, which is defined as

Eq. (40)

S_{cf}(R)=\frac{2F_{1}(R)\,F_{2}(R)+C_{4}}{F_{1}^{2}(R)+F_{2}^{2}(R)+C_{4}}\cdot\frac{2G_{1}(R)\,G_{2}(R)+C_{4}}{G_{1}^{2}(R)+G_{2}^{2}(R)+C_{4}},
where F1(R), G1(R), F2(R), and G2(R) are chrominance features extracted from the two images, C3 and C4 are positive constants, Wm(R) is the FSIM146 weight of the maximum chrominance feature over multiple orientations, and β is an adjustable parameter balancing the phase congruency and chrominance feature modules.
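To make the pooling structure of VI and RI concrete, the sketch below implements the generic similarity term shared by Eqs. (35), (37), (39), and (40) and the weighted pooling of Eq. (36). The transmission maps, gradient maps, pooling weights, and constants are assumed inputs; a full RI implementation would additionally require the FSIM-style phase-congruency and chrominance features described above.

```python
import numpy as np

def similarity_map(a, b, c):
    """Generic similarity term used in Eqs. (35), (37), (39), and (40):
    S = (2*a*b + c) / (a**2 + b**2 + c)."""
    return (2.0 * a * b + c) / (a ** 2 + b ** 2 + c)

def transmission_map(depth, beta=1.0):
    """T(R) = exp(-beta * d(R)) from Koschmieder's law."""
    return np.exp(-beta * depth)

def visibility_index(t_dehazed, t_reference, g_dehazed, g_reference,
                     weight, alpha=1.0, c1=1e-3, c2=1e-3):
    """Eq. (36)-style pooling: gradient similarity modulated by
    transmission similarity and averaged with a weight map T_p(R).
    All inputs are 2-D arrays of the same shape (assumed)."""
    s_t = similarity_map(t_dehazed, t_reference, c1)    # Eq. (35)
    s_gm = similarity_map(g_dehazed, g_reference, c2)   # Eq. (37)
    pooled = s_gm * (s_t ** alpha) * weight
    return float(np.sum(pooled) / np.sum(weight))
```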

The quantitative measurements of the different dehazing techniques are reported in Table 9. A comparison with a wide range of cutting-edge methods, including Fattal,39 Tarel,40 He,36 MSCNN,64 AOD Net,67 dehazeNet,63 Dehaze-GAN,100 SCR-Net,93 QCNN-H,72 Deep CNN,87 LIDN,85 and RRANet,13 was conducted on different benchmark datasets, with the learning-based models trained on both indoor and outdoor dehazing datasets. The NIQE131 was employed to measure the naturalness of the haze-free images, with lower values indicating better visual quality, and the VI132–137 and RI72,85,87 were additionally employed to measure the accuracy of real-time dehazed images. Higher SSIM, PSNR, VI, and RI scores indicate greater efficiency, whereas for NIQE a lower result indicates improved visibility.

Table 9 identifies the methods with the highest visibility restoration in terms of PSNR, SSIM, VI, and RI on the different benchmark datasets. He,36 dehazeNet,63 Dehaze-GAN,100 SCR-Net,93 LIDN,85 QCNN-H,72 RRANet,13 and Deep CNN87 attain higher SSIM and PSNR values on the NYU2,67,105,106 Make3D,107 RESIDE,71,108–113 HazeRD,114–116 SOTS,117,118 O-Haze,113,119–121 D-Hazy,113 I-Haze,113,120,121 and NH-Haze120–122 datasets and also outperform the others on the perception metrics NIQE, VI, and RI. Table 9 further demonstrates that recently introduced methodologies, such as QCNN-H,72 Deep CNN,87 LIDN,85 and RRANet,13 surpass the other dehazing methods (Fattal,39 Tarel,40 He,36 AOD Net,67 dehazeNet63) in terms of VI and RI by effectively handling non-homogeneous weather and enhancing sharpness through their training datasets. Table 9 also compares the processing times of the benchmark dehazing methods for images of resolution 640 × 480 on the different indoor and outdoor datasets. The RRANet13 method has the fastest processing time on the NVIDIA GeForce MX250 GPU for the NYU2,67,105,106 I-Haze,113,120,121 and NH-Haze120–122 datasets. Similarly, the QCNN-H72 and SCR-Net93 methods have the fastest processing times on the Make3D,107 SOTS,117,118 RESIDE,71,108–113 and HazeRD114–116 datasets executed on the same GPU, making them suitable for real-time dehazing applications.

4.4.

Qualitative Measurements

Several haze removal techniques from the aforementioned categories have been chosen for performance analysis, to compare the effects of the various techniques and to test their efficiency through qualitative evaluation (Figs. 5–10). Different real-time outdoor hazy images have been selected for the experimental evaluation, comparing Fattal,39 Tarel,40 He,36 AOD Net,67 dehazeNet,63 Dehaze-GAN,100 SCR-Net,93 QCNN-H,72 deep CNN,87 LIDN,85 and RRANet13 in Figs. 12–18 on the NYU2,67,105,106 RESIDE,71,108–113 HazeRD,114–116 SOTS,117,118 O-Haze,113,119–121 D-Hazy,113 and NH-Haze120–122 datasets, respectively.

Fig. 12

Evaluation of experimental results on the real-time hazy outdoor image using different state-of-the-art techniques on the NYU267,105,106 dataset. (a) Hazed image, (b) Fattal,39 (c) He,36 (d) Tarel,40 (e) AOD Net,67 (f) dehazeNet,63 (g) Dehaze-GAN,100 (h) SCR-Net,93 (i) QCNN-H,72 (j) deep CNN,87 (k) LIDN,85 and (l) RRANet.13


Fig. 13

Evaluation of experimental results on the real-time hazy outdoor image using different state-of-the-art techniques on the RESIDE71,108113 dataset. (a) Hazed image, (b) Fattal,39 (c) He,36 (d) Tarel,40 (e) AOD Net,67 (f) dehazeNet,63 (g) Dehaze-GAN,100 (h) SCR-Net,93 (i) QCNN-H,72 (j) deep CNN,87 (k) LIDN,85 and (l) RRANet.13


Fig. 14

Evaluation of experimental results on the real-time hazy outdoor image using different state-of-the-art techniques on the HazeRD114116 dataset. (a) Hazed image, (b) Fattal,39 (c) He,36 (d) Tarel,40 (e) AOD Net,67 (f) dehazeNet,63 (g) Dehaze-GAN,100 (h) SCR-Net,93 (i) QCNN-H,72 (j) deep CNN,87 (k) LIDN,85 and (l) RRANet.13


Fig. 15

Evaluation of experimental results on the real-time hazy outdoor image using different state-of-the-art techniques on the SOTS117,118 dataset. (a) Hazed image, (b) Fattal,39 (c) He,36 (d) Tarel,40 (e) AOD Net,67 (f) dehazeNet,63 (g) Dehaze-GAN,100 (h) SCR-Net,93 (i) QCNN-H,72 (j) deep CNN,87 (k) LIDN,85 and (l) RRANet.13


Fig. 16

Evaluation of experimental results on the real-time hazy outdoor image using different state-of-the-art techniques on the O-Haze113,119121 dataset. (a) Hazed image, (b) Fattal,39 (c) He,36 (d) Tarel,40 (e) AOD Net,67 (f) dehazeNet,63 (g) Dehaze-GAN,100 (h) SCR-Net,93 (i) QCNN-H,72 (j) deep CNN,87 (k) LIDN,85 and (l) RRANet.13


Fig. 17

Evaluation of experimental results on the real-time hazy outdoor image using different state-of-the-art techniques on the D-Hazy113 dataset. (a) Hazed image, (b) Fattal,39 (c) He,36 (d) Tarel,40 (e) AOD Net,67 (f) dehazeNet,63 (g) Dehaze-GAN,100 (h) SCR-Net,93 (i) QCNN-H,72 (j) deep CNN,87 (k) LIDN,85 and (l) RRANet.13


Fig. 18

Evaluation of experimental results on the real-time hazy outdoor image using different state-of-the-art techniques on the NH-Haze120122 dataset. (a) Hazed image, (b) Fattal,39 (c) He,36 (d) Tarel,40 (e) AOD Net,67 (f) dehazeNet,63 (g) Dehaze-GAN,100 (h) SCR-Net,93 (i) QCNN-H,72 (j) deep CNN,87 (k) LIDN,85 and (l) RRANet.13


Figures 12(c) and 12(e) demonstrate that He36 and AOD Net67 generate unwanted noise and lose the true colors of the dehazed images. In contrast, the Dehaze-GAN,100 QCNN-H,72 and Deep CNN87 methods effectively remove haze from the real-time image without color loss, as evidenced by Figs. 12(g), 12(i), and 12(j), although these methods do not work correctly under non-uniform weather conditions on the NYU267,105,106 dataset. LIDN85 frequently leaves hazy areas behind and is inconsistent in removing haze, as shown in Fig. 12(k), and RRANet13 struggles when the haze is dense, as shown in Fig. 12(l). As shown in Figs. 13(b)–13(d), non-CNN methods such as He,36 Fattal,39 and Tarel40 tend to excessively boost the contrast of hazy images, and the resulting artifacts significantly reduce the realism of the restored images. In contrast, as shown in Figs. 13(e)–13(l), CNN-based techniques such as AOD Net,67 QCNN-H,72 LIDN,85 Deep CNN,87 RRANet,13 dehazeNet,63 SCR-Net,93 and Dehaze-GAN100 generate outcomes that are very similar to real-world images. As a result, virtually all CNN-based methods outperform non-CNN-based methods in terms of RI and NIQE. In Figs. 13(d), 13(f), 12(i), and 12(j), the Tarel,40 dehazeNet,63 QCNN-H,72 and Deep CNN87 methods were visually evaluated and demonstrate their ability to eliminate the color cast and fog from the real-time image. The fast visibility restoration40 method is based on optimized adjustment of the transfer-function parameters. The dehazeNet63 method produces an enhanced image that only partially removes the color cast and is ineffective at recovering the genuine color information.

As observed in Figs. 14(e)–14(h), with the AOD Net,67 dehazeNet,63 Dehaze-GAN,100 and SCR-Net93 methodologies the visibility of the images is significantly enhanced; however, when evaluated using SSIM and PSNR, most other measurement approaches fail to rank these improvements effectively. In addition, VI and RI measured on QCNN-H,72 LIDN,85 Deep CNN,87 and RRANet13 confirm the increased visibility in Figs. 14(i)–14(l) on outdoor heterogeneous images, although the images in Figs. 14(i) and 14(k) contain some artifacts and are slightly degraded. The attention-based enhancement approach eliminates the color cast and reinstates the original color features, as it treats both color and contrast as the primary parameters for enhancement. However, these methods fall short of enhancing the overall brightness of the degraded input image, as shown in Figs. 14(c), 14(d), and 14(g), and the images of Figs. 14(h) and 14(j) suffer from severe degradation and glaring halo effects. In contrast to the other metrics, RI can capture these differences and produce results that are consistent with human perception.

Figure 15 shows the evaluation of a real-time non-homogeneous image by different state-of-the-art methods on the SOTS117,118 dataset. Fattal,39 He,36 and Tarel40 fail to produce fog-free images under non-homogeneous conditions, as shown in Figs. 15(b) and 15(d). Conventional enhancement methods are not appropriate for defogging because they cannot address the degradation caused by fog, which is closely tied to scene depth. Although the QCNN-H,72 RRANet,13 and LIDN85 techniques exhibit strong performance on the O-Haze113,119–121 dataset, as shown in Figs. 16(i), 16(l), and 16(k), this success is primarily attributable to overfitting, and when applied to the genuine SOTS117,118 dataset these methods prove ineffective. Figures 16 and 17 show the real-time qualitative evaluation on dense foggy images, where Tarel40 and AOD Net67 achieve a visibility range of no more than 100 m and strongly distort the original image's colors. The SCR-Net93 and RRANet13 methods successfully eliminate fog with fewer color distortions, as shown in Figs. 16(h), 16(l), 17(h), and 17(l). Moreover, the dehazed images produced by these methods in Figs. 16 and 17 closely resemble the corresponding haze-free images of the O-Haze113,119–121 and D-Hazy113 datasets, respectively.

A qualitative analysis of the various methods on real-time dense foggy images from the NH-Haze120–122 dataset is presented in Fig. 18. The outcomes of Fattal39 and dehazeNet63 exhibit color distortion, and the result produced by Dehaze-GAN100 in Fig. 18(g) suffers from over-brightening compared with the original haze-free image. Although AOD Net67 and SCR-Net93 successfully remove the fog, some fog residue remains in the defogged output. The QCNN-H,72 LIDN,85 Deep CNN,87 and RRANet13 techniques perform better than all other compared approaches and preserve the image's color and contrast, as shown in Figs. 18(i)–18(l) on the NH-Haze120–122 dataset.

The main focus of the qualitative experiment is to restore image visibility and enhance image quality using the datasets described in Table 8. All of the methods improve visibility to some extent under different haze conditions. The literature also reports dehazing applications on standard datasets used in learning-based end-to-end haze removal procedures; the applications and datasets used in these learning-based methods are summarized in Table 10.

Table 10

Different dehazing methods validated on benchmark datasets and applications.

Year | Model | Application | Datasets | Quantitative measure
2018 | AOD-Net67 | Dehazing | NYU2 | PSNR, SSIM, MSE, mAP
2018 | Conditional generative adversarial network107 | Dehazing, image inpainting | Make3D, NYU | PSNR, SSIM
2018 | Gated fusion network71 | Image editing | RESIDE | PSNR, SSIM
2018 | Proximal dehaze-net105 | Image enhancement | NYU2 | PSNR, SSIM
2019 | Wavelet U-Net110 | Edge enhancement | RESIDE | PSNR, SSIM, MSE, mAP
2019 | NIN-DehazeNet111 | Video dehazing | RESIDE | PSNR, SSIM
2019 | Semi-supervised114 | Real-time image dehazing | RESIDE-C, HazeRD | PSNR, SSIM
2019 | Deep multi-model fusion115 | Dehazing | Benchmark dataset | PSNR, SSIM
2020 | Domain adaptation148 | Dehazing | HazeRD, SOTS | PSNR, SSIM
2020 | Y-net112 | Halo artifacts | RESIDE | PSNR, SSIM
2020 | Dual-path recurrent network106 | Color correction | NYU2 depth, ImageNet | PSNR, SSIM
2020 | Pyramid channel149 | Image dehazing | RESIDE | PSNR, SSIM, mAP
2020 | Reinforced depth-aware116 | Image dehazing | HazeRD, NYU, Middlebury | PSNR, SSIM
2021 | You only look yourself119 | Image dehazing | RESIDE, I-Haze, O-Haze | PSNR, SSIM, inference time
2021 | CycleGAN150 | Underwater image dehazing | Underwater imagery | MSE, RMSE, Euclidean distance, SSIM
2021 | Haze concentration adaptive network113 | Image details recovery | RESIDE, D-Hazy, I-Haze, O-Haze | PSNR, SSIM, run time
2021 | Model-driven deep learning117 | Dehazing | SOTS, NTIRE 2018 | PSNR, SSIM
2021 | Two-branch neural network122 | Non-homogeneous dehazing | NH-Haze 2021 dataset | PSNR, SSIM
2021 | FIBS-Unet (feature integration and block smoothing network)108 | Environment image dehazing | RESIDE | PSNR, SSIM
2021 | RefineDNet (refinement dehazing network)109 | Supervised dehazing approaches | RESIDE-unpaired | PSNR, SSIM
2022 | gUNet (gain-U-Net)123 | Image dehazing | Haze4K, RESIDE, and RS-Haze | PSNR, SSIM
2022 | DEA-Net (detail-enhanced attention network)118 | Single image dehazing | RESIDE, ITS, OTS, SOTS, RTTS, and HSTS | PSNR, SSIM
2022 | SRDefog (structure representation dense non-uniform fog)120 | Image dehazing | D-Haze, I-Haze, O-Haze, and NH-Haze | PSNR, SSIM
2022 | EDN-GTM (encoder–decoder network with guided transmission map)121 | Single image dehazing | I-Haze, O-Haze, and NH-Haze | PSNR, SSIM
2023 | QCNN-H72 | Single image dehazing | SOTS, I-Haze, O-Haze | PSNR, SSIM, NIQE, RI, VI
2023 | Deep CNN87 | Image dehazing | Dehazed image datasets (DHQ), O-Haze, and NH-Haze | PSNR, SSIM, NIQE, RI, VI
2023 | LIDN85 | Image dehazing | I-Haze, O-Haze, NYU2 depth | PSNR, SSIM, RI, VI

5.

Challenges and Discussions

Haze removal techniques are suitable for various vision-based applications, but the limitations of the existing techniques, already summarized in Tables 4–7, show that the dehazing process alone is insufficient to produce a clear view in adverse weather conditions. A clear view can only be obtained through airlight estimation and by building an environmental model based on different weather sensors. The following assumptions and open issues emerge from the reviewed image dehazing methods for clear visualization in adverse weather.

  • 1. The essential assumption is the handling of non-homogeneous weather, but this approximation is invalid for long-distance ranges (in km) and for remote sensing, satellite, and telescopic imaging. Analysis of arbitrary scene structures in non-homogeneous weather is still an open problem.

  • 2. The computational complexity of current image dehazing methods is a significant problem for their implementation in real-time applications. Therefore, the development of fusion-based dehazing algorithms that are both highly accurate and computationally efficient is still a topic of research interest.

  • 3. While many image dehazing methods are currently available as learning-based methods, most are trained on custom-made datasets for outdoor scenes and may not perform well in complex backgrounds, changeable illumination, and different weather conditions (fog, rain, and haze). Hence, there is a need to improve real-time dehazing systems that can effectively handle the complexities associated with outdoor scenes.

  • 4. Deep learning models can achieve outstanding results when trained on a specific dataset, but their performance on other datasets with different haze types, lighting conditions, and noise levels is not satisfactory. Thus, there is a need to develop deep learning-based models that can achieve better performance on various datasets.

  • 5. Most learning-based dehazing models have been developed for light foggy conditions, and their performance may degrade significantly in dense fog. Therefore, there is a requirement to develop specific architectures that can effectively handle dense foggy weather conditions. This may be addressed by fusing mmWave RADAR data with camera features, because mmWave RADAR can improve the effective visibility range compared with the visual band. The proper design, alignment, and calibration of such a composite setup are challenging, and different illumination zones need to be incorporated into the checkerboard setup for calibration. Some real-time haze removal techniques operate under different weather conditions and different polarizer orientations; for single images, such dehazing techniques will not provide accurate results without sensor fusion. Fog removal from a single image is always an under-constrained problem owing to the absence of airlight estimation. Enhancement and improvement of existing models can yield better results for dynamic weather conditions, and the use of pulsating light sources, together with the fusion of CCD, thermal images, and mmWave RADAR sensors, can strongly support scene interpretation in all adverse weather.

6.

Conclusions and Future Work

The progress of methods for removing haze from images has been discussed in this study, and the limitations and advantages of haze removal have been presented to motivate future research. Haze removal techniques are relevant to many image processing applications in adverse weather, including satellite imagery, intelligent transportation systems, underwater computer vision, image recognition, outdoor monitoring, object recognition, and information extraction. This review is divided into two major categories, single and multiple-image dehazing, each further divided into two sub-categories: single-image dehazing approaches are classified as non-learning and learning based, whereas multiple-image dehazing is categorized into polarization- and scene-depth-based methods. Furthermore, a step-by-step evaluation of standard methodologies is described for analyzing image dehazing and defogging performance, and a survey of recently released image dehazing datasets is also summarized.

This in-depth review and the accompanying experimental results will assist readers in understanding the various dehazing approaches and will aid the creation of more advanced dehazing procedures. Future research will therefore concentrate on improving depth estimation and visual quality restoration, as fast and accurate estimation of airlight information increases both speed and perceptual image quality. CNNs and GANs have been notably successful in several higher-level image-processing applications, and recent research works are not only based on the atmospheric scattering model of airlight and attenuation but also involve end-to-end attention-based models that learn a direct mapping from hazy to dehazed images. However, current learning-based techniques are unable to restore the fine details of fog-covered sky regions, particularly in non-homogeneous foggy situations. In future work, two different deep neural networks will be combined with a transformer and an end-to-end attention module to obtain a clear scene from the hazy image without feature loss.

Compliance with Ethical Standards

The authors have no conflicts of interest to declare relevant to the content of this article.

Data, Code, and Materials Availability Statement

The presented results were produced through real-time experiments using benchmark datasets (NYU2, Make3D, RESIDE, HazeRD, SOTS, O-Haze, D-Hazy, I-Haze, and NH-Haze), which are publicly available and can be accessed upon prior registration. Since this article is an outcome of an ongoing R&D project and owing to certain intellectual property right restrictions, the code of the experimental evaluation and relevant materials will be shared in a GitHub repository at https://github.com/sahadeb73 only after the effective completion of the project.

References

1. 

F. Hu et al., “Dehazing for images with sun in the sky,” J. Electron. Imaging, 28 043016 https://doi.org/10.1117/1.JEI.28.4.043016 JEIME5 1017-9909 (2019). Google Scholar

2. 

S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in Proc. Seventh IEEE Int. Conf. Comput. Vis., 820 –827 (1999). https://doi.org/10.1109/ICCV.1999.790306 Google Scholar

3. 

S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit. (CVPR 2000) (Cat. No. PR00662), 598 –605 (2000). https://doi.org/10.1109/CVPR.2000.855874 Google Scholar

4. 

F. G. Cozman and K. Eric, “Depth from scattering,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. and Pattern Recognit., 801 –806 (1997). https://doi.org/10.1109/CVPR.1997.609419 Google Scholar

5. 

J. A. Ibáñez, S. Zeadally and J. Contreras-Castillo, “Sensor technologies for intelligent transportation systems,” Sensors, 18 (4), 1212 https://doi.org/10.3390/s18041212 SNSRES 0746-9462 (2018). Google Scholar

6. 

Y. Dong et al., “Framework of degraded image restoration and simultaneous localization and mapping for multiple bad weather conditions,” Opt. Eng., 62 (4), 048102 https://doi.org/10.1117/1.OE.62.4.048102 (2023). Google Scholar

7. 

C. Dannheim et al., “Weather detection in vehicles by means of camera and LIDAR systems,” in Sixth Int. Conf. Comput. Intell., Commun. Syst. and Netw., 186 –191 (2014). https://doi.org/10.1109/CICSyN.2014.47 Google Scholar

8. 

A. M. Kurup and J. P. Bos, “Winter adverse driving dataset for autonomy in inclement winter weather,” Opt. Eng., 62 (3), 031207 https://doi.org/10.1117/1.OE.62.3.031207 (2023). Google Scholar

9. 

S. Yang, G. Cui and J. Zhao, “Remote sensing image uneven haze removal based on correction of saturation map,” J. Electron. Imaging, 30 (6), 063033 https://doi.org/10.1117/1.JEI.30.6.063033 JEIME5 1017-9909 (2021). Google Scholar

10. 

Z. Zheng et al., “Image restoration of hybrid time delay and integration camera system with residual motion,” Opt. Eng., 50 (6), 067012 https://doi.org/10.1117/1.3593156 (2011). Google Scholar

11. 

S. G. Narasimhan and S. K. Nayar, “Interactive (De)weathering of an image using physics models,” in IEEE Workshop on Color and Photometr. Methods in Comput. Vis., in Conjunction with ICCV, (2003). Google Scholar

12. 

J. He, C. Dong and Y. Qiao, “Modulating image restoration with continual levels via adaptive feature modification layers,” in IEEE/CVF Conf. Comput. Vis. and Pattern Recognit. (CVPR), 11048 –11056 (2019). https://doi.org/10.1109/CVPR.2019.01131 Google Scholar

13. 

H. Du, Y. Wei and B. Tang, “RRANet: low-light image enhancement based on Retinex theory and residual attention,” Proc. SPIE, 12610 126101Q https://doi.org/10.1117/12.2671262 PSISDG 0277-786X (2023). Google Scholar

14. 

C. Rablau, “LIDAR: a new (self-driving) vehicle for introducing optics to broader engineering and non-engineering audiences,” in Educ. and Train. in Opt. and Photonics, 11143_138 (2019). Google Scholar

15. 

R. P. Loce et al., “Computer vision in roadway transportation systems: a survey,” J. Electron. Imaging, 22 (4), 041121 https://doi.org/10.1117/1.JEI.22.4.041121 JEIME5 1017-9909 (2013). Google Scholar

16. 

H. R. Rasshofer, M. Spies and H. Spies, “Influences of weather phenomena on automotive laser radar systems,” Adv. Radio Sci., 9 49 –60 https://doi.org/10.5194/ars-9-49-2011 (2011). Google Scholar

17. 

N. Pinchon et al., “All-weather vision for automotive safety: which spectral band?,” in Advanced Microsystems for Automotive Applications 2018, (2019). https://doi.org/10.1007/978-3-319-99762-9_1 Google Scholar

18. 

K. L. Coulson, “Polarization of light in the natural environment,” Proc. SPIE, 1166 2 –10 https://doi.org/10.1117/12.962873 PSISDG 0277-786X (1989). Google Scholar

19. 

J. D. Jobson, Z. Rahman and G. A. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Trans. Image Process., 6 (7), 965 –976 https://doi.org/10.1109/83.597272 IIPRE4 1057-7149 (1997). Google Scholar

20. 

J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for degradation,” IEEE Trans. Image Process., 7 (2), 167 –179 https://doi.org/10.1109/83.660994 IIPRE4 1057-7149 (1998). Google Scholar

21. 

Y. H. Fu et al., “Single-frame-based rain removal via image decomposition,” in IEEE Int. Conf., 1453 –1456 (2011). https://doi.org/10.1109/ICASSP.2011.5946766 Google Scholar

22. 

T. W. Huang and G. M. Su, “Revertible guidance image based image detail enhancement,” in IEEE Int. Conf. Image Process. (ICIP), 1704 –1708 (2021). https://doi.org/10.1109/ICIP42928.2021.9506374 Google Scholar

23. 

J. L. Starck et al., “Morphological component analysis,” Proc. SPIE, 5914 209 –223 https://doi.org/10.1117/12.615237 PSISDG 0277-786X (2005). Google Scholar

24. 

D. A. Huang et al., “Context-aware single image rain removal,” in IEEE Int. Conf. on Multimedia and Expo (ICME), 164 –169 (2012). https://doi.org/10.1109/ICME.2012.92 Google Scholar

25. 

J. Xu et al., “Removing rain and snow in a single image using guided filter,” in IEEE Int. Conf. Comput. Sci. and Autom. Eng. (CSAE), 304 –307 (2012). https://doi.org/10.1109/CSAE.2012.6272780 Google Scholar

26. 

D. Y. Chen, C. C. Chen and L. W. Kang, “Visual depth guided image rain streaks removal via sparse coding,” in Int. Symp. Intell. Signal Process. and Commun. Syst., 151 –156 (2012). https://doi.org/10.1109/ISPACS.2012.6473471 Google Scholar

27. 

L. Zhang et al., “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” J. Electron. Imaging, 20 (2), 023016 https://doi.org/10.1117/1.3600632 JEIME5 1017-9909 (2011). Google Scholar

28. 

D. Eigen, D. Krishnan and R. Fergus, “Restoring an image taken through a window covered with dirt or rain,” in IEEE Int. Conf. Comput. Vis., 633 –640 (2013). https://doi.org/10.1109/ICCV.2013.84 Google Scholar

29. 

F. Sun et al., “Single-image dehazing based on dark channel prior and fast weighted guided filtering,” J. Electron. Imaging, 30 (2), 021005 https://doi.org/10.1117/1.JEI.30.2.021005 JEIME5 1017-9909 (2021). Google Scholar

30. 

Q. Zhang et al., “Dictionary learning method for joint sparse representation-based image fusion,” Opt. Eng., 52 (5), 057006 https://doi.org/10.1117/1.OE.52.5.057006 (2013). Google Scholar

31. 

Z. Chen, T. Ellis and S. A. Velastin, “Vision-based traffic surveys in urban environments,” J. Electron. Imaging, 25 (5), 051206 https://doi.org/10.1117/1.JEI.25.5.051206 JEIME5 1017-9909 (2016). Google Scholar

32. 

Z. Zhao and G. Feng, “Efficient algorithm for sparse coding and dictionary learning with applications to face recognition,” J. Electron. Imaging, 24 (2), 023009 https://doi.org/10.1117/1.JEI.24.2.023009 JEIME5 1017-9909 (2015). Google Scholar

33. 

N. Hautiere et al., “Blind contrast enhancement assessment by gradient rationing at visible edges,” Image Anal. Stereol. J., 27 (2), 87 –95 https://doi.org/10.5566/ias.v27.p87-95 (2008). Google Scholar

34. 

A. Kumar, R. K. Jha and N. K. Nishchal, “Dynamic stochastic resonance and image fusion based model for quality enhancement of dark and hazy images,” J. Electron. Imaging, 30 (6), 063008 https://doi.org/10.1117/1.JEI.30.6.063008 JEIME5 1017-9909 (2021). Google Scholar

35. 

L. Xuan and Z. Mingjun, “Underwater color image segmentation method via RGB channel fusion,” Opt. Eng., 56 (2), 023101 https://doi.org/10.1117/1.OE.56.2.023101 (2017). Google Scholar

36. 

K. He, J. Sun and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell., 33 (12), 2341 –2353 https://doi.org/10.1109/TPAMI.2010.168 ITPIDJ 0162-8828 (2010). Google Scholar

37. 

S. Xu and X. P. Liu, “Adaptive image contrast enhancement algorithm for point-based rendering,” J. Electron. Imaging, 24 (2), 023033 https://doi.org/10.1117/1.JEI.24.2.023033 JEIME5 1017-9909 (2015). Google Scholar

38. 

A. K. Tripathi and S. Mukhopadhyay, “Single image fog removal using anisotropic diffusion,” IET Image Process., 6 (7), 966 –975 https://doi.org/10.1049/iet-ipr.2011.0472 (2012). Google Scholar

39. 

R. Fattal, “Single image dehazing,” in Int. Conf. on Comput. Graph. and Interactive Tech. Arch. ACM SIGGRAPH, 1 –9 (2008). Google Scholar

40. 

J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or grey level image,” in IEEE Int. Conf. on Comput. Vis., 2201 –2208 (2009). https://doi.org/10.1109/ICCV.2009.5459251 Google Scholar

41. 

J. Zhang et al., “Local albedo-insensitive single image dehazing,” Vis. Comput., 26 (6–8), 761 –768 https://doi.org/10.1007/s00371-010-0444-z VICOE5 0178-2789 (2010). Google Scholar

42. 

M. J. Rakovic et al., “Light backscattering polarization patterns from turbid media: theory and experiment,” Appl. Opt., 38 (15), 3399 –408 https://doi.org/10.1364/AO.38.003399 APOPAI 0003-6935 (1999). Google Scholar

43. 

W. Zhang et al., “Review of passive polarimetric dehazing methods,” Opt. Eng., 60 (3), 030901 https://doi.org/10.1117/1.OE.60.3.030901 (2021). Google Scholar

44. 

R. Luzón-González, J. Nieves and J. Romero, “Recovering of weather degraded images based on RGB response ratio constancy,” Appl. Opt., 54 B222 –B231 https://doi.org/10.1364/AO.54.00B222 APOPAI 0003-6935 (2015). Google Scholar

45. 

Y. Wang and C. Fan, “Multiscale fusion of depth estimations for haze removal,” in IEEE Int. Conf. Digit. Signal Process. (DSP), 882 –886 (2015). https://doi.org/10.1109/ICDSP.2015.7252003 Google Scholar

46. 

Z. Rong and W. L. Jun, “Improved wavelet transform algorithm for single image dehazing,” Optik-Int. J. Light Electron. Opt., 125 (13), 3064 –3066 https://doi.org/10.1016/j.ijleo.2013.12.077 (2014). Google Scholar

47. 

F. A. Dharejo et al., “A color enhancement scene estimation approach for single image haze removal,” IEEE Geosci. Remote Sens. Lett., 17 (9), 1613 –1617 https://doi.org/10.1109/LGRS.2019.2951626 (2020). Google Scholar

48. 

G. Mandal, P. De and D. Bhattacharya, “A real-time fast defogging system to clear the vision of driver in foggy highway using minimum filter and gamma correction,” Sādhanā, 45 40 https://doi.org/10.1007/s12046-020-1282-y (2020). Google Scholar

49. 

C. Xiao and J. Gan, “Fast image dehazing using guided joint bilateral filter,” Vis. Comput., 28 (6), 713 –721 https://doi.org/10.1007/s00371-012-0679-y VICOE5 0178-2789 (2012). Google Scholar

50. 

Z. Li et al., “Weighted guided image filtering,” IEEE Trans. Image Process., 24 (1), 120 –129 https://doi.org/10.1109/TIP.2014.2371234 IIPRE4 1057-7149 (2015). Google Scholar

51. 

I. Riaz et al., “Single image dehazing via reliability guided fusion,” J. Vis. Commun. Image Represent., 40 85 –97 https://doi.org/10.1016/j.jvcir.2016.06.011 JVCRE7 1047-3203 (2016). Google Scholar

52. 

F. Fang, F. Li and T. Zeng, “Single image dehazing and denoising: a fast variational approach,” SIAM J. Imaging Sci., 7 (2), 969 –996 https://doi.org/10.1137/130919696 (2014). Google Scholar

53. 

A. Galdran, J. Vazquez-Corral and D. Pardo, “Enhanced variational image dehazing,” SIAM J. Imaging Sci., 8 (3), 1519 –1546 https://doi.org/10.1137/15M1008889 (2015). Google Scholar

54. 

F. Guo, H. Peng and J. Tang, “Genetic algorithm-based parameter selection approach to single Image defogging,” Inf. Process. Lett., 116 (10), 595 –602 https://doi.org/10.1016/j.ipl.2016.04.013 IFPLAT 0020-0190 (2016). Google Scholar

55. 

Z. Sun, G. Bebis and R. Miller, “On-road vehicle detection: a review,” IEEE Trans. Pattern Anal. Mach. Intell., 28 (5), 694 –711 https://doi.org/10.1109/TPAMI.2006.104 ITPIDJ 0162-8828 (2006). Google Scholar

56. 

R. Singh, A. K. Dubey and R. Kapoor, “A review on image restoring techniques of bad weather images,” in IJCA Proc. Int. Conf. Comput. Syst. and Math. Sci., 23 –26 (2017). Google Scholar

57. 

S. M. Shorman and S. A. Pitchay, “A review of rain streaks detection and removal techniques for outdoor single image,” ARPN J. Eng. Appl. Sci., 11 (10), 6303 –6308 (2016). Google Scholar

58. 

J. G. Walker, P. C. Y. Chang and K. I. Hopcraft, “Visibility depth improvement in active polarization imaging in scattering media,” Appl. Opt., 39 4933 –4941 https://doi.org/10.1364/AO.39.004933 APOPAI 0003-6935 (1995). Google Scholar

59. 

H. Bu and J. P. Oakley, “Correction of simple contrast lost in color images,” IEEE Trans. Image Process., 16 (2), 511 –522 https://doi.org/10.1109/TIP.2006.887736 IIPRE4 1057-7149 (2007). Google Scholar

60. 

Q. Zhu, J. Mai and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Trans. Image Process., 24 (11), 3522 –3533 https://doi.org/10.1109/TIP.2015.2446191 IIPRE4 1057-7149 (2015). Google Scholar

61. 

T. Zhang and Y. Chen, “Single image dehazing based on improved dark channel prior,” Lect. Notes Comput. Sci., 9142 205 –212 https://doi.org/10.1007/978-3-319-20469-7_23 LNCSD9 0302-9743 (2015). Google Scholar

62. 

R. T. Tan, “Visibility in bad weather from a single image,” in IEEE Conf. Comput. Vis. and Pattern Recognit., 18 (2008). https://doi.org/10.1109/CVPR.2008.4587643 Google Scholar

63. 

B. Cai et al., “DehazeNet: an end-to-end system for single image haze removal,” IEEE Trans. Image Process., 25 (11), 5187 –5198 https://doi.org/10.1109/TIP.2016.2598681 IIPRE4 1057-7149 (2016). Google Scholar

64. 

W. Ren, S. Liu and H. Zhang, “Single image dehazing via multiscale convolutional neural network,” Lect. Notes Comput. Sci., 9906 154 –169 https://doi.org/10.1007/978-3-319-46475-6_10 LNCSD9 0302-9743 (2016). Google Scholar

65. 

S. Dittmer, E. J. King and P. Maass, “Singular values for ReLU layers,” IEEE Trans. Neural Netw. Learn. Syst., 31 (9), 3594 –3605 https://doi.org/10.1109/TNNLS.2019.2945113 (2020). Google Scholar

66. 

P. Kaushik et al., “Design and analysis of high-performance real-time image dehazing using convolutional neural and generative adversarial networks,” Proc. SPIE, 12438 163 –170 https://doi.org/10.1117/12.2651023 PSISDG 0277-786X (2023). Google Scholar

67. 

B. Li et al., “All in one network for dehazing and beyond,” in ICCV, (2017). Google Scholar

68. 

A. Roy, L. D. Sharma and A. K. Shukla, “Multiclass CNN-based adaptive optimized filter for removal of impulse noise from digital images,” Vis. Comput., https://doi.org/10.1007/s00371-022-02697-7 VICOE5 0178-2789 (2022). Google Scholar

69. 

A. Roy et al., “Combination of adaptive vector median filter and weighted mean filter for removal of high-density impulse noise from colour images,” IET Image Process., 11 352 –361 https://doi.org/10.1049/iet-ipr.2016.0320 (2017). Google Scholar

70. 

D. Tian and Z. Shi, “MPSO: modified particle swarm optimization and its applications,” Swarm Evol. Comput., 41 49 –68 https://doi.org/10.1016/j.swevo.2018.01.011 (2018). Google Scholar

71. 

W. Ren et al., “Gated fusion network for single image dehazing,” in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit., 3253 –3261 (2018). https://doi.org/10.1109/CVPR.2018.00343 Google Scholar

72. 

V. Frants, S. Agaian and K. Panetta, “QCNN-H: single-image dehazing using quaternion neural networks,” IEEE Trans. Cybern., 53 (9), 5448 –5458 https://doi.org/10.1109/TCYB.3238640 (2023). Google Scholar

73. 

V. Frants and S. Agaian, “Weather removal with a lightweight quaternion Chebyshev neural network,” Proc. SPIE, 12526 125260V https://doi.org/10.1117/12.2664858 PSISDG 0277-786X (2023). Google Scholar

74. 

V. Frants, S. Agaian and K. Panetta, “QSAM-Net: rain streak removal by quaternion neural network with self-attention module,” IEEE Trans. Multimedia, https://doi.org/10.1109/TMM.2023.3271829 (2023). Google Scholar

75. 

A. Cariow and G. Cariowa, “Fast algorithms for quaternion-valued convolutional neural networks,” IEEE Trans. Neural Netw. Learn. Syst., 32 (1), 457 –462 https://doi.org/10.1109/TNNLS.2020.2979682 (2021). Google Scholar

76. 

A. P. Giotis, G. Retsinas and C. Nikou, “Quaternion generative adversarial networks for inscription detection in Byzantine monuments,” in Proc. Pattern Recognit. ICPR Int. Workshops Challenges, 171 –184 (2021). https://doi.org/10.1007/978-3-030-68787-8_12 Google Scholar

77. 

A. Zhu et al., “Zero-shot restoration of underexposed images via robust retinex decomposition,” in IEEE Int. Conf. Multimedia and Expo (ICME), 1 –6 (2020). https://doi.org/10.1109/ICME46284.2020.9102962 Google Scholar

78. 

P. Wang et al., “Parameter estimation of image gamma transformation based on zero-value histogram bin locations,” Signal Process. Image Commun., 64 33 –45 https://doi.org/10.1016/j.image.2018.02.011 SPICEF 0923-5965 (2018). Google Scholar

79. 

H. Zhou et al., “Image illumination adaptive correction algorithm based on a combined model of bottom-hat and improved gamma transformation,” Arab. J. Sci. Eng., 48 3947 –3960 https://doi.org/10.1007/s13369-022-07368-2 (2023). Google Scholar

80. 

C. H. Yeh et al., “Single image dehazing via deep learning based image restoration,” in Proc. ASIPA Annu. Summit and Conf., (2018). https://doi.org/10.23919/APSIPA.2018.8659733 Google Scholar

81. 

S. Shit et al., “Real-time emotion recognition using end-to-end attention-based fusion network,” J. Electron. Imaging, 32 (1), 013050 https://doi.org/10.1117/1.JEI.32.1.013050 JEIME5 1017-9909 (2023). Google Scholar

82. 

S. Shit et al., “Encoder and decoder-based feature fusion network for single image dehazing,” in 3rd Int. Conf. Artif. Intell. and Signal Process. (AISP), 1 –5 (2023). https://doi.org/10.1109/AISP57993.2023.10135067 Google Scholar

83. 

X. Li, Z. Hua and J. Li, “Attention-based adaptive feature selection for multi-stage image dehazing,” Vis. Comput., 39 663 –678 https://doi.org/10.1007/s00371-021-02365-2 VICOE5 0178-2789 (2023). Google Scholar

84. 

Y. Zhou et al., “AFF-dehazing: attention-based feature fusion network for low-light image dehazing,” Comput. Animation Virtual Worlds, 32 (3–4), e2011 https://doi.org/10.1002/cav.2011 (2021). Google Scholar

85. 

A. Ali, A. Ghosh and S. S. Chaudhuri, “LIDN: a novel light invariant image dehazing network,” Eng. Appl. Artif. Intell., 126, Part A 106830 https://doi.org/10.1016/j.engappai.2023.106830 EAAIE6 0952-1976 (2023). Google Scholar

86. 

W. Huang and Y. Wei, “Single image dehazing via color balancing and quad-decomposition atmospheric light estimation,” Optik, 275 170573 https://doi.org/10.1016/j.ijleo.2023.170573 OTIKAJ 0030-4026 (2023). Google Scholar

87. 

X. Lv et al., “Blind dehazed image quality assessment: a deep CNN-based approach,” IEEE Trans. Multimedia, https://doi.org/10.1109/TMM.2023.3252267 (2023). Google Scholar

88. 

S. Yin et al., “Adams-based hierarchical features fusion network for image dehazing,” Neural Netw., 163 379 –394 https://doi.org/10.1016/j.neunet.2023.03.021 NNETEB 0893-6080 (2023). Google Scholar

89. 

Y. Liu and X. Hou, “Local multi-scale feature aggregation network for real-time image dehazing,” Pattern Recognit., 141 109599 https://doi.org/10.1016/j.patcog.2023.109599 (2023). Google Scholar

90. 

D. Zhou et al., “MCRD-Net: an unsupervised dense network with multi-scale convolutional block attention for multi-focus image fusion,” IET Image Process., 16 1558 –1574 https://doi.org/10.1049/ipr2.12430 (2022). Google Scholar

91. 

Q. Qi, “A multi-path attention network for non-uniform blind image deblurring,” Multimedia Tools Appl., https://doi.org/10.1007/s11042-023-14470-6 (2023). Google Scholar

92. 

J. Go and J. Ryu, “Spatial bias for attention-free non-local neural networks,” (2023). Google Scholar

93. 

D. Lei et al., “SCRNet: an efficient spatial channel attention residual network for spatiotemporal fusion,” J. Appl. Remote Sens., 16 (3), 036512 https://doi.org/10.1117/1.JRS.16.036512 (2022). Google Scholar

94. 

F. Yu et al., “Deep layer aggregation,” in Proc. of the IEEE Conf. on Comput. Vis. and Pattern Recognit., 2403 –2412 (2018). https://doi.org/10.1109/CVPR.2018.00255 Google Scholar

95. 

I. Goodfellow et al., “Generative adversarial nets,” in NIPS, 2672 –2680 (2014). Google Scholar

96. 

X. Su et al., “Enhancing haze removal and super-resolution in real-world images: a cycle generative adversarial network-based approach for synthesizing paired hazy and clear images,” Opt. Eng., 62 (6), 063101 https://doi.org/10.1117/1.OE.62.6.063101 (2023). Google Scholar

97. 

P. Isola et al., “Image to image translation with conditional adversarial networks,” in CVPR, 5967 –5967 (2017). https://doi.org/10.1109/CVPR.2017.632 Google Scholar

98. 

L. Jie et al., “WaterGAN: unsupervised generative network to enable real-time color correction of monocular underwater images,” (2017). Google Scholar

99. 

Z. Zhang, Y. Xie and L. Yang, “Photographic text-to image synthesis with a hierarchically nested adversarial network,” in Conf. Comput. Vis. and Pattern Recognit., (2018). https://doi.org/10.1109/CVPR.2018.00649 Google Scholar

100. 

H. Zhu et al., “DehazeGAN: when image dehazing meets differential programming,” in Proc. Twenty-Seventh Int. Joint Conf. Artif. Intell. (IJCAI-18), (2018). Google Scholar

101. 

D. Engin, A. Genc and H. K. Eknel, “Cycle dehaze: enhanced CycleGAN for single image dehazing,” in Proc. IEEE Conf. Comput. Vis. and Pattern Recognit. Workshops, 825 –833 (2018). https://doi.org/10.1109/CVPRW.2018.00127 Google Scholar

102. 

Y. Qu et al., “Enhanced Pix2Pix dehazing network,” in IEEE Conf. Comput. Vis. and Pattern Recognit. (CVPR), 8160 –8168 (2019). https://doi.org/10.1109/CVPR.2019.00835 Google Scholar

103. 

X. Chen et al., “InfoGAN: interpretable representation learning by information maximizing generative adversarial nets,” (2016). Google Scholar

104. 

X. Huang et al., “Multimodal unsupervised image to image translation,” in Comput. Vis. and Pattern Recognit., (2018). https://doi.org/10.1109/IJCNN55064.2022.9892018 Google Scholar

105. 

D. Yang and J. Sun, “Proximal dehaze-net: a prior learning-based deep network for single image dehazing,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 702 –717 (2018). Google Scholar

106. 

X. Zhang et al., “Single image dehazing via dual-path recurrent network,” IEEE Trans. Image Process., 30 5211 –5222 https://doi.org/10.1109/TIP.2021.3078319 IIPRE4 1057-7149 (2021). Google Scholar

107. 

Y. Ding and S. Guo, “Conditional generative adversarial networks: introduction and application,” Proc. SPIE, 12348 258 –266 https://doi.org/10.1117/12.2641409 PSISDG 0277-786X (2022). Google Scholar

108. 

M. H. Sheu et al., “FIBS-Unet: feature integration and block smoothing network for single image dehazing,” IEEE Access, 10 71764 –71776 https://doi.org/10.1109/ACCESS.2022.3188860 (2022). Google Scholar

109. 

S. Zhao et al., “RefineDNet: a weakly supervised refinement framework for single image dehazing,” IEEE Trans. Image Process., 30 3391 –3404 https://doi.org/10.1109/TIP.2021.3060873 IIPRE4 1057-7149 (2021). Google Scholar

110. 

H. H. Yang and Y. Fu, “Wavelet U-net and the chromatic adaptation transform for single image dehazing,” in IEEE Int. Conf. Image Process. (ICIP), 2736 –2740 (2019). https://doi.org/10.1109/ICIP.2019.8803391 Google Scholar

111. 

K. Yuan et al., “Single image dehazing via NIN-DehazeNet,” IEEE Access, 7 181348 –181356 https://doi.org/10.1109/ACCESS.2019.2958607 (2019). Google Scholar

112. 

H. H. Yang, C. H. H. Yang and Y. C. J. Tsai, “Y-Net: multi-scale feature aggregation network with wavelet structure similarity loss function for single image dehazing,” in ICASSP 2020-2020 IEEE Int. Conf. Acoust., Speech and Signal Process. (ICASSP), 2628 –2632 (2020). https://doi.org/10.1109/ICASSP40776.2020.9053920 Google Scholar

113. 

T. Wang et al., “Haze concentration adaptive network for image dehazing,” Neurocomputing, 439 75 –85 https://doi.org/10.1016/j.neucom.2021.01.042 NRCGEO 0925-2312 (2021). Google Scholar

114. 

L. Li et al., “Semi-supervised image dehazing,” IEEE Trans. Image Process., 29 2766 –2779 https://doi.org/10.1109/TIP.2019.2952690 IIPRE4 1057-7149 (2019). Google Scholar

115. 

Z. Deng et al., “Deep multi-model fusion for single-image dehazing,” in Proc. IEEE Int. Conf. Comput. Vis., 2453 –2462 (2019). https://doi.org/10.1109/ICCV.2019.00254 Google Scholar

116. 

T. Guo and V. Monga, “Reinforced depth-aware deep learning for single image dehazing,” in ICASSP 2020-2020 IEEE Int. Conf. Acoust., Speech and Signal Process. (ICASSP), 8891 –8895 (2020). https://doi.org/10.1109/ICASSP40776.2020.9054504 Google Scholar

117. 

D. Yang and J. Sun, “A model-driven deep dehazing approach by learning deep priors,” IEEE Access, 9 108542 –108556 https://doi.org/10.1109/ACCESS.2021.3101319 (2021). Google Scholar

118. 

Z. Chen, Z. He and Z. Lu, “DEA-Net: single image dehazing based on detail-enhanced convolution and content-guided attention,” (2023). Google Scholar

119. 

B. Li et al., “You only look yourself: unsupervised and untrained single image dehazing neural network,” Int. J. Comput. Vis., 129 (5), 1754 –1767 https://doi.org/10.1007/s11263-021-01431-5 IJCVEQ 0920-5691 (2021). Google Scholar

120. 

Y. Jin et al., “Structure representation network and uncertainty feedback learning for dense non-uniform fog removal,” in Asian Conf. Comput. Vis., (2022). https://doi.org/10.1007/978-3-031-26313-2_10 Google Scholar

121. 

L. Tran, S. Moon and D. Park, “A novel encoder-decoder network with guided transmission map for single image dehazing,” (2022). Google Scholar

122. 

Y. Yu et al., “A two-branch neural network for nonhomogeneous dehazing via ensemble learning,” in Proc. IEEE/CVF Conf. Comput. Vis. and Pattern Recognit., 193 –202 (2021). https://doi.org/10.1109/CVPRW53098.2021.00028 Google Scholar

123. 

Y. Song et al., “Rethinking performance gains in image dehazing networks,” (2022). Google Scholar

124. 

S. Chaurasia and B. S. Gohil, “Detection of day time fog over India using INSAT-3D data,” IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens., 8 (9), 4524 –4530 https://doi.org/10.1109/JSTARS.2015.2493000 (2015). Google Scholar

125. 

R. Malik, “Modeling and accident analysis on NH-10 (INDIA),” Int. J. Eng. Manage. Res., 5 (2), 880 –882 (2015). Google Scholar

126. 

M. K. Mondal et al., “Design and development of a fog-assisted elephant corridor over a railway track,” Sustainability, 15 (7), 5944 https://doi.org/10.3390/su15075944 (2023). Google Scholar

127. 

M. Srivastava, P. Dixit and P. Ranjan, “Accident detection using fog computing,” in 6th Int. Conf. Inf. Syst. and Comput. Netw. (ISCON), 1 –5 (2023). https://doi.org/10.1109/ISCON57294.2023.10111980 Google Scholar

128. 

G. Mahendra and H. R. Roopashree, “Prediction of road accidents in the different states of India using machine learning algorithms,” in IEEE Int. Conf. Integr. Circuits and Commun. Syst. (ICICACS), 1 –6 (2023). https://doi.org/10.1109/ICICACS57338.2023.10099519 Google Scholar

129. A. Roy, L. Manam and R. H. Laskar, “Region adaptive fuzzy filter: an approach for removal of random-valued impulse noise,” IEEE Trans. Ind. Electron. 65(9), 7268–7278 (2018). https://doi.org/10.1109/TIE.2018.2793225

130. A. Roy et al., “Removal of impulse noise for multimedia-IoT applications at gateway level,” Multimedia Tools Appl. 81, 34463–34480 (2022). https://doi.org/10.1007/s11042-021-11832-w

131. A. Mittal, R. Soundararajan and A. C. Bovik, “Making a ‘completely blind’ image quality analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013). https://doi.org/10.1109/LSP.2012.2227726

132. L. Zhang, Y. Shen and H. Li, “VSI: a visual saliency-induced index for perceptual image quality assessment,” IEEE Trans. Image Process. 23(10), 4270–4281 (2014). https://doi.org/10.1109/TIP.2014.2346028

133. W. E. K. Middleton, “Vision through the atmosphere,” in Geophysik II/Geophysics II, Springer, Berlin, Germany (1957).

134. A. Kar et al., “Zero-shot single image restoration through controlled perturbation of Koschmieder’s model,” in IEEE/CVF Conf. Comput. Vis. and Pattern Recognit. (CVPR), 16200–16210 (2021). https://doi.org/10.1109/CVPR46437.2021.01594

135. X. Yan et al., “Underwater image dehazing using a novel color channel based dual transmission map estimation,” Multimedia Tools Appl., 1–24 (2023). https://doi.org/10.1007/s11042-023-15708-z

136. Y. Gao, W. Xu and Y. Lu, “Let you see in haze and sandstorm: two-in-one low-visibility enhancement network,” IEEE Trans. Instrum. Meas. 72, 5023712 (2023). https://doi.org/10.1109/TIM.2023.3304668

137. Y. Guo et al., “Haze visibility enhancement for promoting traffic situational awareness in vision-enabled intelligent transportation,” IEEE Trans. Veh. Technol., 1–15 (2023). https://doi.org/10.1109/TVT.2023.3298041

138. S. Zhao et al., “Dehazing evaluation: real-world benchmark datasets, criteria, and baselines,” IEEE Trans. Image Process. 29, 6947–6962 (2020). https://doi.org/10.1109/TIP.2020.2995264

139. L. Sun et al., “Adaptive image dehazing and object tracking in UAV videos based on the template updating Siamese network,” IEEE Sens. J. 23(11), 12320–12333 (2023). https://doi.org/10.1109/JSEN.2023.3266653

140. S. Tian et al., “DHIQA: quality assessment of dehazed images based on attentive multi-scale feature fusion and rank learning,” Displays 79, 102495 (2023). https://doi.org/10.1016/j.displa.2023.102495

141. G. Verma, M. Kumar and S. Raikwar, “FCNN: fusion-based underwater image enhancement using multilayer convolution neural network,” J. Electron. Imaging 31(6), 063039 (2022). https://doi.org/10.1117/1.JEI.31.6.063039

142. M. Guo et al., “DFBDehazeNet: an end-to-end dense feedback network for single image dehazing,” J. Electron. Imaging 30(3), 033004 (2021). https://doi.org/10.1117/1.JEI.30.3.033004

143. Q. Wang et al., “Variant-depth neural networks for deblurring traffic images in intelligent transportation systems,” IEEE Trans. Intell. Transport. Syst. 24(6), 5792–5802 (2023). https://doi.org/10.1109/TITS.2023.3255839

144. A. Filin, I. Gracheva and A. Kopylov, “Haze removal method based on joint transmission map estimation and atmospheric-light extraction,” in Proc. 4th Int. Conf. Future Netw. and Distrib. Syst. (2020).

145. X. Liu, T. Zhang and J. Zhang, “Toward visual quality enhancement of dehazing effect with improved Cycle-GAN,” Neural Comput. Appl. 35, 5277–5290 (2023). https://doi.org/10.1007/s00521-022-07964-1

146. L. Zhang et al., “FSIM: a feature similarity index for image quality assessment,” IEEE Trans. Image Process. 20(8), 2378–2386 (2011). https://doi.org/10.1109/TIP.2011.2109730

147. A. Filin et al., “Hazy images dataset with localized light sources for experimental evaluation of dehazing methods,” in Proc. 6th Int. Workshop on Deep Learn. in Comput. Phys., PoS (DLCP) (2022).

148. Y. Shao et al., “Domain adaptation for image dehazing,” in Proc. IEEE/CVF Conf. Comput. Vis. and Pattern Recognit., 2808–2817 (2020). https://doi.org/10.1109/CVPR42600.2020.00288

149. X. Zhang et al., “Pyramid channel-based feature attention network for image dehazing,” Comput. Vis. Image Underst. 197, 103003 (2020). https://doi.org/10.1016/j.cviu.2020.103003

150. R. Jing et al., “Cloud removal for optical remote sensing imagery using the SPA-CycleGAN network,” J. Appl. Remote Sens. 16(3), 034520 (2022). https://doi.org/10.1117/1.JRS.16.034520

Biography

Sahadeb Shit is currently a PhD scholar at AcSIR (CSIR-CMERI), Durgapur, West Bengal. He received his MTech degree in telecommunication engineering from MAKAUT, West Bengal, in 2015, and his BTech degree in electronics and communication engineering from WBUT, West Bengal, in 2013. His primary research interests include computer vision, image processing, machine learning, and sensor fusion.

Dip Narayan Ray is currently a Senior Principal Scientist at CSIR-CMERI, Durgapur, West Bengal. He received his bachelor’s degree in mechanical engineering from NIT Durgapur in 2002 and his PhD in mechanical engineering from the same institution in 2012. His expertise lies in machine vision, image processing, robotics, and machine learning.

© 2023 SPIE and IS&T
Sahadeb Shit and Dip Narayan Ray "Review and evaluation of recent advancements in image dehazing techniques for vision improvement and visualization," Journal of Electronic Imaging 32(5), 050901 (26 September 2023). https://doi.org/10.1117/1.JEI.32.5.050901
Received: 3 May 2023; Accepted: 8 September 2023; Published: 26 September 2023
KEYWORDS
Image enhancement, Air contamination, Image quality, Image restoration, Fiber optic gyroscopes, Image filtering, Adverse weather
