Regular Articles

Modeling a color-rendering operator for high dynamic range images using a cone-response function

Author Affiliations
Ho-Hyoung Choi

Chungbuk National University, School of Information and Communication Engineering, Chungdae-ro 1, Seowon-Gu, Cheongju, Chungbuk 362-763, Republic of Korea

Gi-Seok Kim

Gyeongju University, School of Electrical Energy and Computer Engineering, Taejong-ro, Gyeongjusi 780-712, Republic of Korea

Byoung-Ju Yun

Kyungpook National University, School of Electronics Engineering, College of IT Engineering, 80 Daehak-ro, Buk-gu, Daegu 702-701, Republic of Korea

J. Electron. Imaging. 24(5), 053005 (Sep 10, 2015). doi:10.1117/1.JEI.24.5.053005
History: Received December 2, 2014; Accepted August 7, 2015

Open Access

Abstract.  Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.


High dynamic range (HDR) imaging is a photographic technique that assembles photographs of a static scene taken at different exposures and saves them in a radiance map, similar to the process employed by the human eye. HDR imaging was developed based on visual adaptation and mimics the human eye, which readily captures a wide range of light intensities by adapting over time and has a dynamic range of 6 orders of magnitude. Conversely, low dynamic range (LDR) devices have a range of 2 to 3 orders of magnitude.1,2 This leads to a problem termed tone mapping or tone reproduction3,4 when displaying HDR data on LDR display devices. To display an HDR representation of the original scene on an LDR device, the intensity range of the original image must be compressed according to the ratio between the maximum and minimum intensities.5 This ratio is the dynamic range of the image.

The goal of the color-rendering process is to maximize the perceived similarity between the actual scene and the displayed image. Tone-reproduction algorithms therefore attempt to scale HDR data so that the resulting displayable image preserves the characteristics of the input data, such as brightness, visibility, contrast, and appearance. To address this issue, adaptive scale-gain retinex was proposed by Kotera and Fujita.6 In this model, to maintain the color balance, the surround image generated from only the luminance component was used for the R, G, and B channels. They also proposed an automatic scale-gain weight-setting method. However, computing the weights consumed a significant amount of time for a large Gaussian kernel size because a luminance-histogram single-scale retinex had to be computed for each of multiple scales. Wang et al.7 therefore proposed the integrated-surround retinex method, which used only the relative luminance Y component of the YIQ representation instead of the three RGB channels. The result was stable and highly saturated, as the local luminance contrast was enhanced through a multiscale process while the color balance was preserved. However, both methods were based on a slow gradient of the integrated surround in the center/surround process, thereby introducing halo artifacts. Further, because a single Gaussian filter cannot remove the chromaticity of the illumination, the enhanced saturation appeared unnatural compared with the original image. A local adaptation retinal model (LARM) was proposed by Wang et al.8 based on the retinal model. LARM adopted a bilateral filter to reduce the halo artifacts (or ringing effects), enhancing the local contrast without halos. However, LARM was based on a nonlinear bilateral filter and a single luminance image to compute the surround image. Therefore, the local contrast in dark regions was not enhanced, and the detail in the resulting image was difficult for the human visual system (HVS) to distinguish.

Kuang et al.9 proposed a new image-appearance model based on the HVS, called iCAM06, which was developed for HDR image rendering. In iCAM06, cone- and rod-response predictions address the mismatch between the displayed image and the rendered image through a chromatic-adaptation transformation. However, the luminance factor reduced the luminance adaptation in the cone-response function. Furthermore, stimuli of higher luminance than the adopted white could drive the response toward its maximum level and consequently reduce the colorfulness.10

This paper presents a color-rendering model comprising a tone-mapping operator and a cone-response function, processed primarily in the XYZ tristimulus space of the input image. The method can manage the aforementioned problems of the conventional methods. Linearity is an important consideration in the proposed tone mapping because it maintains a one-to-one correspondence between the displayed and rendered images; although the proposed method is not strictly linear, its goal is to be nearly linear. In addition, the cone-response function, which is based on human visual perception (HVP), addresses color shift and color leakage by modifying the fixed surround luminance factor (F=1 for an average surround and F=0.8 for a dim or dark surround) used in Ref. 1 and iCAM06, respectively. With the modified surround luminance factor, a wide variety of surround conditions is handled automatically, achieving excellent color constancy. In Ref. 1, only a global tone-mapping operator is used to deal with the tone-mapping problem and its heavy time cost; consequently, that method cannot represent the local detail variance in the given image. Here, a local standard deviation is used to overcome this problem. These are the main differences from Ref. 1 and iCAM06. Hence, the proposed method is capable of managing problems such as color shift, color leakage, and color cast.

The tone-mapping operator produces visibility and an overall impression of brightness, contrast, and color in the given HDR image when mapped onto relatively LDR displays or printers. The tone-mapped image is obtained using chromatic (XYZ tristimulus values) and achromatic (absolute luminance Y component) colors. Adaptation can be considered equivalent to the dynamic mechanism of the HVS, which optimizes the visual response for a particular viewing condition. Dark and light adaptations refer to changes in visual sensitivity when the level of illumination is decreased and increased, respectively. Chromatic adaptation preserves the approximate appearance of an object and can be described as an independent sensitivity regulation of the three cones’ responses.11 Therefore, a chromatic-adaptation transformation (CAT) method is adopted to address the perceptual mismatch between the actual scene and the displayed image. Several color appearance phenomena based on the HVS, such as the Hunt, Stevens, Helson-Judd, and simultaneous contrast effects, can be predicted by the CAT method.12,13

Chromatic adaptation is the ability of the HVS to adjust to illumination changes and to preserve the color appearance of objects.14 It allows us to see stable object colors under a wide range of different illuminations. CATs are used in digital imaging and color science to model this mechanism of the HVS. They provide a means to transform color values under a source illumination into color values under a target illumination.

A standard model for computing the transformation from one illumination to another is the diagonal von Kries adaptation model.15 If (R, G, B) denotes a color value under a source illumination, then the von Kries model states that the same color value under a target illumination can be modeled as

$$\begin{bmatrix} R_{t0} \\ G_{t0} \\ B_{t0} \end{bmatrix} = \begin{bmatrix} c_R & 0 & 0 \\ 0 & c_G & 0 \\ 0 & 0 & c_B \end{bmatrix} \begin{bmatrix} R_{s0} \\ G_{s0} \\ B_{s0} \end{bmatrix}, \tag{1}$$

where $c_R$, $c_G$, and $c_B$ represent scaling coefficients for the color channels. These coefficients are generally the ratios of the target illuminant $(R_t, G_t, B_t)$ to the source illuminant $(R_s, G_s, B_s)$, i.e., $c_R = R_t/R_s$, $c_G = G_t/G_s$, and $c_B = B_t/B_s$. CATs differ, however, in the color space in which this scaling occurs. $(R_{t0}, G_{t0}, B_{t0})$ and $(R_{s0}, G_{s0}, B_{s0})$ are the final color values under the target illumination and the initial input values under the source illumination, respectively.
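For illustration, the diagonal scaling of Eq. (1) amounts to a per-channel multiplication; below is a minimal NumPy sketch (the function name and sample illuminant values are ours, not from the paper):

import numpy as np

def von_kries_adapt(rgb, source_white, target_white):
    """Diagonal von Kries adaptation, Eq. (1): scale each channel by the
    ratio of the target illuminant to the source illuminant."""
    c = np.asarray(target_white, float) / np.asarray(source_white, float)
    return np.asarray(rgb, float) * c  # broadcasting applies diag(c_R, c_G, c_B)

# Example: adapt a color from a reddish source white to an equal-energy white.
adapted = von_kries_adapt([0.6, 0.4, 0.3],
                          source_white=[1.1, 1.0, 0.8],
                          target_white=[1.0, 1.0, 1.0])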

The obvious choice is the color space in which the image is initially described, such as the sRGB color space. This process is straightforward, as no additional transformations of the color space are required. Other commonly used color spaces are derived as linear transformations of the XYZ tristimulus space; some of these alternatives are obtained through sensor sharpening.16 The color-matching functions of the derived color spaces tend to have sharper, narrower peaks, which are more appropriate for the von Kries model. Several such spaces are used for transforming color values, including XYZ, Bradford, Sharp, and CMCCAT2000. All of the described transformations implement the diagonal von Kries model to model the illuminant change.
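As a concrete example of scaling in a derived space, the following sketch performs a linearized Bradford adaptation: transform XYZ to sharpened cone signals, scale diagonally by the white-point ratio, and transform back. The matrix is the published Bradford matrix; omitting Bradford's blue-channel nonlinearity (as in the common linearized form) is a simplification on our part.

import numpy as np

# Bradford sharpened cone-response matrix (values from the literature).
M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def cat_bradford(xyz, src_white_xyz, dst_white_xyz):
    """Von Kries scaling performed in the Bradford space rather than in XYZ."""
    gains = (M_BFD @ np.asarray(dst_white_xyz, float)) / \
            (M_BFD @ np.asarray(src_white_xyz, float))
    M = np.linalg.inv(M_BFD) @ np.diag(gains) @ M_BFD
    return np.asarray(xyz, float) @ M.T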

The input data $f(x,y)$ for the proposed method are CIE tristimulus values (X, Y, and Z) in absolute luminance units. The absolute luminance Y of the image data is necessary to predict the various luminance-dependent phenomena such as the Hunt and Stevens effects. An HDR input image is typically a floating-point RGB image with linear absolute luminance. The captured $R_s$, $G_s$, and $B_s$ values are converted to the CIE XYZ color space using9,13,17

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9504 \end{bmatrix} \begin{bmatrix} R_s \\ G_s \\ B_s \end{bmatrix}, \tag{2}$$

where the $R_s$, $G_s$, and $B_s$ values refer to the sRGB values of the input image.
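In code, Eq. (2) is a single matrix product per pixel; a minimal sketch follows (the helper name is ours, and the HDR input is assumed to be linear floating point, so no gamma decoding is applied):

import numpy as np

# sRGB-to-XYZ matrix of Eq. (2); rows map linear (Rs, Gs, Bs) to (X, Y, Z).
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2127, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9504]])

def srgb_to_xyz(img):
    """Convert a linear sRGB image of shape (H, W, 3) to CIE XYZ."""
    return img @ M_SRGB_TO_XYZ.T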

Tone-Mapping Method

Conventional tone-mapping methods are generally performed using spatial filters, such as Gaussian or bilateral filters, with which the illumination components and the local adaptation level are estimated. However, methods based on spatial filters are known to yield inferior color constancy;14,18 for example, halo artifacts appear in the resulting image. To avoid such color distortions, the proposed tone mapping is implemented without the gray-world assumption. The previous tone-mapping work19 used the R, G, and B components of the color space, and consequently that tone-mapping method cannot address the various HVS-based luminance-dependent phenomena.1 Therefore, the CIE XYZ tristimulus values (X, Y, and Z) are used in the proposed method. The tone-mapping method used in the previous work1 is a global operator and leads to color distortion; moreover, it renders the color in the given image using only a global color-rendering operator. The proposed method uses a different tone-mapping operator. Linearity is an important property in the tone-mapping process because it maintains the one-to-one correspondence between the rendered image and the real-world scene; consequently, the proposed method achieves near linearity compared with the previous one, even though a nonlinear equation is used to control the dynamic range of the given image. In addition, the proposed color-rendering operator represents the local detail variance in the given image by adding a local standard deviation, which was absent in the previous work. A nonlinear power function is applied to the luminance (Y component) of the CIE XYZ tristimulus values; it is adopted to control the dynamic range and to remove halo artifacts instead of using a spatial filter to estimate a white point. Therefore, for the luminance Y [$f_Y(x,y)$], the tone curve of the achromatic component is defined as follows:

$$L_{out}(x,y) = \left[f_Y(x,y)\right]^{\alpha}, \tag{3}$$

where $L_{out}(x,y)$ is the output image for the absolute luminance Y, and $\alpha$ is the gamma coefficient that controls the dynamic range.

The output of Eq. (3) is used by the tone-mapping function to preserve luminance, expressed in cd/m², through a linear interpolation between the chromatic and the corresponding achromatic colors. The proposed tone-mapping method is applied to the input image $f_i(x,y)$ as follows:

$$C_{out,i}(x,y) = f_i(x,y)\left[1 + L_{out}(x,y)\right] + \sigma_i(x,y), \quad i = X, Y, Z, \tag{4}$$

where $C_{out,i}(x,y)$, $i \in \{X, Y, Z\}$, are the resulting tone-mapped images of the X, Y, and Z components, expressed in cd/m², and the local standard deviation $\sigma_i(x,y)$ describes changes across the entire image. The local standard deviation is computed over a 3×3 mask for each channel of the given image and compensates the global operator by incorporating local variance information. As mentioned, conventional color-rendering methods with linearity are based on the gray-world assumption; the resulting images therefore exhibit poor color constancy, such as halos, graying-out, and a dominating color caused by increased hue values, and a color cast appears across the entire resulting image after color correction. Although nonlinearity is used to correct the color and thereby avoid these problems of the conventional methods, the objective of Eq. (4) is to remain nearly linear without being a strictly linear equation. It is modified from a portion of the tone-mapping method described in Ref. 1. The characteristic curve is shown in Fig. 1; the resulting image has linearity approximately comparable to that of the previous tone-mapping method.
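A sketch of Eqs. (3) and (4) is given below. The 3×3 local standard deviation follows the description above; details such as boundary handling and any normalization of Y before the power function are our assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def tone_map(xyz, alpha=0.2):
    """Tone mapping per Eqs. (3)-(4): a power function of the absolute
    luminance Y modulates each tristimulus channel, plus a 3x3 local
    standard deviation term."""
    L_out = xyz[..., 1] ** alpha                       # Eq. (3), achromatic tone curve
    out = np.empty_like(xyz)
    for i in range(3):                                 # i = X, Y, Z
        f = xyz[..., i]
        mean = uniform_filter(f, size=3)               # 3x3 local mean
        var = uniform_filter(f * f, size=3) - mean**2
        sigma = np.sqrt(np.maximum(var, 0.0))          # 3x3 local standard deviation
        out[..., i] = f * (1.0 + L_out) + sigma        # Eq. (4)
    return out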

Fig. 1: Characteristic curves of Eq. (4) for the (a) X, (b) Y, and (c) Z tristimulus values.

Proposed Chromatic-Adaptation Transformation

The tone-mapping method was discussed in the previous subsection. To address the perceptual mismatches between the original scene and the image displayed from the result of Eq. (4), a chromatic-adaptation transform is adopted in the proposed method. The color appearance model (CAM) of Ref. 9 estimates the cone response using its cone-response function; however, the resulting image suffers from a dominating color. Therefore, a modified CAM is introduced to address the mismatch between the displayed image and the rendered image based on HVP. The proposed transformation is obtained as follows:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = M \begin{bmatrix} C_{out,X} \\ C_{out,Y} \\ C_{out,Z} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} R_w \\ G_w \\ B_w \end{bmatrix} = M \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}, \tag{5}$$

$$M = \begin{bmatrix} 0.7982 & 0.3389 & -0.1371 \\ -0.5918 & 1.5512 & 0.0406 \\ 0.0008 & 0.0239 & 0.9753 \end{bmatrix}, \tag{6}$$

where $(R, G, B)$ and $(R_w, G_w, B_w)$ are the transformed values of $(C_{out,X}, C_{out,Y}, C_{out,Z})$ and of the white point $(X_w, Y_w, Z_w)$, respectively. From Eq. (5), the three cone responses based on the HVS are estimated as follows:
$$R_c = R\left[D_{oos}(Y_w/Y_{rw})(R_{rw}/R_w) + 1 - D_{oos}\right], \tag{7}$$

$$G_c = G\left[D_{oos}(Y_w/Y_{rw})(G_{rw}/G_w) + 1 - D_{oos}\right], \tag{8}$$

$$B_c = B\left[D_{oos}(Y_w/Y_{rw})(B_{rw}/B_w) + 1 - D_{oos}\right], \tag{9}$$

where

$$L_a = L_w/5, \tag{10}$$

$$F_l = 0.2\,T^{0.5}L_w + 0.1\,(1 - T^{0.5})\,L_w^{1/3}, \tag{11}$$

$$T = 1/(L_w + 1), \tag{12}$$

$$D_{oos} = 0.3\,F_l\left[1 - (1/3.6)\exp\left(-(L_a + 42)/92\right)\right]. \tag{13}$$

Here, $D_{oos}$ is the incomplete white-point adaptation factor, computed as a function of the adaptation luminance $L_a$ (20% of the luminance of the adapting white $L_w$). $F_l$, the luminance surround factor, is used to address the diverse variety of actual scenes. $R_c$, $G_c$, and $B_c$ are the chromatically adapted cone responses obtained by applying $D_{oos}$. In the proposed method, the $X_w$, $Y_w$, and $Z_w$ values can be incorporated as user parameters or estimated from the given data, as shown in Table 1. $(R_{rw}, G_{rw}, B_{rw})$ and $(X_{rw}, Y_{rw}, Z_{rw})$ are the reference white values under the reference white condition.
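The adaptation stage of Eqs. (5) and (7)-(13) can be sketched as follows; the function and variable names, and the exact handling of the white-point parameters, are our assumptions.

import numpy as np

# Eq. (6): sharpened transform matrix.
M = np.array([[ 0.7982, 0.3389, -0.1371],
              [-0.5918, 1.5512,  0.0406],
              [ 0.0008, 0.0239,  0.9753]])

def adapt_cones(C_out, white, ref_white, Lw):
    """C_out: (H, W, 3) tone-mapped XYZ image; white / ref_white: adopted
    and reference white XYZ triples; Lw: adapting luminance in cd/m^2."""
    rgb = C_out @ M.T                                  # Eq. (5), image
    Rw, Gw, Bw = M @ np.asarray(white, float)          # Eq. (5), white point
    Rrw, Grw, Brw = M @ np.asarray(ref_white, float)   # reference white
    La = Lw / 5.0                                      # Eq. (10)
    T = 1.0 / (Lw + 1.0)                               # Eq. (12)
    Fl = 0.2 * T**0.5 * Lw + 0.1 * (1 - T**0.5) * Lw**(1/3)   # Eq. (11)
    D = 0.3 * Fl * (1 - (1/3.6) * np.exp(-(La + 42) / 92))    # Eq. (13)
    Yw, Yrw = white[1], ref_white[1]
    gains = np.array([D * (Yw/Yrw) * (Rrw/Rw) + 1 - D,        # Eqs. (7)-(9)
                      D * (Yw/Yrw) * (Grw/Gw) + 1 - D,
                      D * (Yw/Yrw) * (Brw/Bw) + 1 - D])
    return rgb * gains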
From the $R_c$, $G_c$, and $B_c$ values, the $R'$, $G'$, and $B'$ values are calculated as follows:

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = M_{HPE}\,M_{CAT02}^{-1} \begin{bmatrix} R_c \\ G_c \\ B_c \end{bmatrix}, \tag{14}$$

$$M_{HPE} = \begin{bmatrix} 0.38971 & 0.68898 & -0.07868 \\ -0.22981 & 1.18340 & 0.04641 \\ 0 & 0 & 1 \end{bmatrix}, \tag{15}$$

$$M_{CAT02}^{-1} = \begin{bmatrix} 1.096124 & -0.278869 & 0.182745 \\ 0.454369 & 0.473533 & 0.072098 \\ -0.009628 & -0.005698 & 1.015326 \end{bmatrix}, \tag{16}$$

where $M_{HPE}$ is a single transform matrix from the sharpened cone responses to the Hunt-Pointer-Estevez (HPE) cone responses.20
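In code, Eq. (14) reduces to a single matrix product; for example:

import numpy as np

M_HPE = np.array([[ 0.38971, 0.68898, -0.07868],   # Eq. (15)
                  [-0.22981, 1.18340,  0.04641],
                  [ 0.0,     0.0,      1.0    ]])

M_CAT02_INV = np.array([[ 1.096124, -0.278869, 0.182745],   # Eq. (16)
                        [ 0.454369,  0.473533, 0.072098],
                        [-0.009628, -0.005698, 1.015326]])

def to_hpe(cone_rgb):
    """Eq. (14): map adapted cone signals through the inverse CAT02
    transform into the Hunt-Pointer-Estevez space, ahead of the
    postadaptation nonlinear compression."""
    return cone_rgb @ (M_HPE @ M_CAT02_INV).T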

Table 1: Examples of the relative tristimulus values of the white points and their associated color temperatures T.9

In Eq. (14), the $R'$, $G'$, and $B'$ values are the chromatically adapted values obtained using the inverse CAT02 transform; they are converted to the HPE space before the postadaptation nonlinear compression is performed. CIECAM02, like CIECAM97, uses one color space for the chromatic-adaptation transform and another for computing the correlates of the perceptual attributes. CIECAM02 exhibits the best performance in predicting the chromatic adaptation of the image with some degree of sharpening; it should be noted that the CAT02 method also incorporates some degree of sharpening. In comparison, using a space closer to the cone fundamentals, such as HPE or Stockman-Sharpe, provides improved predictions of the perceptual attributes. Blue constancy, a significant shortcoming of CIELAB, is considerably improved using a space closer to the cone fundamentals.9

As discussed in the previous sections, the proposed method incorporates Eq. (4), a tone-mapping function, and the cone-response function to correct the color. In this section, we discuss the differences between the proposed method and some existing methods with respect to visual quality. To demonstrate the feasibility of the color correction, results on a number of well-known images are presented.

These images are available online.21,22 We also used images captured under five standard illuminations (D65, CWF, TL84, A, and UV) and stored in TIFF format. Figures 2–4 present the resulting images after color correction of HDR images using iCAM06,9 the technique proposed in Ref. 1, and the proposed work. In each figure, (a) is the original image and (b) shows the result of iCAM06. The MATLAB code for iCAM06 is publicly available,23 and the resulting images were obtained using the parameters described in Ref. 9. Halos appear in the iCAM06 results as the image adapts to the piecewise bilateral filter; moreover, a red cast is added to the entire resulting image because of the luminance factor in the cone-response function of the chromatic-adaptation transform. In the resulting images of (c), produced by the method of Ref. 1, halos are considerably reduced compared with (b), but a dominating color appears across the entire image: because part of the tone-mapping section has a nonlinear characteristic curve, as shown in Ref. 1, the resulting image has a blue cast. Moreover, both (b) and (c) exhibit a veiling glare around the lamp lights despite the use of a nonlinear property and of both chromatic and achromatic colors. The parameter α in Eq. (3) was set through an empirical test in which it was increased from 0 to 1. With α=0.2, the best performance was obtained for the given images, and the resulting image is improved without color distortion; that is, the halo and dominating-color problems are substantially reduced compared with both (b) and (c), as shown in Figs. 2–4. The veiling glare is also markedly reduced. Figure 5 illustrates why the result of the previous technique appears to add a blue cast compared with Figs. 4(c) and 4(d): its gamut is biased toward blue compared with that of the proposed work.

Fig. 2: “Clockbui” image with (a) original image, (b) iCAM06, (c) previous technique,1 and (d) proposed method (α=0.2).

Fig. 3: “Wreathbu” image with (a) original image, (b) iCAM06, (c) previous technique,1 and (d) proposed method (α=0.2).

Fig. 4: “Rend02_oC95” image with (a) original image, (b) iCAM06, (c) previous technique,1 and (d) proposed method (α=0.2).

Fig. 5: Gamut area for Fig. 3 with (a) previous technique1 and (b) proposed work.

Five images with different illuminations (A, CWF, D65, TL84, and UV) are used to evaluate the effect of the illumination condition. Figure 6 shows the resulting images for the five illuminations; these images were captured in a standard-illumination booth rather than outdoors. Figure 7 presents the distribution of the luminance surround factor of Eq. (11) for these images; the two axes represent the pixel index of the input image and the resulting luminance surround factor, respectively. Whereas iCAM06 fixes the value at F=1 for an average surround or F=0.8 for dim and dark surrounds, the results of the proposed factor [Eq. (11)] follow an S-shaped curve. A wide variety of scenes in the given image is thus addressed by the proposed surround luminance factor, whose input is the luminance level of the pixels in the given image. The maximum values are (a) 0.0971 for the “A” illumination, (b) 0.0959 for the “CWF” illumination, (c) 0.1010 for the “D65” illumination, (d) 0.0881 for the “TL84” illumination, and (e) 0.0804 for the “UV” illumination. Figure 8 presents a quantitative evaluation using the gamut area [ICC3D version 1.2.9, copyright (c) 2002-2003 Gjovik University College]. Figures 8(a)-8(e) show the gamut areas for the “A,” “CWF,” “D65,” “TL84,” and “UV” standard illuminations, respectively, where the symbols a+ (b+) and a− (b−) denote the positive and negative a* (b*) values of CIELAB. The colors are distributed similarly across the gamut areas. Therefore, the results indicate that the proposed method maintains its performance regardless of the illumination condition.

Fig. 6: Five standard illumination images with (a) A illumination, (b) CWF illumination, (c) D65 illumination, (d) TL84 illumination, and (e) UV illumination.

Fig. 7: Distribution of the luminance factor with (a) A illumination, (b) CWF illumination, (c) D65 illumination, (d) TL84 illumination, and (e) UV illumination.

Fig. 8: Gamut area for Fig. 4 with (a) A illumination, (b) CWF illumination, (c) D65 illumination, (d) TL84 illumination, and (e) UV illumination.

Figure 9 shows the resulting images for different values of the parameter α in Eq. (3). Figure 9(a) is the original image, captured under a “D65” standard illumination; Figs. 9(b)-9(f) are the resulting images for α=0.1, 0.2, 0.3, 0.4, and 0.5, respectively. Color changes occur across the entire resulting image when the parameter is set above α=0.3. Figures 9(b) and 9(c) show better performance than the other images. To select the best resulting image according to the parameter, 15 observers participated in a test. Most of the observers selected Fig. 9(c), α=0.2, which is why that value is used in Figs. 2–4.

Fig. 9: Resulting images for the image captured under D65 standard illumination according to the parameter α in Eq. (3), with (a) original image and (b)-(f) resulting images for α=0.1, 0.2, 0.3, 0.4, and 0.5, respectively.

The color difference (Δu′v′) between the image captured under standard illumination and the color-reproduction image is introduced to evaluate the color-reproduction performance of the proposed method using the CIE 1976 u′v′ color space,24 which is widely used for color-reproduction evaluations.
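A direct computation of this mean color difference from XYZ images, assuming the standard CIE 1976 u′, v′ chromaticity formulas, is sketched below:

import numpy as np

def mean_delta_uv(xyz_ref, xyz_test):
    """Mean CIE 1976 u'v' color difference between two XYZ images."""
    def uv(xyz):
        X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
        d = X + 15.0 * Y + 3.0 * Z          # assumes nonzero denominators
        return 4.0 * X / d, 9.0 * Y / d
    u1, v1 = uv(np.asarray(xyz_ref, float))
    u2, v2 = uv(np.asarray(xyz_test, float))
    return np.mean(np.hypot(u1 - u2, v1 - v2))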

Figure 10 and Table 2 display the mean color differences obtained for iCAM06, the previous technique,1 and the proposed method. In the resulting diagram, the proposed method shows a lower mean color difference25 than the conventional methods. For the image captured under the D65 standard illumination, the maximum difference between the original image and the rendered image is 0.0192. The human eye perceives a color difference when the difference between two separated color patches is Δu′v′ ≥ 0.04;26,27 therefore, this indicates an excellent performance, because the measured color difference is smaller than the minimum perceptible color difference. Figure 11 presents the resulting images of the proposed method. In each image, (a) is the original image, (b) is obtained using iCAM06, (c) is that of the previous technique, and (d) is the resulting image of the proposed method. The results obtained are similar to those of the above-mentioned approaches.

Table 2: Mean color difference for the captured images under five different standard illuminations.
Fig. 10: Mean color difference (Δu′v′) for the captured images under five different standard illuminations.

Fig. 11: Experimental resulting images using the proposed method: (a) original image, (b) iCAM06, (c) previous technique,1 and (d) proposed method.

To conduct a subjective evaluation, a psychophysical experiment was performed. A total of 15 observers with normal color vision participated in the test, and five different illumination images were used to assess the color-rendering algorithms with respect to color reproduction, brightness, and colorfulness. These images were captured under five different illumination conditions, as shown in Fig. 6. A paired-comparison method was used for the psychophysical experiment, comparing iCAM06, the previous technique,1 and the proposed method. Because the HVS is subject to veiling glare against a white background, such as a lighted room, the experiment was conducted in a dark room. The parameters of each algorithm were fixed at the values suggested in the corresponding papers. Each observer judged a pair of images and assigned a “1” to the selected image and a “0” to the rejected image; in the case of a draw, “0.5” was assigned to each image. The scores were then summed and transformed into preference scores,28 as sketched below. Figure 12 and Table 3 show the results of the psychophysical experiment for the images captured under the five standard illuminations; the preference scores of the proposed method are higher than those of the conventional methods. Figure 13 and Table 4 show the preference scores for the 30 different images; the result is similar to that of Fig. 12 and Table 3.
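A minimal sketch of this tallying follows; the example counts are hypothetical, and the transformation to preference scores in Ref. 28 may differ from this simple normalization.

import numpy as np

def preference_scores(wins):
    """wins[i, j] is the total score method i earned against method j over
    all observers and images (1 = preferred, 0 = rejected, 0.5 each for a
    draw). Returns each method's share of all awarded points."""
    wins = np.asarray(wins, float)
    return wins.sum(axis=1) / wins.sum()

# Hypothetical tallies for (iCAM06, previous technique, proposed method).
scores = preference_scores([[ 0, 12,  8],
                            [18,  0, 10],
                            [22, 20,  0]])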

Fig. 12: Preference scores for the captured images under five different standard illuminations.

Table 3: Preference scores for the captured images under five different standard illuminations.
Fig. 13: Preference scores for 30 different images.

Table 4: Preference scores for 30 different images.

In the natural world, our eyes are confronted with a broad range of luminances. Eye adaptation is the mechanism that allows our eyes, by changing their sensitivity, to remain responsive at varying illumination levels. HDR imaging is a technique that mimics the human eye and captures a wide range of light intensities through multiple exposures. The HDR imaging technique, however, must address the issue of displaying HDR images on LDR display devices. Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of the brightness, contrast, and color of HDR images on LDR display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of psychophysical experiments based on the HVS. The mismatches relate particularly to color, as described in many research articles. Some of the conventional methods also exhibit issues such as halo artifacts, graying-out, and dominating color.

In this paper, a color-rendering method comprising tone mapping and a chromatic-adaptation method based on the XYZ tristimulus color space was proposed. The method addressed the existing problems of the conventional methods. The tone-mapping operator produced visibility and an overall impression of brightness, contrast, and color in the given HDR images when mapped onto LDR display devices. The tone-mapped image was obtained from chromatic and achromatic colors (the absolute luminance Y component). The resulting image was then processed with the cone-response function, with emphasis placed on the HVP, to address the mismatch between the original scene and the rendered image based on the HVS. The surround luminance factor was used to manage a diverse variety of actual scenes instead of the fixed average surround of iCAM06.

In the experiments on gamut (in CIELAB) and color difference (in the CIE 1976 u′v′ color space), more than 30 images and five different illumination images were used for evaluation. The results demonstrated that the proposed method remarkably reduces the dominating color throughout the image compared with the conventional methods, regardless of the illumination. The color-difference measure was used for the quantitative evaluation of color reproduction in the given images under the five standard illuminations. In this evaluation, most of the resulting images of the proposed method have lower values than those of the conventional methods, except for the “CWF” standard illumination image. For the subjective evaluation, the psychophysical experiment was performed using the five standard illuminations and 30 different images; the preference scores of the proposed method are higher than those of the other methods. The proposed method thus shows better performance than the others across several evaluation techniques, including the color-difference measure and the psychophysical experiment. Nevertheless, there remains room to improve performance, given that the goal of tone mapping is to produce visibility and an overall impression of brightness, contrast, and color in a given HDR image when mapped onto relatively LDR displays or printers based on HVP. We will continue to investigate these problems in the near future.

This research was financially supported by the “Over regional linked 3D convergence industry promotion program” through the Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT).

References

Yun B. J. et al., “Tone-mapping and dynamic range compression using dynamic cone response,” Opt. Rev. 20(6), 513–520 (2013).
Ward G., High Dynamic Range Imaging, Elsevier, Amsterdam (2005).
Devlin K., “A review of tone reproduction techniques,” Technical Report CSTR-02-005, Computer Science (2002).
Reinhard E. et al., High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting, Morgan Kaufmann (2005).
DiCarlo J. M. and Wandell B. A., “Rendering high dynamic range images,” Proc. SPIE 3965, 392–401 (2000).
Kotera H. and Fujita M., “Appearance improvement of color image by adaptive scale-gain retinex model,” in Proc. IS&T/SID 10th Color Imaging Conf., pp. 166–171, IS&T, Springfield, Virginia (2002).
Wang L., Horiuchi T., and Kotera H., “High dynamic range image compression by fast integrated surround retinex model,” J. Imaging Sci. Technol. 51(1), 34–43 (2007).
Wang L. et al., “HDR image compress and evaluation based on local adaptation retinal model,” J. Soc. Inf. Disp. 15(9), 731–739 (2007).
Kuang J., Johnson G. M., and Fairchild M. D., “iCAM06: a refined image appearance model for HDR image rendering,” J. Visual Commun. Image Represent. 18(5), 406–414 (2007).
Hunt R. W. G., The Reproduction of Colour, 6th ed., Wiley, New York (2004).
Susstrunk S., Holm J., and Finlayson G. D., “Chromatic adaptation performance of different RGB sensors,” Proc. SPIE 4300, 172–183 (2001).
Fairchild M. D., Color Appearance Models, 2nd ed., Wiley, New York (2005).
Barnard K. and Funt B., “Investigations into multi-scale retinex,” in Colour Imaging: Vision and Technology, pp. 9–17 (1999).
Ebner M., Color Constancy, Wiley, New York (2007).
von Kries J., Chromatic Adaptation, Festschrift der Albrecht-Ludwigs-Universität (1902).
Barnard K., Ciurea F., and Funt B., “Spectral sharpening for computational color constancy,” J. Opt. Soc. Am. A 18, 2728–2743 (2001).
Yun B. J. et al., “Color correction for high dynamic range images using a chromatic adaptation method,” Opt. Rev. 20(1), 65–73 (2013).
Jang I. S., Park K. H., and Ha Y. H., “Color correction by estimation of dominant chromaticity in multi-scaled retinex,” J. Imaging Sci. Technol. 53(5), 50502 (2009).
Mantiuk R. et al., “Color correction for tone mapping,” Comput. Graph. Forum (Eurographics) 28(2) (2009).
Kang H. R., Computational Color Technology, SPIE Press, Bellingham, Washington (2006).
Ohta N. and Robertson A. R., Colorimetry: Fundamentals and Applications, Wiley, New York (2005).
Chae S. M. et al., “A tone compression model for the compensation of white point shift generated from HDR rendering,” IEICE Trans. Fundamentals 95(8), 1297–1301 (2012).
Hunt R. W. G., The Reproduction of Colour in Photography, Printing & Television, Fountain Press, England (1987).
VESA Display Metrology Committee, Flat Panel Display Measurements Standard, VESA (2001).
Morovic J., Color Gamut Mapping, Wiley, New York (2008).

Ho-Hyoung Choi received his BS and MS degrees in computer and communication engineering, and computer and electronics, from Gyeongju University in 1997 and 2003, respectively, and his PhD in mobile communication engineering from Kyungpook National University in 2012. His current research interests include image processing, color rendering, color vision, computer vision, and SVM.

Gi-Seok Kim received BS and MS degrees in electronics engineering from Kyungpook National University, Daegu, Korea, in 1992 and 1994, respectively, and his PhD in electronic engineering from Kyungpook National University. Since 1997, he has been with the School of Electrical Energy and Computer Engineering, Gyeongju University, where he is a professor. His current research interests include image processing, image segmentation, and image enhancement.

Byoung-Ju Yun received a BS degree in electronics engineering from Kyungpook National University, Daegu, Korea, in 1993, and MS and PhD degrees in electrical engineering and computer science from the Korea Advanced Institute of Science and Technology, Daejeon, Korea, in 1996 and 2002, respectively. Since 2003, he has been with the School of Electronics Engineering, College of IT Engineering, Kyungpook National University, where he is a professor. His current research interests include MPEG-4, color correction, SVC, and HCI.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.



