Regular Articles

Improved structural similarity metric for the visible quality measurement of images

Author Affiliations
Daeho Lee

Kyung Hee University, Humanitas College, 1732, Deogyeong-daero, Giheung-gu, Yongin 17104, Republic of Korea

Sungsoo Lim

Kyung Hee University, Department of Electronics and Radio Engineering, 1732, Deogyeong-daero, Giheung-gu, Yongin 17104, Republic of Korea

J. Electron. Imaging. 25(6), 063015 (Dec 07, 2016). doi:10.1117/1.JEI.25.6.063015
History: Received April 23, 2016; Accepted November 11, 2016

Open Access

Abstract.  The visible quality assessment of images is important for evaluating the performance of image processing methods such as image correction, compression, and enhancement. Structural similarity is widely used to determine visible quality; however, existing structural similarity metrics cannot correctly assess the perceived human visibility of images that have been slightly geometrically transformed or that have undergone significant regional distortion. We propose an improved structural similarity metric that is closer to human visual evaluation. Compared with existing metrics, the proposed method more correctly evaluates the similarity between an original image and various distorted images.


Objective assessment of image quality is crucial for image processing applications because it allows the results of different methods to be compared when evaluating performance, e.g., for image correction, compression, and enhancement methods such as denoising, JPEG compression, super-resolution, and frame rate upconversion.1–7 However, almost all objective evaluation metrics do not completely agree with the perceived subjective visibility of humans, while subjective evaluation is usually too inconvenient, time-consuming, and expensive.8

The simplest and most widely used metrics are the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR). The MSE is the average of the squared differences between two signals, and the PSNR is the ratio between the maximum possible value (Max) of a signal and the MSE:

$$\mathrm{MSE} = \frac{1}{M}\sum_{i=1}^{M}(x_i - y_i)^2, \tag{1}$$

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{Max}^2}{\mathrm{MSE}}\right), \tag{2}$$

where $x_i$ and $y_i$ are the elements of the two signals and $M$ is the number of elements; for image signals the elements are pixels, so $M$ is the number of pixels. However, the MSE and PSNR do not match perceived visible quality well.9–13 Many image quality assessment methods based on error sensitivity have been proposed,14–19 using the human visual system (HVS), the contrast sensitivity function, the discrete cosine transform, the wavelet transform, and so forth. However, the similarity errors they report may differ considerably from the actual loss of quality: some distortions are clearly visible yet produce only small errors under these metrics.8
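Eqs. (1) and (2) translate directly into code; the minimal sketch below assumes grayscale images stored as NumPy float arrays (the function names are ours, for illustration only):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two equal-size images, Eq. (1)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return np.mean((x - y) ** 2)

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (2)."""
    err = mse(x, y)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```

For example, two 8-bit images differing everywhere by the full range (0 vs. 255) give an MSE of 255² and hence a PSNR of 0 dB.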

Recently, the structural similarity (SSIM) index has typically been used to determine visible quality.8,20 It is a full-reference image quality assessment method that indicates how similar an image is to the original image. It has three main components: structure, luminance, and contrast. However, the components, especially the structure component, are highly sensitive to translation, scaling, and rotation of an image. This means that even when images are translated or rotated by an imperceptibly small amount, the SSIM decreases sharply.21 Moreover, it may overestimate images that have undergone regional distortions such as JPEG compression.

In this paper, we aim to develop an improved structural similarity metric that outperforms the typical SSIM by overcoming these drawbacks. The proposed metric uses an improved structure comparison and additionally uses a sharpness comparison.

Since humans judge image quality mainly from contrast, color, and frequency changes,22 the SSIM combines the luminance, contrast, and structure comparisons shown in Fig. 1 (Refs. 8 and 22). The SSIM of two images x and y is defined by a combination f(·) of three components:8

$$\mathrm{SSIM}(x,y) = f[l(x,y),\, c(x,y),\, s(x,y)], \tag{3}$$

where l, c, and s are the luminance, contrast, and structure comparison functions, respectively, defined by

$$l(x,y) = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \tag{4}$$

$$c(x,y) = \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \tag{5}$$

$$s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}, \tag{6}$$

where $\mu_x$ and $\sigma_x$ denote the mean and standard deviation of x; $\mu_y$ and $\sigma_y$ denote the mean and standard deviation of y; $\sigma_{xy}$ denotes the covariance between x and y; and $C_1$, $C_2$, and $C_3$ are constants that avoid instability when the denominators are very close to zero. The values of l, c, and s lie in [0, 1], and for each comparison function a value close to 1 indicates higher similarity. The local statistics are calculated within a local window with circularly symmetric Gaussian weights $w = \{w_i \mid i = 1, 2, \ldots, N\}$, normalized so that $\sum_{i=1}^{N} w_i = 1$:
$$\mu_x = \sum_{i=1}^{N} w_i x_i, \tag{7}$$

$$\sigma_x = \left[\sum_{i=1}^{N} w_i (x_i - \mu_x)^2\right]^{1/2}, \tag{8}$$

$$\sigma_{xy} = \sum_{i=1}^{N} w_i (x_i - \mu_x)(y_i - \mu_y), \tag{9}$$

where i indexes the pixels in the Gaussian window and N is the total number of pixels in the window.
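The local statistics of Eqs. (7)–(9) can be computed for every pixel at once by filtering the image (and its products) with the Gaussian window. The sketch below uses SciPy's `gaussian_filter`, whose kernel is normalized to unit sum as required; note that its truncated kernel only approximates a fixed-size window, and the function name is ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_stats(x, y, sigma=1.5):
    """Gaussian-weighted local mean, variance, and covariance maps, Eqs. (7)-(9)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x = gaussian_filter(x, sigma)
    mu_y = gaussian_filter(y, sigma)
    # E[x^2] - E[x]^2 form of the weighted variance/covariance
    var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
    var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
    cov_xy = gaussian_filter(x * y, sigma) - mu_x * mu_y
    return mu_x, mu_y, var_x, var_y, cov_xy
```

On a constant image the local mean equals the constant and the local variance and covariance vanish, which is a quick consistency check.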

Fig. 1: Diagram of the SSIM measurement system.

The combination of all comparisons between two images x and y is

$$\mathrm{SSIM}(x,y) = [l(x,y)]^{\alpha} \cdot [c(x,y)]^{\beta} \cdot [s(x,y)]^{\gamma}, \tag{10}$$

where $\alpha>0$, $\beta>0$, and $\gamma>0$ are parameters that adjust the relative importance of the components. To simplify the expression and give the three components equal importance, one generally sets $\alpha=\beta=\gamma=1$ and $C_3=C_2/2$; we set the parameters in the same manner.8,21 This results in the specific form of the SSIM index

$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}. \tag{11}$$

To obtain a single overall quality measure for the entire image, the mean SSIM (MSSIM) index is used:

$$\mathrm{MSSIM}(X,Y) = \frac{1}{M}\sum_{i=1}^{M} \mathrm{SSIM}(x_i, y_i), \tag{12}$$

where X and Y are the original and distorted images, respectively, and M is the number of pixels, as in Eq. (1).8 MSSIM can be interpreted as the mean of the SSIM index map.23 Because SSIM values lie in [0, 1], MSSIM has the same range.
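Eqs. (11) and (12) can be combined into one short routine; this is a sketch under the stated settings ($\alpha=\beta=\gamma=1$, $C_3=C_2/2$), again using SciPy's Gaussian filtering as an approximation of the windowed local statistics:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mssim(x, y, max_val=255.0, sigma=1.5):
    """Mean SSIM over the image, Eqs. (11)-(12)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1 = (0.01 * max_val) ** 2
    C2 = (0.03 * max_val) ** 2
    mu_x, mu_y = gaussian_filter(x, sigma), gaussian_filter(y, sigma)
    var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
    var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
    cov = gaussian_filter(x * y, sigma) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return float(np.mean(ssim_map))  # Eq. (12): mean of the SSIM index map
```

By construction, an image compared against itself yields an MSSIM of 1, the unique maximum.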

The SSIM and MSSIM can be used to measure the similarity of two images. However, they have some drawbacks, as shown in Fig. 2 and Table 2. First, images filtered by a low-pass operation, such as a mean filter (MF), a median filter (MedF), or JPEG compression, are evaluated as having high similarity scores. Second, images slightly distorted by geometric transformations, such as spatial translation (ST) and rotation (RT), are evaluated as having low similarity scores.

Fig. 2: Comparison of image similarity.

The main component of the SSIM that causes these drawbacks is the structure comparison defined by Eq. (6). When Eq. (3) combines only Eqs. (4) and (5), slightly geometrically transformed images no longer receive low similarities, as shown in Fig. 3 and Table 1, where $\bar{l}(x,y)$, $\bar{c}(x,y)$, and $\bar{s}(x,y)$ are the means of l(x,y) in Eq. (4), c(x,y) in Eq. (5), and s(x,y) in Eq. (6), respectively. In Table 1, $\bar{s}(x,y)$ of the ST image is very low, while $\bar{s}(x,y)$ of the JPEG image is higher than that of the ST image. This example shows the limitation that the SSIM is sensitive to spatial translation, scaling, and rotation.

Fig. 3: Comparison of the original, ST, and JPEG compression images.

Table 1: Comparison of MSSIM and its components with MISSIM-S and its components for Fig. 3.

To reduce this shortcoming of s(x,y), we define the structure comparison in a new way:

$$\tilde{s}(x,y) = \frac{(2\sigma_x^-\sigma_y^- + C_2)(2\sigma_x^+\sigma_y^+ + C_2)}{\left[(\sigma_x^-)^2 + (\sigma_y^-)^2 + C_2\right]\left[(\sigma_x^+)^2 + (\sigma_y^+)^2 + C_2\right]}, \tag{13}$$

where $\sigma_x^-$ and $\sigma_x^+$ denote the standard deviations over the elements of x smaller and larger than $\mu_x$, respectively, and $\sigma_y^-$ and $\sigma_y^+$ denote the same for y. In Ref. 8, structural information in an image is defined as those attributes that represent the structure of objects in the scene, independent of the average luminance and contrast, and the structure comparison is conducted after luminance subtraction and variance normalization; s(x,y) is thus defined by the correlation between the standard scores (z-scores)24 $(x-\mu_x)/\sigma_x$ and $(y-\mu_y)/\sigma_y$. In contrast, we define $\tilde{s}(x,y)$ through the standard deviations of the pixels with negative and positive standard scores, because $\sigma_x^-$ and $\sigma_x^+$ can represent the structure of objects by separating locally darker and brighter regions. As shown in Fig. 3 and Table 1, the shortcoming of s(x,y) is reduced relative to the original SSIM; however, the similarity of the ST image remains lower than that of the JPEG image. That is, the SSIM still overestimates blurred images even when $\tilde{s}$ is used as the structure comparison. Therefore, we add a new component, the sharpness comparison h(x,y), the correlation between the normalized digital Laplacians, defined as

$$h(x,y) = \frac{2\,|\nabla^2 x|\,|\nabla^2 y| + C_2}{|\nabla^2 x|^2 + |\nabla^2 y|^2 + C_2}, \tag{14}$$

where $\nabla^2 x$ and $\nabla^2 y$ denote the normalized digital Laplacians, given by

$$\nabla^2 x = x - \mu_x. \tag{15}$$
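The two new components can be sketched as pixel-wise maps. This is one plausible reading of Eqs. (13)–(15), not the authors' reference implementation: the split standard deviations are approximated here by Gaussian-weighting the squared negative and positive parts of $x-\mu_x$, and all function names are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_stds(x, sigma=1.5):
    """Approximate sigma^- and sigma^+: local standard deviations of the
    pixels below / above the local mean (one reading of Eq. (13))."""
    x = np.asarray(x, dtype=np.float64)
    mu = gaussian_filter(x, sigma)
    d = x - mu
    sig_neg = np.sqrt(gaussian_filter(np.minimum(d, 0.0) ** 2, sigma))
    sig_pos = np.sqrt(gaussian_filter(np.maximum(d, 0.0) ** 2, sigma))
    return sig_neg, sig_pos

def s_tilde(x, y, C2, sigma=1.5):
    """Proposed structure comparison map, Eq. (13)."""
    sxn, sxp = split_stds(x, sigma)
    syn, syp = split_stds(y, sigma)
    return ((2 * sxn * syn + C2) * (2 * sxp * syp + C2)) / (
        (sxn ** 2 + syn ** 2 + C2) * (sxp ** 2 + syp ** 2 + C2))

def h_comp(x, y, C2, sigma=1.5):
    """Sharpness comparison map, Eqs. (14)-(15), on |x - mu_x| and |y - mu_y|."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    lap_x = np.abs(x - gaussian_filter(x, sigma))
    lap_y = np.abs(y - gaussian_filter(y, sigma))
    return (2 * lap_x * lap_y + C2) / (lap_x ** 2 + lap_y ** 2 + C2)
```

Both maps equal 1 everywhere when an image is compared against itself, consistent with the metric properties listed below.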

The new similarity components $\tilde{s}(x,y)$ and h(x,y) satisfy the properties required of measurement metrics:

  1. Symmetry: $S(x,y) = S(y,x)$;
  2. Boundedness: $S(x,y) \le 1$;
  3. Unique maximum: $S(x,y) = 1$ if and only if $x = y$.

As shown in Table 1, the mean of h(x,y) for the ST image is higher than that for the JPEG image. Finally, the improved SSIM including the sharpness comparison (ISSIM-S) can be defined as

$$\text{ISSIM-S}(x,y) = l(x,y)\cdot c(x,y)\cdot \tilde{s}(x,y)\cdot h(x,y), \tag{16}$$

and the proposed ISSIM-S measurement system is configured as shown in Fig. 4.

Fig. 4: Diagram of the proposed ISSIM-S measurement system.

To obtain a single overall quality measure for the entire image, a mean ISSIM-S (MISSIM-S) index may be used:

$$\text{MISSIM-S}(X,Y) = \frac{1}{M}\sum_{i=1}^{M} \text{ISSIM-S}(x_i, y_i). \tag{17}$$

The values of ISSIM-S and MISSIM-S also lie in [0, 1], with values close to 1 indicating higher similarity.
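Eqs. (16) and (17) can be assembled into a single self-contained routine. As above, this is a sketch, not the authors' code: SciPy's Gaussian filtering stands in for the fixed window, and the split standard deviations of Eq. (13) use one plausible Gaussian-weighted interpretation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def missim_s(x, y, max_val=255.0, sigma=1.5):
    """Mean ISSIM-S, Eqs. (16)-(17): mean of l * c * s~ * h over all pixels."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1 = (0.01 * max_val) ** 2
    C2 = (0.03 * max_val) ** 2
    g = lambda a: gaussian_filter(a, sigma)
    mu_x, mu_y = g(x), g(y)
    var_x = np.maximum(g(x * x) - mu_x ** 2, 0.0)
    var_y = np.maximum(g(y * y) - mu_y ** 2, 0.0)
    sig_x, sig_y = np.sqrt(var_x), np.sqrt(var_y)
    # luminance and contrast comparisons, Eqs. (4)-(5)
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    c = (2 * sig_x * sig_y + C2) / (var_x + var_y + C2)
    # split standard deviations for pixels below/above the local mean, Eq. (13)
    def split(a, mu):
        d = a - mu
        return (np.sqrt(g(np.minimum(d, 0.0) ** 2)),
                np.sqrt(g(np.maximum(d, 0.0) ** 2)))
    sxn, sxp = split(x, mu_x)
    syn, syp = split(y, mu_y)
    s_t = ((2 * sxn * syn + C2) * (2 * sxp * syp + C2)) / (
        (sxn ** 2 + syn ** 2 + C2) * (sxp ** 2 + syp ** 2 + C2))
    # sharpness comparison on the normalized Laplacians, Eqs. (14)-(15)
    lx, ly = np.abs(x - mu_x), np.abs(y - mu_y)
    h = (2 * lx * ly + C2) / (lx ** 2 + ly ** 2 + C2)
    return float(np.mean(l * c * s_t * h))  # Eq. (17)
```

As with MSSIM, an image compared against itself yields the unique maximum of 1.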

To evaluate the proposed similarity metric against the PSNR and the SSIM, we tested the distorted images shown in Fig. 2. In this test, we used an 11×11 circularly symmetric Gaussian weight function with a standard deviation of 1.5, normalized to unit sum. The constants were selected as C1=(0.01·255)^2, C2=(0.03·255)^2, and C3=C2/2, as in Ref. 8. These values may seem somewhat arbitrary, but Wang et al. found that in their experiments the performance of the SSIM index is fairly insensitive to variations of these values.
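The 11×11 window used in the experiments can be built directly; this short sketch (with a function name of our choosing) constructs a circularly symmetric Gaussian normalized to unit sum:

```python
import numpy as np

def gaussian_window(size=11, sigma=1.5):
    """Circularly symmetric Gaussian window normalized so the weights sum to 1."""
    r = np.arange(size) - (size - 1) / 2.0          # coordinates centered at 0
    g = np.exp(-(r[:, None] ** 2 + r[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()
```

The unit sum guarantees that the weighted local mean of Eq. (7) preserves constant regions.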

The local variance similarity between the original and the histogram-equalized image differs considerably because histogram equalization (HE) is a nonlinear intensity transform. The SSIM nevertheless assigns it a high similarity score, whereas our new metric evaluates it as less similar than the SSIM does. The ISSIM-S values of the images filtered by low-pass operations such as MF, MedF, and JPEG compression are likewise lower than the corresponding SSIM values. In addition, the ISSIM-S values of images slightly geometrically transformed by ST and RT are higher than the SSIM values. For the mean luminance shifting (MLS) and impulsive noise (IN) images, the SSIM and the ISSIM-S evaluate the same image but yield different values.

To compare the index maps of the SSIM and the ISSIM-S, the results for HE, MedF, JPEG, and MF are shown in Fig. 5. The pixel values of an index map are the normalized SSIM or ISSIM-S values. The index maps differ: those of the ISSIM-S are darker than those of the SSIM because the MISSIM-S values are lower than the MSSIM values. In contrast, the index maps of the ISSIM-S for IN, ST, and RT are brighter than those of the SSIM because the ISSIM-S similarities are higher, as shown in Fig. 6. The index maps for MLS are very similar, as shown in Fig. 7.

Fig. 5: Comparison of image similarity (from left to right: the evaluated images of Fig. 2, index maps of the SSIM, and index maps of the ISSIM-S).

Fig. 6: Comparison of image similarity (from left to right: the evaluated images, index maps of the SSIM, and index maps of the ISSIM-S).

Fig. 7: Comparison of image similarity (from left to right: the evaluated images, index maps of the SSIM, and index maps of the ISSIM-S).

To compare with mean opinion scores (MOSs), the ranks of the PSNR, the mean of the SSIM, the mean of the ISSIM-S, and the MOS are shown in Table 2. To measure the MOSs, we showed subjects the result image of each processing alongside the original image and collected their opinion scores, ranging from 1 (not similar) to 5 (very similar). Each comparison was conducted one-on-one against the original image, and the order of the distorted images was randomized to minimize order effects. There were 17 test subjects, none of whom had any visual impairment. The experiments were conducted under controlled illumination and display conditions.

Table 2: Comparison of the PSNR, mean of the SSIM, mean of the ISSIM-S, and MOS rank for the "Lena" image (the rank for each metric is shown in parentheses).

The scores themselves are subjective and not conclusive, but they are meaningful in relative comparison. Therefore, we used MOS ranks instead of the MOS values. The rank correlations with the MOS rank are also shown, where the rank correlation is computed by Spearman's rank correlation coefficient (ρ),25 defined as

$$\rho = 1 - \frac{6\sum d_i^2}{n(n^2 - 1)}, \tag{18}$$

where $d_i$ denotes the difference of the i'th ranks and n denotes the number of ranked items. The rank correlation of the mean of the ISSIM-S is closer to 1 than those of the other metrics.
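Eq. (18) can be checked with a few lines of code; this sketch (function name ours) assumes the two rank vectors contain no ties, which is the case for which the simple formula holds:

```python
import numpy as np

def spearman_rho(ranks_a, ranks_b):
    """Spearman's rank correlation coefficient from two tie-free rank vectors, Eq. (18)."""
    a = np.asarray(ranks_a, dtype=np.float64)
    b = np.asarray(ranks_b, dtype=np.float64)
    d = a - b                      # per-item rank differences
    n = len(a)
    return 1.0 - 6.0 * np.sum(d * d) / (n * (n * n - 1))
```

Identical rankings give ρ = 1 and fully reversed rankings give ρ = −1, the two extremes of the coefficient.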

We also compared the PSNR, SSIM, ISSIM-S, and MOS on another image, shown in Fig. 8, with the results in Table 3. The types of distortion are exactly the same as in Table 2; the only difference is the filter size. The test images in Table 2 have a resolution of 256×256 with an 11×11 filter, whereas the test image in Fig. 8 is 128×128, so we set the filter size to 5×5.

Fig. 8: Comparison of "Einstein" image similarity (from left to right: the evaluated images, index maps of the SSIM, and index maps of the ISSIM-S).

Table 3: Comparison of the PSNR, mean of the SSIM, mean of the ISSIM-S, and MOS rank for the "Einstein" image (the rank for each metric is shown in parentheses).

To evaluate the performance at different distortion levels, we tested a few more images: images blurred by MFs of different sizes, images compressed with various JPEG quality losses, and images translated by different ST amounts (shown in Fig. 9 and Table 4). As the distortion level increases, the PSNR, MSSIM, and mean ISSIM-S all decrease, regardless of the processing type. In ST, however, the PSNR and MSSIM reach their lowest values when the image is translated by only 3 pixels along the y-axis, while the mean ISSIM-S does not. The ISSIM-S is also affected by translation, but it is less sensitive than the PSNR and SSIM.

Fig. 9: Comparison of image similarity for different distortion levels (the numbers in parentheses indicate filter sizes of MF, quality factors of JPEG compression, and pixel amounts of ST).

Table 4: Comparison of the PSNR, mean of the SSIM, and mean of the ISSIM-S for different distortion levels.

We conducted two additional experiments. First, a comparison of ST, MF, and JPEG compression for various scene contents is shown in Fig. 10 and Table 5. The resolution of the tested images in this experiment is 256×256. The PSNR and the mean SSIM score each image in the order ST < MF < JPEG. The mean ISSIM-S, however, shows a different pattern, MF < JPEG < ST, which is more reasonable than the ordering of the PSNR or SSIM. This result shows that the proposed image quality assessment method does not overestimate blurred images and is much less sensitive to geometric transformations, which were among the identified drawbacks of the SSIM. Second, as shown in Fig. 11 and Table 6, we compared the PSNR, the mean SSIM, and the mean ISSIM-S for various combinations of degradations. The oversensitivity of the SSIM to geometric translation also appears when degradations are combined: the MSSIM overvalues HE+IN, while the MISSIM-S evaluates it moderately. This indicates that the MISSIM-S is much closer to the HVS, because, like the HVS, it is less sensitive to a small amount of geometric translation.

Fig. 10: Comparison of image similarity for various scene contents.

Table 5: Comparison of the PSNR, mean of the SSIM, and mean of the ISSIM-S for different scene contents.
Fig. 11: Comparison of image similarity for various combinations of degradations.

Table 6: Comparison of the PSNR, mean of the SSIM, and mean of the ISSIM-S for various combinations of degradations.

In addition, we tested the variation of the MSSIM and MISSIM-S with the size of the Gaussian window, as shown in Fig. 12; an 11×11 window is large enough, because the variations are very small for window sizes larger than 11.

Fig. 12: Variations of MSSIM and MISSIM-S in terms of the size of the Gaussian window.

In this paper, we have proposed an improved structural similarity metric that uses new structure and sharpness comparison functions to overcome the drawbacks of the SSIM metric. The structure comparison uses standard deviations segmented by the local mean, and the sharpness comparison uses the normalized digital Laplacian. The proposed metric evaluates geometrically transformed images with high similarity and does not overestimate blurred images such as JPEG-compressed ones. The experimental results indicate that our similarity metric agrees better with perceived human visibility than existing methods. Therefore, our method can be used to evaluate the performance of various methods such as image enhancement, frame rate upconversion, image compression, super-resolution, and image restoration.

This research was partly supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2015R1D1A1A01059091), and Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. B0101-16-0033, Research and Development of 5G Mobile Communications Technologies using CCN-based Multi-dimensional Scalability).

References

1. Kim U. S. and Sunwoo M. H., "New frame rate up-conversion algorithm with low computational complexity," IEEE Trans. Circuits Syst. Video Technol. 24(3), 384–393 (2014).
2. Babu R. V., Suresh S., and Perkis A., "No-reference JPEG-image assessment using GAP_RBF," Signal Process. 87(6), 1493–1503 (2007).
3. Shnayderman A., Gusev A., and Eskicioglu A. M., "An SVD-based grayscale image quality measure for local and global assessment," IEEE Trans. Image Process. 15(2), 422–429 (2006).
4. Freeman W. T., Jones T. R., and Pasztor E. C., "Example-based super-resolution," IEEE Comput. Graph. Appl. 22(2), 56–65 (2003).
5. Schultz R. R., Meng L., and Stevenson R. L., "Subpixel motion estimation for super-resolution image sequence enhancement," J. Visual Commun. Image Represent. 9(1), 38–50 (1998).
6. Chang S. G., Yu B., and Vetterli M., "Spatially adaptive wavelet thresholding with context modeling for image denoising," IEEE Trans. Image Process. 9(9), 1522–1531 (2000).
7. Buades A., Coll B., and Morel J. M., "A non-local algorithm for image denoising," in Proc. IEEE CS Conf. Computer Vision and Pattern Recognition, pp. 60–65 (2005).
8. Wang Z. et al., "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004).
9. Eckert M. P. and Bradley A. P., "Perceptual quality metrics applied to still image compression," Signal Process. 70(3), 177–200 (1998).
10. Eskicioglu A. M. and Fisher P. S., "Image quality measures and their performance," IEEE Trans. Commun. 43(12), 2959–2965 (1995).
11. Winkler S., "A perceptual distortion metric for digital color video," Proc. SPIE 3644, 175–184 (1999).
12. Teo P. C. and Heeger D. J., "Perceptual image distortion," Proc. SPIE 2179, 127–141 (1994).
13. Wang Z., Bovik A. C., and Lu L., "Why is image quality assessment so difficult?," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 3313–3316 (2002).
14. Osberger W., Bergmann N., and Maeder A., "An automatic image quality assessment technique incorporating high level perceptual factors," in Proc. IEEE Int. Conf. Image Processing, pp. 414–418 (1998).
15. Watson A. B., Hu J., and McGowan J. F. III, "DVQ: a digital video quality metric based on human vision," J. Electron. Imaging 10(1), 20–29 (2001).
16. Watson A. B. et al., "Visibility of wavelet quantization noise," IEEE Trans. Image Process. 6(8), 1164–1175 (1997).
17. Lai Y. K. and Kuo C. C. J., "A Haar wavelet approach to compressed image quality measurement," J. Visual Commun. Image Represent. 11(1), 17–40 (2000).
18. Watson A. B., "DCT quantization matrices visually optimized for individual images," Proc. SPIE 1913, 202–216 (1993).
19. Xu W. and Hauske G., "Picture quality evaluation based on error segmentation," Proc. SPIE 2308, 1454–1465 (1994).
20. Wang Z. and Bovik A. C., "A universal image quality index," IEEE Signal Process. Lett. 9(3), 81–84 (2002).
21. Wang Z. and Simoncelli E. P., "Translation insensitive image similarity in complex wavelet domain," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 573–576 (2005).
22. Al-Najjar Y. A. Y. and Soong D. C., "Comparison of image quality assessment: PSNR, HVS, SSIM, UIQI," Int. J. Sci. Eng. Res. 3(8), I041–I045 (2012).
23. Wang Z., Lu L., and Bovik A. C., "Video quality assessment based on structural distortion measurement," Signal Process. Image Commun. 19(2), 121–132 (2004).
24. Ma K. et al., "Objective quality assessment for color-to-gray image conversion," IEEE Trans. Image Process. 24(12), 4673–4685 (2015).
25. Myers J. L. and Well A. D., Research Design and Statistical Analysis, 2nd ed., p. 508, Lawrence Erlbaum Associates, New Jersey (2003).

Daeho Lee received his MS and PhD degrees in electronics engineering from Kyung Hee University, Republic of Korea, in 2001 and 2005, respectively. He has been an associate professor in the Humanitas College at Kyung Hee University, Republic of Korea, since 2005. His research interests include computer vision, pattern recognition, machine learning, image processing, image fusion, 3-D image reconstruction, computer games, ITS, HCI, electrical impedance tomography analysis, and digital signal processing.

Sungsoo Lim received his BS degrees in electronics and radio engineering and in biomedical engineering and his MS degree in electronics and radio engineering from Kyung Hee University, Republic of Korea, in 2014 and 2016, respectively. He is currently pursuing his PhD in electronic engineering at Kyung Hee University. His research interests include computer vision, image processing, intelligent transportation systems (ITS), human-computer interaction (HCI), and medical image processing.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.




