Regular Articles

Median filtering detection using variation of neighboring line pairs for image forensics

Kang Hyeon Rhee

Chosun University, Department of Electronics Engineering, Gwangju 61452, Republic of Korea

J. Electron. Imaging. 25(5), 053039 (Oct 27, 2016). doi:10.1117/1.JEI.25.5.053039
History: Received June 25, 2016; Accepted September 28, 2016

Open Access

Abstract. Attention to tampering by median filtering (MF) has recently increased in digital image forensics. For MF detection (MFD), this paper presents a feature vector that is extracted from two kinds of variations between neighboring line pairs in the row and column directions. One variation is defined by a gradient difference of the intensity values between the neighboring line pairs, and the other is defined by a coefficient difference of the Fourier transform (FT) between the neighboring line pairs. The constructed 19-dimensional feature vector is composed of two parts: a 9-dimensional part extracted from the space domain of an image and a 10-dimensional part from the frequency domain. The feature vector is trained in a support vector machine classifier for MFD in altered images. In the measured performances, the area under the receiver operating characteristic curve (AUC) computed from the sensitivity (PTP: the true-positive rate) and 1-specificity (PFP: the false-positive rate) is above 0.985, and the classification ratios are also above 0.979. Pe (a minimal average decision error) ranges from 0 to 0.024, and PTP at PFP = 0.01 ranges from 0.965 to 0.996. It is confirmed that the grade of the proposed variation-based MF detection method is rated as “Excellent (A)” because the AUC is above 0.9.


Introduction

Image alterations for forgeries include compression, filtering, averaging, rotation, mosaic editing, and up/downscaling. In particular, median filtering (MF) is preferred by some forgers because it is a nonlinear filter based on order statistics. MF detection can classify images that have been altered with MF.1 Consequently, the prior studies in Refs. 2–5 emphasized that an MF detector is a significant forensic tool for recovering the processing history of a forged image.

To extract 10 features for MF detection (MFD), Kang et al.2 obtained autoregressive (AR) coefficients as a feature vector via an AR model of the median filter residual (MFR AR), which is the difference between the values of the original image and those of its median-filtered version. Analyzing an image’s MFR suppresses image content that may interfere with MFD.

Yuan3 proposed the median filtering forensics (MFF) feature as a combination of five feature subsets based on order statistics and gray levels, designed to capture the local dependence artifact introduced by MF, because the two-dimensional median filter affects either the order or the quantity of the gray levels in an image region. The MFF method employs five entries for the feature set extraction, producing a set of 44 features per image. These sets include features such as the distribution of the block median pixel value and the distribution of the number of distinct gray levels within a window. The experimental results in Ref. 3 show that the MFF achieves comparable or better performance than the subtractive pixel adjacency matrix (SPAM)-based method4 for high- and medium-quality-factor JPEG postcompression and for low-resolution JPEG images. However, as with Kirchner and Fridrich’s technique in Ref. 4, the performance of Yuan’s technique in Ref. 3 decreases as the JPEG quality factor is lowered or as the examined image size shrinks.

The computing time to extract the MFF feature vector from the combined entries is long, and the performances of both the SPAM-based and the MFF-based detectors degrade depending on the size of the analyzed up- or downscaled images. Thus, there is a need for a more reliable method to detect MF in the case of up- and downscaling, which is desirable in practical applications.

In this paper, a new variation-based MFD method is proposed in which the feature vector is constructed using two kinds of variations between the neighboring line pairs in a digital image. One is extracted in the space domain, and the other is extracted in the frequency domain.

The rest of the paper is organized as follows. Section 2 briefly presents the theoretical background of the MFR AR and the MFF methods. In Sec. 3, the variation of neighboring line pairs is computed for the extraction of the feature vector, and the composition of the new feature vector is described. The experimental results of the proposed method are shown in Sec. 4, where the performance is compared with the MFR AR and the MFF methods, followed by some discussion. Finally, conclusions are drawn.

Feature Set for Median Filtering Detection

The median pixel values obtained from overlapping filter windows are related to one another because the overlapping windows share several pixels in common. Accordingly, most state-of-the-art MF detectors in Refs. 2–5 use feature vectors of different lengths and employ various methods for extracting the feature set. Extending the length of the feature vector to increase the classification ratio between altered and unaltered images lengthens both the extraction time of the feature set and the training-testing time. In Ref. 5, kernel principal component analysis is used to reduce the length of the feature vector; therefore, additional computational time is required. Most MF detectors employ several conventional extraction methods that operate in the space/frequency domain or use statistical theory.

Median Filter Residual Method

Kang et al.2 used a 10-dimensional (10-D) feature vector extracted from the AR coefficients of the difference image between the original and its MF image, which they termed the MFR. In Ref. 2, the authors attempted to reduce interference from an image’s edge content and from the block artifacts of JPEG compression, and proposed gathering detection features from the MFR.

The difference between the original image and its MF image is used to construct the AR model. This difference is referred to as the MFR, which is formally defined as

d(i,j) = med_w[y(i,j)] − y(i,j) = z(i,j) − y(i,j),  (1)

where (i,j) is a pixel coordinate and w is the MF window size.
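Eq. (1) can be sketched in a few lines (a minimal illustration using SciPy’s median filter; the function name is ours, not from Ref. 2):

```python
import numpy as np
from scipy.ndimage import median_filter

def median_filter_residual(y, w=3):
    """MFR of Eq. (1): d = med_w(y) - y."""
    z = median_filter(y.astype(np.float64), size=w)  # z = med_w(y)
    return z - y

# A constant image is unchanged by median filtering, so its residual
# is exactly zero.
flat = np.full((8, 8), 7.0)
print(bool(np.all(median_filter_residual(flat) == 0)))  # True
```

For a genuinely median-filtered image, the residual is small almost everywhere, which is what makes it a useful forensic signal.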

Subsequently, the AR coefficients are computed as

a_k^(r) = AR[mean(d^(r))],  (2)

a_k^(c) = AR[mean(d^(c))],  (3)

a_k = (a_k^(r) + a_k^(c))/2,  (4)

where r and c denote the row and column directions, respectively, k is the AR order number with 1 ≤ k ≤ p, and p is the maximum order number. In Eq. (4), the AR coefficients in both directions are averaged to obtain a single one-dimensional AR model.

The MFR is then expressed with the AR coefficients as follows:

d(i,j) = Σ_{q=1}^{p} a_q^(r) d(i, j−q) + ε^(r)(i,j),  (5)

d(i,j) = Σ_{q=1}^{p} a_q^(c) d(i−q, j) + ε^(c)(i,j),  (6)

where ε^(r)(i,j) and ε^(c)(i,j) are the prediction errors6 in the row and column directions, respectively, and q indexes the surrounding range of (i,j).
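The 10-D MFR AR feature of Eqs. (2)-(4) can be sketched as follows. This is an illustrative reconstruction: Ref. 2 does not pin the estimator down to this exact routine, so a plain Yule-Walker solve is assumed, and the function names are ours:

```python
import numpy as np

def ar_coeffs(x, p):
    """AR(p) coefficients of a 1-D signal via the Yule-Walker
    equations (one common estimator; an assumption here)."""
    x = np.asarray(x, dtype=np.float64)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates at lags 0..p.
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

def mfr_ar_feature(d, p=10):
    """10-D feature of Eqs. (2)-(4): average the row- and
    column-direction AR coefficients of the residual d."""
    a_r = ar_coeffs(d.mean(axis=1), p)  # row direction, Eq. (2)
    a_c = ar_coeffs(d.mean(axis=0), p)  # column direction, Eq. (3)
    return (a_r + a_c) / 2.0            # Eq. (4)

d = np.random.default_rng(0).normal(size=(64, 64))  # stand-in residual
print(mfr_ar_feature(d).shape)  # (10,)
```

In practice `d` would be the MFR of Eq. (1) rather than random noise.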

Median Filtering Forensics Method

Yuan3 proposed detecting MF by measuring the relationships among the pixels within a 3×3 pixel window of an image. The author observed that, in a median-filtered image, the gray level of the block center should occur more frequently within the block after MF. For this reason, the feature sets include the distribution of the block median pixel value and the distribution of the number of distinct gray levels within a window. Moreover, the author proposed a median filter detector that collects the MFF blockwise, based statistically on the pixel values and their distribution in the block. A set of five features in the MFF is extracted from each 3×3 pixel nonoverlapping block:

  1. Distribution of the block median (DBM), denoted with hDBM, accounts for the fact that, in median-filtered images, gray levels in a small block tend to be equal to the block median.
  2. Occurrence of the block-center gray level (OBC), denoted with hOBC, accounts for the fact that the gray level of the block center should occur more frequently in the block after MF.
  3. Quantity of gray levels in a block (QGL), denoted with hQGL: since the median filter reduces noise without introducing new gray levels, it is likely that, after filtering, the number of different gray levels in each block is decreased.
  4. Distribution of the block-center gray level in the sorted gray levels (DBC), denoted with hDBC, considers the frequency of the block-center gray level in the sorted gray levels.
  5. First occurrence of the block-center gray level in the sorted gray levels (FBC), denoted with hFBC, simply considers the first occurrence of the block-center gray level in sorted gray level.

A direct combination of these feature subsets results in a 44-dimensional feature vector (note that h^5_DBC and h^5_DBM are equivalent)

h_MFF = (h_DBM, h_OBC, h_QGL, h_DBC, h_FBC),

which is used for MFF.

The different features of the MFF are then combined heuristically to produce a new index f, and MFD is obtained by simply thresholding f:

f = [h^5_DBM · h^2_OBC · h^6_QGL · (h^3_DBC + h^7_DBC − h^2_DBC − h^8_DBC) · h^3_FBC] / [h^1_OBC · h^9_QGL · (h^2_DBC + h^8_DBC − h^1_DBC − h^9_DBC) · h^2_FBC · h^9_FBC].  (7)

A binary decision is then made according to the index f to determine whether the image has undergone MF.
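As an illustration of the blockwise MFF statistics, the following sketch computes a normalized QGL-style histogram over nonoverlapping 3×3 blocks (a simplified stand-in, not Yuan’s exact feature definition):

```python
import numpy as np

def qgl_histogram(img, max_levels=9):
    """Normalized histogram of the quantity of distinct gray levels
    (QGL) per nonoverlapping 3x3 block: a simplified sketch of one of
    the five MFF feature subsets."""
    h, w = img.shape
    counts = np.zeros(max_levels)
    for i in range(0, h - h % 3, 3):
        for j in range(0, w - w % 3, 3):
            n = len(np.unique(img[i:i + 3, j:j + 3]))  # distinct levels
            counts[n - 1] += 1
    return counts / counts.sum()

img = np.arange(36).reshape(6, 6) % 4  # toy image with four gray levels
print(qgl_histogram(img))
```

After median filtering, mass in this histogram shifts toward lower counts of distinct gray levels, which is the artifact the QGL subset exploits.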

Variation of Neighboring Line Pairs

In an image x, the intensity gradients between the neighboring line pairs in the row and column directions are defined as G^(r) and G^(c),1 respectively, as follows:

G^(r)(i) = 2·x(i) − x(i−1) − x(i+1),  (8)

G^(c)(j) = 2·x(j) − x(j−1) − x(j+1),  (9)

G_k = desc[mean(G^(r)) + mean(G^(c))]/2,  (10)

where r and c are the row and column directions, respectively, and k, mean, and desc denote the feature dimension length, the average, and descending order, respectively. Eq. (10) averages G in both directions to obtain a single dimension.

Furthermore, the row and column differences of the Fourier transform (FT) coefficients (FTcoeff) between the neighboring line pairs are defined as F^(r) and F^(c), respectively, in the same manner as Eqs. (8)-(10), as follows:

F^(r)(i) = 2·FTcoeff[x(i)] − FTcoeff[x(i−1)] − FTcoeff[x(i+1)],  (11)

F^(c)(j) = 2·FTcoeff[x(j)] − FTcoeff[x(j−1)] − FTcoeff[x(j+1)],  (12)

F_k = desc[mean(F^(r)) + mean(F^(c))]/2.  (13)

As in Eq. (10), Eq. (13) averages F in both directions to obtain a single dimension.
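Eqs. (8)-(13) can be sketched as follows. Where the text leaves details open, assumptions are made: the FT is taken per line and its magnitudes are differenced, and the function name is illustrative:

```python
import numpy as np

def variation_features(x, kg=9, kf=10):
    """19-D variation feature sketch of Eqs. (8)-(13). Assumptions:
    the FT is applied per line (row or column) and its magnitudes
    are differenced."""
    x = x.astype(np.float64)
    # Eqs. (8)-(9): gradient differences between neighboring line pairs.
    g_r = np.mean(2 * x[1:-1, :] - x[:-2, :] - x[2:, :], axis=1)
    g_c = np.mean(2 * x[:, 1:-1] - x[:, :-2] - x[:, 2:], axis=0)
    G = np.sort((g_r + g_c) / 2)[::-1][:kg]   # Eq. (10): desc, top kg
    # Eqs. (11)-(12): the same differences on per-line FT magnitudes.
    Xr = np.abs(np.fft.fft(x, axis=1))        # FT of each row
    Xc = np.abs(np.fft.fft(x, axis=0))        # FT of each column
    f_r = np.mean(2 * Xr[1:-1, :] - Xr[:-2, :] - Xr[2:, :], axis=1)
    f_c = np.mean(2 * Xc[:, 1:-1] - Xc[:, :-2] - Xc[:, 2:], axis=0)
    F = np.sort((f_r + f_c) / 2)[::-1][:kf]   # Eq. (13): desc, top kf
    return np.concatenate([G, F])             # 9 + 10 = 19 dimensions

img = np.random.default_rng(1).integers(0, 256, size=(64, 64))
print(variation_features(img).shape)  # (19,)
```

The median filter smooths neighboring lines toward each other, so both the gradient and FT-coefficient variations shrink in a filtered image, which is what the classifier learns to separate.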

Feature Vector Composition

The feature set for the proposed variation-based MFD method is composed of 19 dimensions (19-D). The k in G_k is set to 1 to 9, the nine most significant values of G_k in descending order, and the k in F_k is set to 1 to 10, the 10 most significant values of F_k in descending order. Both ks define the feature vector length based on the variations of G and F.

The first 9-dimensional (9-D) part is related to the variation defined by the differences between the intensity values of neighboring line pairs; the nature and length of this feature are similar to the MFF OBC3 in terms of space-domain processing. The second 10-D part is related to the variation defined by the differences between the FT coefficients of neighboring line pairs; the nature and length of this feature are similar to the MFR AR2 in terms of frequency-domain processing.

The proposed complete MFD technique can be summarized as follows:

  1. Compute a gradient difference value of the neighboring line pairs by Eq. (10) and a coefficient difference value of the FT of the neighboring line pairs by Eq. (13).
  2. Compute the variations between the row and column neighboring line pairs in an image.
  3. Define the 19-D feature vector, which is composed of two parts: the nine most significant variation values, in descending order, from the gradient differences, and the 10 most significant variation values, in descending order, from the FT coefficient differences in step 2.
  4. Input the 19-D feature vector from step 3 to support vector machine (SVM) training to classify the median-filtered images and the other image types, whether unaltered or otherwise altered.

In this section, the experimental methodology is described, and the experimental results of the proposed variation-based MF detector are compared with those of the MFR AR and the MFF methods to verify the effectiveness of the proposed method. The MFR AR, which has a very short 10-D feature vector, and the MFF perform well among existing MF detectors, so they are the most suitable baselines for comparison.

Experimental Methodology

SVM training and testing were performed by inputting the constructed 19-D feature vector to an SVM classifier for the MF classification. C-SVM7 with a Gaussian kernel is employed as the classifier:

K(x_i, x_j) = exp(−γ‖x_i − x_j‖²),  γ > 0.  (14)

The classifier is trained with fourfold cross-validation in conjunction with a grid search for the best parameters C and γ in the multiplicative grid

(C, γ) ∈ {(2^i, 2^j) | 4i, 4j ∈ Z}.  (15)

The search step size of (i,j) is 0.25, and the resulting parameters are used to build the classifier model on the training set.
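The training setup of Eqs. (14) and (15) can be sketched with scikit-learn, whose SVC wraps LIBSVM.7 The data here are toy stand-ins, and only a coarse subset of the exponent grid is searched to keep the sketch fast:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in data: 80 samples of 19-D features with binary labels
# (median filtered vs. other); real features would come from Sec. 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 19))
y = (X[:, 0] > 0).astype(int)

# Multiplicative grid of Eq. (15); the paper steps the exponent by
# 0.25, but a coarser integer grid is used here for speed.
exponents = 2.0 ** np.arange(-2, 3)
param_grid = {"C": exponents, "gamma": exponents}

# C-SVM with the Gaussian (RBF) kernel of Eq. (14), tuned by a grid
# search with fourfold cross-validation, as in the text.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=4)
search.fit(X, y)
print(search.best_params_)
```

The chosen (C, γ) pair is then used to fit the final model on the full training set, exactly as the grid-search procedure above does internally via `refit`.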

The experiments prepared the following image database:

  • The BOWS2 image database (June 2016)8 consists of 10,000 downsampled and cropped natural grayscale images of a fixed size of 512×512 pixels.
  • The UCID image database9 consists of 1338 uncompressed color images of size 512×384 or 384×512  pixels.
  • The SAM image database (June 2016)10 contains 5150 uncompressed raw color images of size 256×256 pixels.

Where necessary, the images in these databases were converted to 8-bit grayscale before use in the experiments.

For the effective measurement of the proposed method, four test items are evaluated: the area under the curve (AUC), the classification ratio, the minimal average decision error (Pe), and PTP at PFP = 0.01 (PTP and PFP denote the true-positive and false-positive rates, respectively). The classified rate of the experimental AUC results is interpreted using the traditional academic point system; the definition of the AUC grade is described in Ref. 11. Pe is defined as

Pe = min[(PFP + 1 − PTP)/2].  (16)

BOWS2 10,000 images, UCID 1388 images, and SAM 5150 images are used for MFD, and the test image types are prepared as ORI (unaltered), MF3 (median window size: 3×3), MF5 (median window size: 5×5), MF35 (“35” denotes MF3 and MF5 together) composed of randomly selected images (each 5000 from the MF3 and MF5 of BOWS2, each 694 from the MF3 and MF5 of UCID, and each 2575 from the MF3 and MF5 of SAM), JPEG (QF = 90), downscaling (0.6), upscaling (1.5), and upscaling (2.0).
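Given detector scores, the AUC and the Pe of Eq. (16) can be computed from the ROC curve; the scores below are synthetic stand-ins:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic detector scores: positives (MF) score higher on average.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.5, 1.0, 200), rng.normal(0.0, 1.0, 200)])
labels = np.concatenate([np.ones(200), np.zeros(200)])

fpr, tpr, _ = roc_curve(labels, scores)  # sweep the decision threshold
pe = np.min((fpr + 1.0 - tpr) / 2.0)     # minimal average decision error, Eq. (16)
auc = roc_auc_score(labels, scores)      # area under the ROC curve
print(f"Pe={pe:.3f}, AUC={auc:.3f}")
```

Pe is simply the smallest average of the two error rates over all thresholds, so a perfect detector gives Pe = 0 and AUC = 1.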

Subsequently, the trained classifier model is used to perform classification on the testing set. Of the total of 16,538 images, 10,000 images are selected randomly for training, and the other 6538 images are used for testing. Before the SVM classifier is trained for MFD, the MFw images (w ∈ {3, 5, 35}) are prepared as the positive data, and the negative data form three groups, A, B, and C, as follows:

  1. Group A: the unaltered and the altered just once images:
    • ORI
    • JPG90
    • DN0.6
    • UP1.5
    • UP2.0
  2. Group B: postaltered after MF3:
    • MF3 + DN0.9
    • MF3 + DN0.6
    • MF3 + UP1.1
    • MF3 + UP1.5
    • MF3 + JPG90
  3. Group C: postaltered after MF5:
    • MF5 + DN0.9
    • MF5 + DN0.6
    • MF5 + UP1.1
    • MF5 + UP1.5
    • MF5 + JPG90

Experimental Results

The proposed method is compared with existing works: the MFR AR2 and the MFF3 methods. The experiments were conducted using MATLAB® (R2015a) in a PC environment (the 64-bit version of Windows 7, Intel® Core™ i7-5960X CPU @ 3.00 GHz, and 32 GB of DDR4 memory).

The distribution of the feature sets extracted from various image types by the proposed variation-based MF detector is shown in Fig. 1. The JPEG (QF = 90) image is very similar to the original image, but the remaining image types are quite different from each other. Figures 2 and 3 show the variations of the gradient differences and of the FT coefficient differences of the neighboring line pairs by Eqs. (10) and (13), respectively.

Fig. 1. Distribution of the features extracted from different types of sample images: (a) group A, (b) group B, and (c) group C.

Fig. 2. Variations of the gradient differences by Eq. (10).

Fig. 3. Variations of the FT coefficient differences by Eq. (13).

In Fig. 4, receiver operating characteristic (ROC) curves show the performance of each MFw versus the test image groups A, B, and C for the MFR AR method. The MFR AR method shows its best MFD performance against the UP images, while its performance on the DN, JPG, and ORI images is relatively lower.

Fig. 4. ROC curves of the MFR AR method. (a) MF3 versus test images of group A, (b) MF5 versus test images of group A, (c) MF35 versus test images of group A, (d) MF3 versus test images of group B, (e) MF5 versus test images of group B, (f) MF35 versus test images of group B, (g) MF3 versus test images of group C, (h) MF5 versus test images of group C, and (i) MF35 versus test images of group C.

In Fig. 5, ROC curves show the corresponding performance of the MFF method. The MFF method shows its best MFD performance on the JPEG and DN images, but its performance is relatively low on the UP images.

Fig. 5. ROC curves of the MFF method. (a) MF3 versus test images of group A, (b) MF5 versus test images of group A, (c) MF35 versus test images of group A, (d) MF3 versus test images of group B, (e) MF5 versus test images of group B, (f) MF35 versus test images of group B, (g) MF3 versus test images of group C, (h) MF5 versus test images of group C, and (i) MF35 versus test images of group C.

In Fig. 6, the proposed method exhibits excellent performance for all MFw versus almost all test image groups, except for MF5 versus ORI. Performance evaluation and theoretical analysis were conducted for MFD on the various altered image types.

Fig. 6. ROC curves of the proposed variation-based MFD method. (a) MF3 versus test images of group A, (b) MF5 versus test images of group A, (c) MF35 versus test images of group A, (d) MF3 versus test images of group B, (e) MF5 versus test images of group B, (f) MF35 versus test images of group B, (g) MF3 versus test images of group C, (h) MF5 versus test images of group C, and (i) MF35 versus test images of group C.

Table 1 shows the experimental results of MFw versus the test image groups A, B, and C, presented as (a), (b), and (c), respectively. The table reports four test terms: AUC, the classification ratio, Pe, and PTP at PFP = 0.01. The AUC and the classification ratio are both above 0.9, Pe ranges from 0.003 to 0.027, and PTP at PFP = 0.01 ranges from 0.965 to 0.996.

Table 1. Performance comparison between the MFR AR, the MFF, and the proposed method. (The best result for each training-testing pair is displayed in bold type.) Experimental result items: 1: AUC, 2: classification ratio, 3: Pe, 4: PTP at PFP = 0.01.

Moreover, the ROC curves of the proposed method for the many test image types lie relatively close to each other, which indicates the more consistent classification performance of the proposed method. Overall, the performance is excellent on the unaltered (original), JPEG (QF = 90) compressed, downscaled (0.6), and upscaled (1.5 and 2.0) images for the MF3, MF5, and MF35 detections. Despite the short 19-D feature vector of the proposed variation-based MFD method, the AUC results approach 1. Thus, it is confirmed that the grade of the proposed algorithm is rated as “Excellent (A).” [The classified rate of the experimental AUC results is interpreted using the traditional academic point system11 (June 2016).] In this evaluation, the general interpretation of the AUC is used for each training-testing pair.

Subsequently, the testing of MFD on low-resolution images is examined. A small image window size is required for detecting forgeries in a median-filtered image or one modified with JPEG pre- and/or postcompression. An example of a cut-and-paste forgery image is shown in Fig. 7. An unaltered image (window) is cut, and a median-filtered image (house) is pasted onto the cut area (white region) of the unaltered image (both unaltered images come from the BOWS2 database), forming a composite image, which was then JPEG postcompressed with a quality factor of 90, rotated counterclockwise by 5 deg, and corrupted with salt-and-pepper noise of 0.05 density. Figures 8-10 show the blocks detected as MF by the MFR AR, the MFF, and the proposed method, respectively. The detected blocks that are median filtered (the true positives) are marked in red, and the remaining blocks (the false alarms) are marked in blue. (The color version of the paper is available online.) In Figs. 8-10, the left column (a, c, e, and g) is examined with a 32×32 block size, and the right column (b, d, f, and h) with a 64×64 block size. The first row (a and b) shows the detection results for MF3 versus unaltered images, the second row (c and d) for MF3 + JPG90 versus JPG90 images, the third row (e and f) for MF3 versus unaltered-then-rotated images, and the last row (g and h) for MF3 versus unaltered-then-noisy images.
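The blockwise localization described above can be sketched as follows; the `detector` interface and the variance-based toy detector are ours, whereas the paper scores each block with the trained SVM over its 19-D feature:

```python
import numpy as np

def local_mfd_map(img, detector, block=32):
    """Run an MF detector over nonoverlapping blocks and return a
    boolean detection map; `detector` is any callable that scores
    one block (a hypothetical interface, not the paper's code)."""
    h, w = img.shape
    out = np.zeros((h // block, w // block), dtype=bool)
    for bi in range(h // block):
        for bj in range(w // block):
            patch = img[bi * block:(bi + 1) * block,
                        bj * block:(bj + 1) * block]
            out[bi, bj] = detector(patch)
    return out

# Toy detector: flag low-variance blocks. This is a stand-in only;
# the paper classifies each block with the SVM of Sec. 4.1.
img = np.random.default_rng(2).integers(0, 256, size=(128, 128)).astype(float)
mask = local_mfd_map(img, lambda b: b.var() < 10.0, block=32)
print(mask.shape)  # (4, 4)
```

Smaller blocks localize the pasted region more finely but give the detector less evidence per block, which is the 32×32 versus 64×64 trade-off examined in Figs. 8-10.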

Fig. 7. Cut-and-paste forgery image example.

Fig. 8. Local MFD results using the MFR AR method.

Fig. 9. Local MFD results using the MFF method.

Fig. 10. Local MFD results using the proposed method.

In Fig. 8, the MFR AR method does not perform well at the 32×32 block size, and it performs only slightly better for MF3 versus unaltered images at the 64×64 block size. In Fig. 9, the MFF method performs well for MF3 versus unaltered and rotated images at both the 32×32 and 64×64 block sizes; however, it does not provide good results under JPEG postcompression. In Fig. 10, the proposed method with its 19-D feature vector performs best for MF3 versus unaltered images, under JPEG postcompression, and for the rotated and noisy versions, at both the 32×32 and 64×64 block sizes.

Conclusions

This paper proposed a variation-based MFD method in which the feature vector is composed of two kinds of variations from the space and frequency domains of an image. One part is computed from the gradient differences between the neighboring line pairs, and the other from the FT coefficient differences.

These variations improve the experimental MFD results. To the best of our knowledge, this is the first complete solution based on the variation between the neighboring line pairs of a digital image, and it will serve as additional research material for MFD. Future work should consider a performance evaluation on smaller altered images. Finally, the proposed variation-based method can be applied to other forensic problems, as the previous MFD methods have been.

This work was supported by the research grant (322386) of Chosun University, Republic of Korea, in 2015.

References

1. Rhee K. H., “Median filtering detection using variation of neighboring line pairs for image forensic,” in IEEE 5th Int. Conf. on Consumer Electronics-Berlin (ICCE-Berlin), pp. 103-107 (2015).
2. Kang X. et al., “Robust median filtering forensics using an autoregressive model,” IEEE Trans. Inf. Forensics Secur. 8(9), 1456-1468 (2013).
3. Yuan H., “Blind forensics of median filtering in digital images,” IEEE Trans. Inf. Forensics Secur. 6(4), 1335-1345 (2011).
4. Pevný T., Bas P., and Fridrich J., “Steganalysis by subtractive pixel adjacency matrix,” IEEE Trans. Inf. Forensics Secur. 5(2), 215-224 (2010).
5. Zhang Y. et al., “Revealing the traces of median filtering using high-order local ternary patterns,” IEEE Signal Process. Lett. 21(3), 275-279 (2014).
6. Kay S. M., Modern Spectral Estimation: Theory and Application, Prentice-Hall, Englewood Cliffs, New Jersey (1988).
7. Chang C. C. and Lin C. J., “LIBSVM: a library for support vector machines,” https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/ (24 April 2016).
8. ECRYPT, “Break our watermarking system,” 2010, http://bows2.ec-lille.fr/ (24 April 2016).
9. Schaefer G. and Stich M., “UCID—an uncompressed color image database,” Proc. SPIE 5307, 472-480 (2004).
10. Liu Q. and Chen Z., “Seam-carving image database,” 2014, http://www.shsu.edu/~qxl005/New/Downloads/index.html (24 April 2016).
11. Tape T. G., “The area under an ROC curve,” http://gim.unmc.edu/dxtests/roc3.htm (24 April 2016).

Kang Hyeon Rhee is with the Department of Electronics Engineering, Chosun University, Gwangju, Republic of Korea. His current research interests include embedded system design related to multimedia fingerprinting/forensics. He is on the Committee of the LSI Design Contest in Okinawa, Japan. He is also the recipient of awards such as the Haedong Prize from the Haedong Science and Culture Juridical Foundation, Korea, which he received in 2002 and 2009.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
