Median filtering detection using variation of neighboring line pairs for image forensics

27 October 2016
Abstract
Attention to tampering by median filtering (MF) has recently increased in digital image forensics. For MF detection (MFD), this paper presents a feature vector extracted from two kinds of variations between neighboring line pairs in the row and column directions. One variation is defined by the gradient difference of the intensity values between neighboring line pairs, and the other by the coefficient difference of the Fourier transform (FT) between neighboring line pairs. The constructed 19-dimensional feature vector is composed of two parts: a 9-dimensional part extracted from the space domain of an image and a 10-dimensional part extracted from the frequency domain. The feature vector is trained in a support vector machine classifier for MFD in altered images. As a result, the area under the receiver operating characteristic curve (AUC), obtained from the sensitivity (PTP: the true-positive rate) and 1-specificity (PFP: the false-positive rate), is above 0.985, and the classification ratios are also above 0.979. Pe (the minimal average decision error) ranges from 0 to 0.024, and PTP at PFP = 0.01 ranges from 0.965 to 0.996. It is confirmed that the proposed variation-based MF detection method is rated as "Excellent (A)" because its AUC is above 0.9.

1.

Introduction

In image alterations for forgery, tampering uses compression, filtering, averaging, rotation, mosaic editing, and up/downscaling. In particular, median filtering (MF) is preferred by some forgers because it is a nonlinear filter based on order statistics. Furthermore, MF detection can identify images altered with MF.1 Consequently, prior studies2–5 emphasized that an MF detector is a significant forensic tool for recovering the processing history of a forged image.

To extract the 10 features for MF detection (MFD), Kang et al.2 obtained autoregressive (AR) coefficients as a feature vector via an AR model that analyzes the median filter residual (MFR AR), which is the difference between the values of the original image and those of its median-filtered version. Analyzing an image's MFR suppresses image content that may interfere with MFD.

Yuan3 proposed the median filtering forensics (MFF) feature as a combination of five feature subsets based on order statistics and gray levels to capture the local dependence artifact introduced by MF, because the two-dimensional median filter affects both the order and the quantity of the gray levels in an image region. The MFF method employs five entries for the feature set extraction, which yields a set of 44 features from an image. These subsets include features such as the distribution of the block median pixel value and the distribution of the number of distinct gray levels within a window. The experimental results of the MFF in Ref. 3 achieve comparable or better performance than the subtractive pixel adjacency matrix (SPAM)-based method4 in the case of high- and medium-quality-factor JPEG postcompression and low-resolution JPEG images. However, as with Kirchner and Fridrich's technique in Ref. 4, the performance of Yuan's technique in Ref. 3 decreases as the JPEG quality factor is lowered or as the examined image size shrinks.

The computing time to extract the MFF feature vector from the combined entries is long, and the performance of both the SPAM-based and the MFF-based detectors degrades with the size of the analyzed up- or downscaled images. Thus, a more reliable method is needed to detect MF in the case of up- and downscaling, which is desirable in practical applications.

In this paper, a new variation-based MFD method is proposed in which the feature vector is constructed using two kinds of variations between the neighboring line pairs in a digital image. One is extracted in the space domain, and the other is extracted in the frequency domain.

The rest of the paper is organized as follows. Section 2 briefly presents the theoretical background of the MFR AR and the MFF methods. In Sec. 3, the variation of neighboring line pairs is computed for the extraction of the feature vector, and the composition of the new feature vector is described. The experimental results of the proposed method are shown in Sec. 4. The performance evaluation is compared with the MFR AR and the MFF, and those results are followed by some discussion. Finally, conclusions are drawn.

2.

Prior Median Filtering Detectors

2.1.

Feature Set for Median Filtering Detection

The median pixel values obtained from overlapping filter windows are related to one another because the overlapping windows share several pixels in common. Accordingly, most state-of-the-art MF detectors2–5 used feature vectors of different lengths and employed various methods for the extraction of the feature set. As the feature vector is lengthened to increase the classification ratio between altered and unaltered images, both the extraction time of the feature set and the training-testing time increase. In Ref. 5, kernel principal component analysis is used to reduce the length of the feature vector; however, this requires additional computational time. Most MF detectors employ conventional extraction methods processed in the space/frequency domain or based on statistical theory.

2.2.

Median Filter Residual Method

Kang et al.2 used a 10-dimensional (10-D) feature vector, which was extracted from the AR coefficients of the difference between the original image and its MF image; they computed an MFR. In Ref. 2, the authors attempted to reduce interference from an image's edge content and from the block artifacts of JPEG compression, and they proposed gathering detection features from the MFR.

The difference between the original image and its MF image is used to construct the AR model. This difference is referred to as the MFR, which is formally defined as

Eq. (1)

d(i,j) = med_w[y(i,j)] - y(i,j) = z(i,j) - y(i,j),
where (i,j) is a pixel coordinate and w is an MF window size.
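As an illustration, the MFR of Eq. (1) can be sketched in a few lines of Python (standard library only); the 3×3 window, the border replication, and the toy image below are illustrative assumptions, not the exact implementation of Ref. 2.

```python
# Sketch of the median filter residual (MFR) of Eq. (1): d = med_w[y] - y.
from statistics import median

def median_filter(y, w=3):
    """Apply a w x w median filter with replicated borders (an assumption)."""
    h, wd = len(y), len(y[0])
    r = w // 2
    z = [[0] * wd for _ in range(h)]
    for i in range(h):
        for j in range(wd):
            window = [y[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), wd - 1)]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            z[i][j] = median(window)
    return z

def mfr(y, w=3):
    """Median filter residual: d(i, j) = z(i, j) - y(i, j)."""
    z = median_filter(y, w)
    return [[z[i][j] - y[i][j] for j in range(len(y[0]))] for i in range(len(y))]

# Toy 3x3 "image"; the residual is zero wherever y already equals its local median.
y = [[10, 80, 10],
     [80, 10, 80],
     [10, 80, 10]]
d = mfr(y)
```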

Subsequently, AR coefficients are computed as

Eq. (2)

a_k^(r) = AR[mean(d^(r))],

Eq. (3)

a_k^(c) = AR[mean(d^(c))],

Eq. (4)

a_k = (a_k^(r) + a_k^(c))/2,
where r and c denote the row and column directions, respectively, k is the AR order number, 1 ≤ k ≤ p, and p is the maximum order number. In Eq. (4), the AR coefficients in both directions are averaged to obtain a single, one-dimensional AR model.
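The AR fitting of Eqs. (2)–(4) can be sketched as follows; this pure-Python Yule-Walker solver is limited to order p = 2 (solved by Cramer's rule), and the residual sequences are made-up toy data rather than outputs of Eq. (1).

```python
# Illustrative sketch of Eqs. (2)-(4): fit a low-order AR model to the mean
# residual in each direction, then average the row and column coefficients.

def autocorr(x, lag):
    """Biased sample autocorrelation at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n)) / n

def ar2_coeffs(x):
    """AR(2) coefficients from the Yule-Walker equations (Cramer's rule)."""
    r0, r1, r2 = (autocorr(x, k) for k in (0, 1, 2))
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return [a1, a2]

# Toy stand-ins for mean(d^(r)) and mean(d^(c)); not from a real image.
d_row = [0.0, 1.0, 0.5, -0.5, -1.0, -0.2, 0.8, 0.9, -0.3, -0.7]
d_col = [0.2, 0.8, 0.4, -0.6, -0.9, -0.1, 0.7, 1.0, -0.2, -0.8]

a_r = ar2_coeffs(d_row)
a_c = ar2_coeffs(d_col)
a = [(ar + ac) / 2 for ar, ac in zip(a_r, a_c)]   # Eq. (4): direction average
```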

The AR coefficients then model the difference image according to the following:

Eq. (5)

d(i,j) = Σ_{q=1}^{p} a_q^(r) d(i, j-q) + ε^(r)(i,j),

Eq. (6)

d(i,j) = Σ_{q=1}^{p} a_q^(c) d(i-q, j) + ε^(c)(i,j),
where ε^(r)(i,j) and ε^(c)(i,j) are the prediction errors6 in the row and column directions, respectively, and q indexes the surrounding range of (i,j).

2.3.

Median Filtering Forensics Method

Yuan3 proposed detecting MF by measuring the relationships among the pixels within a 3×3 pixel window of an image. The author observed that, in a median-filtered image, the gray level of the block center should occur more frequently within the block after MF. For this reason, the feature sets include the distribution of the block median pixel value and the distribution of the number of distinct gray levels within a window. Moreover, the author proposed a median filter detector that collects the MFF blockwise, based on the statistics of the pixel values and their distribution within the block. A set of five features in the MFF is extracted from each 3×3 pixel nonoverlapping block:

  • 1. Distribution of the block median (DBM), denoted with hDBM, accounts for the fact that, in median-filtered images, gray levels in a small block tend to be equal to the block median.

  • 2. Occurrence of the block-center gray level (OBC), denoted with hOBC, accounts for the fact that the gray level of the block center should occur more frequently in the block after MF.

  • 3. Quantity of gray levels in a block (QGL), denoted with hQGL: since the median filter reduces noise without introducing new gray levels, it is likely that, after filtering, the number of different gray levels in each block decreases.

  • 4. Distribution of the block-center gray level in the sorted gray levels (DBC), denoted with hDBC, considers the frequency of the block-center gray level in the sorted gray levels.

  • 5. First occurrence of the block-center gray level in the sorted gray levels (FBC), denoted with hFBC, simply considers the first occurrence of the block-center gray level in sorted gray level.

A direct combination of these feature subsets results in a 44-dimensional feature vector (note that h^(5)_DBC and h^(5)_DBM are equivalent)

h_MFF = (h_DBM, h_OBC, h_QGL, h_DBC, h_FBC),
which is used for MFF.
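Two of the five subsets can be sketched in Python as follows: a DBM-like count (pixels equal to the block median) and the QGL (distinct gray levels per block). The 10-bin histograms and the toy image are simplified stand-ins for Yuan's exact definitions, not a reproduction of them.

```python
# Sketch of two MFF-style feature subsets over non-overlapping 3x3 blocks.
from statistics import median

def blocks_3x3(img):
    """Yield each non-overlapping 3x3 block as a flat list of 9 pixels."""
    for bi in range(0, len(img) - 2, 3):
        for bj in range(0, len(img[0]) - 2, 3):
            yield [img[bi + di][bj + dj] for di in range(3) for dj in range(3)]

def mff_subset(img):
    h_dbm = [0] * 10   # histogram: pixels per block equal to the block median
    h_qgl = [0] * 10   # histogram: number of distinct gray levels per block
    for b in blocks_3x3(img):
        m = median(b)
        h_dbm[b.count(m)] += 1
        h_qgl[len(set(b))] += 1
    return h_dbm, h_qgl

# Toy image: left block is constant (median-filter-like), right block is not.
img = [[5, 5, 5, 1, 2, 3],
       [5, 5, 5, 4, 5, 6],
       [5, 5, 5, 7, 8, 9]]
h_dbm, h_qgl = mff_subset(img)
```

In the toy image, all nine pixels of the left block equal the block median, while the right block has nine distinct gray levels, so the two blocks land in opposite histogram bins.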

Different features of the MFF are then combined heuristically to produce a new index f, and MFD is obtained by simply thresholding f

Eq. (7)

f = [h^(5)_DBM · h^(2)_OBC · h^(6)_QGL · (h^(3)_DBC + h^(7)_DBC - h^(2)_DBC - h^(8)_DBC) · h^(3)_FBC] / [h^(1)_OBC · h^(9)_QGL · (h^(2)_DBC + h^(8)_DBC - h^(1)_DBC - h^(9)_DBC) · h^(2)_FBC · h^(9)_FBC].
A binary decision is then made according to the index f to determine whether the image has undergone MF.

3.

Proposed Median Filtering Detection Algorithm

3.1.

Variation of Neighboring Line Pairs

In an image x, the intensity gradients between neighboring line pairs in the row and column directions are defined as G^(r) and G^(c),1 respectively, as follows:

Eq. (8)

G^(r)(i) = 2·x(i) - x(i-1) - x(i+1),

Eq. (9)

G^(c)(j) = 2·x(j) - x(j-1) - x(j+1),

Eq. (10)

G_k = desc[mean(G^(r)) + mean(G^(c))]/2,
where r and c denote the row and column directions, respectively, and k, mean, and desc denote the feature dimension length, the average, and descending order, respectively. This averages G in both directions to obtain a single dimension.
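A minimal Python sketch of Eqs. (8)–(10) follows; the use of absolute differences and the element-wise pairing of the row and column variations are assumptions made here, since the paper does not spell out these details.

```python
# Sketch of Eqs. (8)-(10): second-difference variation between neighboring
# row pairs and column pairs, averaged and sorted in descending order.

def line_variation(lines):
    """mean |2*x(i) - x(i-1) - x(i+1)| for each interior line i (abs is an assumption)."""
    out = []
    for i in range(1, len(lines) - 1):
        diffs = [abs(2 * a - b - c)
                 for a, b, c in zip(lines[i], lines[i - 1], lines[i + 1])]
        out.append(sum(diffs) / len(diffs))
    return out

def gradient_features(img, k=9):
    rows = img
    cols = [list(c) for c in zip(*img)]        # transpose for column-pair variation
    g_r = line_variation(rows)
    g_c = line_variation(cols)
    n = min(len(g_r), len(g_c))
    g = sorted(((g_r[i] + g_c[i]) / 2 for i in range(n)), reverse=True)
    return g[:k]                                # Eq. (10): top-k values, descending

# Toy 4x4 image with a bright center block.
img = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
gk = gradient_features(img, k=2)
```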

Furthermore, the row and column differences of the Fourier transform (FT) coefficients (FTcoeff) between neighboring line pairs are defined as F^(r) and F^(c), respectively, in the same manner as Eqs. (8)–(10), as follows:

Eq. (11)

F^(r)(i) = 2·FTcoeff[x(i)] - FTcoeff[x(i-1)] - FTcoeff[x(i+1)],

Eq. (12)

F^(c)(j) = 2·FTcoeff[x(j)] - FTcoeff[x(j-1)] - FTcoeff[x(j+1)],

Eq. (13)

F_k = desc[mean(F^(r)) + mean(F^(c))]/2.
Also, as in Eq. (10), F is averaged in both directions to obtain a single dimension.
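Eqs. (11)–(13) can be sketched in the same way, replacing intensities with DFT magnitudes per line; the O(n²) standard-library DFT and the use of magnitudes (rather than complex coefficients) are assumptions of this sketch.

```python
# Sketch of Eqs. (11)-(13): neighboring-line variation of per-line DFT magnitudes.
import cmath

def dft_mag(x):
    """Naive O(n^2) DFT magnitude of one line (illustrative; use an FFT in practice)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def ft_features(img, k=10):
    def variation(lines):
        specs = [dft_mag(l) for l in lines]
        out = []
        for i in range(1, len(specs) - 1):
            diffs = [abs(2 * a - b - c)
                     for a, b, c in zip(specs[i], specs[i - 1], specs[i + 1])]
            out.append(sum(diffs) / len(diffs))
        return out
    f_r = variation(img)
    f_c = variation([list(c) for c in zip(*img)])   # column direction
    n = min(len(f_r), len(f_c))
    f = sorted(((f_r[i] + f_c[i]) / 2 for i in range(n)), reverse=True)
    return f[:k]                                     # Eq. (13): top-k, descending

img = [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0]]
fk = ft_features(img, k=2)
```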

3.2.

Feature Vector Composition

The feature set for the proposed variation-based MFD method is composed of 19 dimensions (19-D). The k in G_k is set to 1 to 9, the nine most significant values of G_k in descending order, and the k in F_k is set to 1 to 10, the 10 most significant values of F_k in descending order. Both ks define the feature vector length based on the variations of G and F.

The first, 9-dimensional (9-D) part is related to the variation given by the differences of intensity values between neighboring line pairs; the nature and length of this feature are similar to the MFF OBC3 owing to the space domain processing. The second, 10-D part is related to the variations given by the differences of FT coefficient values between neighboring line pairs; the nature and length of this feature are similar to the MFR AR2 owing to the frequency domain processing.

The proposed complete MFD technique can be summarized as follows:

  • Step 1: A gradient difference value of the neighboring line pairs by Eq. (10) and a coefficient difference value of the FT of the neighboring line pairs by Eq. (13) are computed.

  • Step 2: Compute the variations between the row and column neighboring line pairs in an image.

  • Step 3: Define the 19-D feature vector, which is composed of two parts: the nine most significant variation values, in descending order, from the gradient differences, and the 10 most significant variation values, in descending order, from the FT coefficient differences of step 2.

  • Step 4: Input the 19-D feature vector of step 3 to support vector machine (SVM) training to classify median-filtered images against the other image types, unaltered or altered.

4.

Experimental Results

In this section, the experimental methodology is described, and the experimental results of the proposed variation-based MF detector are compared with those of the MFR AR and MFF methods to verify the effectiveness of the proposed method. The MFR AR, which has a very short 10-D feature vector, and the MFF show good performance among existing MF detectors, so they are the most suitable baselines for comparison.

4.1.

Experimental Methodology

SVM training and testing were performed by inputting the constructed 19-D feature vector to an SVM classifier for the training of the MF classification. C-SVM7 with a Gaussian kernel is employed as the classifier

Eq. (14)

K(x_i, x_j) = exp(-γ ||x_i - x_j||^2), γ > 0.
Moreover, the classifier is trained with fourfold cross-validation in conjunction with a grid search for the best parameters C and γ over the multiplicative grid

Eq. (15)

(C, γ) ∈ {(2^i, 2^j) | 4i ∈ Z, 4j ∈ Z}.
The search step size of (i, j) is 0.25, and the resulting parameters are used to obtain the classifier model on the training set.
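The grid of Eq. (15) and the kernel of Eq. (14) can be sketched as follows; the exponent range [-4, 4] is an illustrative choice, as the paper does not state the search bounds.

```python
# Sketch of the (C, gamma) grid of Eq. (15) with exponent step 0.25,
# plus the Gaussian (RBF) kernel of Eq. (14).
import math

def param_grid(lo=-4.0, hi=4.0, step=0.25):
    """All (C, gamma) = (2^i, 2^j) with exponents on a step-0.25 lattice."""
    exps = [lo + step * t for t in range(int((hi - lo) / step) + 1)]
    return [(2.0 ** i, 2.0 ** j) for i in exps for j in exps]

def rbf_kernel(x, y, gamma):
    """K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2), Eq. (14)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

grid = param_grid()                                  # 33 x 33 candidate pairs
k = rbf_kernel([1.0, 0.0], [0.0, 0.0], gamma=1.0)    # exp(-1)
```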

The experiments used the following image databases:

  • The BOWS2 image database (June 2016)8 consists of 10,000 downsampled and cropped natural grayscale images of a fixed size of 512×512  pixels.

  • The UCID image database9 consists of 1338 uncompressed color images of size 512×384 or 384×512  pixels.

  • The SAM image database (June 2016)10 is a raw image database containing 5150 uncompressed raw color images of size 256×256  pixels.

Where necessary, the images were converted to 8-bit grayscale before use in the experiments.

For the effective measurement of the proposed method, four kinds of test items are used in the experiment: the area under the curve (AUC), the classification ratio, the minimal average decision error (Pe), and PTP at PFP = 0.01 (PTP and PFP denote the true-positive and false-positive rates, respectively). The classified rate of the experimental AUC results is interpreted using the traditional academic point system; the definition of the AUC grade is described in Ref. 11. Pe is defined as

Eq. (16)

P_e = min[(P_FP + 1 - P_TP)/2].
For MFD, 10,000 BOWS2 images, 1388 UCID images, and 5150 SAM images are used, and the test image types are prepared as ORI (unaltered), MF3 (median window size 3×3), MF5 (median window size 5×5), MF35 ("35" denotes MF3 and MF5 combined, composed of randomly selected images: 5000 each from the MF3 and MF5 of BOWS2, 694 each from the MF3 and MF5 of UCID, and 2575 each from the MF3 and MF5 of SAM), JPEG (QF = 90), downscaling (0.6), upscaling (1.5), and upscaling (2.0), respectively.
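The minimal average decision error of Eq. (16) can be computed from sampled ROC operating points as sketched below; the ROC points are a made-up example, not results from Table 1.

```python
# Sketch of Eq. (16): P_e = min[(P_FP + 1 - P_TP) / 2] over ROC operating points.

def minimal_average_error(roc_points):
    """roc_points: list of (P_FP, P_TP) pairs sampled along the ROC curve."""
    return min((p_fp + 1.0 - p_tp) / 2.0 for p_fp, p_tp in roc_points)

# Toy ROC curve; the best trade-off here is at (0.05, 0.97).
roc = [(0.00, 0.00), (0.01, 0.90), (0.05, 0.97), (0.20, 1.00), (1.00, 1.00)]
pe = minimal_average_error(roc)   # (0.05 + 1 - 0.97) / 2 = 0.04
```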

Subsequently, the trained classifier model is used to perform classification on the testing set. Among the total of 16,538 images, 10,000 images are selected randomly for training, and the other 6538 images are used for testing. Before the SVM classifier is trained for MFD, the MFw (w ∈ {3, 5, 35}) images are prepared as the positive data, and the negative data comprise three groups, A, B, and C, as follows:

  • (1) Group A: the unaltered and the altered just once images:

    • i. ORI

    • ii. JPG90

    • iii. DN0.6

    • iv. UP1.5

    • v. UP2.0

  • (2) Group B: postaltered after MF3:

    • i. MF3 + DN0.9

    • ii. MF3 + DN0.6

    • iii. MF3 + UP1.1

    • iv. MF3 + UP1.5

    • v. MF3 + JPG90

  • (3) Group C: postaltered after MF5:

    • i. MF5 + DN0.9

    • ii. MF5 + DN0.6

    • iii. MF5 + UP1.1

    • iv. MF5 + UP1.5

    • v. MF5 + JPG90

4.2.

Experimental Results

The proposed method is compared with existing works: the MFR AR2 and the MFF3 methods. For the proposed method, the experiments were conducted using MATLAB® (R2015a) on a PC environment (64-bit Windows 7, Intel® Core™ i7-5960X CPU @ 3.00 GHz, and 32 GB DDR4 memory).

The distribution of the feature sets extracted from various images by the proposed variation-based MF detector is shown in Fig. 1. The JPEG (QF = 90) image is very similar to the original image, but the remaining images are quite different from each other. Figures 2 and 3 show the variations of the gradient differences and the FT coefficient differences of the neighboring line pairs by Eqs. (10) and (13), respectively.

Fig. 1

Distribution of the features extracted from different types of sample images: (a) group A, (b) group B, and (c) group C.


Fig. 2

Variations of the gradient differences by Eq. (10).


Fig. 3

Variations of the FT coefficient differences by Eq. (13).


In Fig. 4, receiver operating characteristic (ROC) curves show the performance of MFw versus the test image groups A, B, and C for the MFR AR method. The MFR AR method shows its best MFD performance against UP, while its performance for DN, JPG, and ORI is relatively lower.

Fig. 4

ROC curves of the MFR AR method. (a) MF3 versus test images of group A, (b) MF5 versus test images of group A, (c) MF35 versus test images of group A, (d) MF3 versus test images of group B, (e) MF5 versus test images of group B, (f) MF35 versus test images of group B, (g) MF3 versus test images of group C, (h) MF5 versus test images of group C, and (i) MF35 versus test images of group C.


In Fig. 5, ROC curves show the performance of the MFF method. The MFF method shows its best MFD performance for JPEG and DN, but its performance is relatively low for UP.

Fig. 5

ROC curves of the MFF method. (a) MF3 versus test images of group A, (b) MF5 versus test images of group A, (c) MF35 versus test images of group A, (d) MF3 versus test images of group B, (e) MF5 versus test images of group B, (f) MF35 versus test images of group B, (g) MF3 versus test images of group C, (h) MF5 versus test images of group C, and (i) MF35 versus test images of group C.


In Fig. 6, the proposed method exhibits excellent performance on all MFw versus almost all test image groups, except for MF5 versus ORI. Performance evaluation and theoretical analysis were conducted for MFD on the various altered image types.

Fig. 6

ROC curves of the proposed variation-based MFD method. (a) MF3 versus test images of group A, (b) MF5 versus test images of group A, (c) MF35 versus test images of group A, (d) MF3 versus test images of group B, (e) MF5 versus test images of group B, (f) MF35 versus test images of group B, (g) MF3 versus test images of group C, (h) MF5 versus test images of group C, and (i) MF35 versus test images of group C.


Table 1 shows the experimental results for MFw and the test image groups A, B, and C, which are presented as (a), (b), and (c), respectively. The table reports four kinds of test items: AUC, the classification ratio, Pe, and PTP at PFP = 0.01. As a result, the AUC and the classification ratio are both above 0.9. Pe ranges from 0.003 to 0.027, and PTP at PFP = 0.01 ranges from 0.965 to 0.996.

Table 1

Performance comparison between the MFR AR, the MFF, and the proposed method. (The best result for each training–testing pair is displayed in bold type.) No. (experimental result item) 1: AUC, 2: classification ratio, 3: Pe, 4: PTP at PFP = 0.01.

(a) Test images: group A

| MFD method | MFw | No. | ORI | JPG90 | DN0.6 | UP1.5 | UP2.0 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed 19-D | MF3 | 1 | 1.000 | 0.985 | 1.000 | 1.000 | 0.994 |
| | | 2 | 1.000 | 0.890 | 1.000 | 0.999 | 0.953 |
| | | 3 | 0.000 | 0.023 | 0.000 | 0.001 | 0.011 |
| | | 4 | 1.000 | 0.971 | 1.000 | 0.999 | 0.988 |
| | MF5 | 1 | 0.989 | 0.987 | 0.990 | 0.992 | 0.990 |
| | | 2 | 0.914 | 0.865 | 0.954 | 0.927 | 0.919 |
| | | 3 | 0.017 | 0.022 | 0.014 | 0.017 | 0.023 |
| | | 4 | 0.976 | 0.971 | 0.981 | 0.981 | 0.967 |
| | MF35 | 1 | 1.000 | 0.987 | 1.000 | 0.996 | 0.988 |
| | | 2 | 1.000 | 0.896 | 1.000 | 0.996 | 0.907 |
| | | 3 | 0.000 | 0.022 | 0.000 | 0.006 | 0.023 |
| | | 4 | 1.000 | 0.975 | 1.000 | 0.994 | 0.976 |
| MFR AR 10-D | MF3 | 1 | 0.854 | 0.868 | 0.912 | 0.999 | 0.971 |
| | | 2 | 0.137 | 0.145 | 0.283 | 0.980 | 0.557 |
| | | 3 | 0.202 | 0.187 | 0.139 | 0.008 | 0.079 |
| | | 4 | 0.714 | 0.656 | 0.729 | 0.986 | 0.878 |
| | MF5 | 1 | 0.934 | 0.959 | 0.901 | 0.994 | 0.899 |
| | | 2 | 0.365 | 0.361 | 0.385 | 0.873 | 0.216 |
| | | 3 | 0.118 | 0.083 | 0.140 | 0.028 | 0.158 |
| | | 4 | 0.807 | 0.885 | 0.700 | 0.957 | 0.750 |
| | MF35 | 1 | 0.857 | 0.889 | 0.854 | 0.996 | 0.919 |
| | | 2 | 0.180 | 0.183 | 0.250 | 0.908 | 0.345 |
| | | 3 | 0.204 | 0.178 | 0.193 | 0.025 | 0.149 |
| | | 4 | 0.756 | 0.752 | 0.592 | 0.965 | 0.800 |
| MFF 44-D | MF3 | 1 | 1.000 | 1.000 | 1.000 | 0.996 | 0.985 |
| | | 2 | 1.000 | 1.000 | 1.000 | 0.897 | 0.743 |
| | | 3 | 0.000 | 0.000 | 0.000 | 0.013 | 0.031 |
| | | 4 | 0.998 | 0.997 | 0.999 | 0.940 | 0.923 |
| | MF5 | 1 | 1.000 | 1.000 | 1.000 | 0.992 | 0.988 |
| | | 2 | 1.000 | 1.000 | 1.000 | 0.845 | 0.803 |
| | | 3 | 0.000 | 0.000 | 0.000 | 0.18 | 0.027 |
| | | 4 | 0.998 | 0.997 | 0.999 | 0.994 | 0.929 |
| | MF35 | 1 | 1.000 | 1.000 | 1.000 | 0.997 | 0.988 |
| | | 2 | 1.000 | 1.000 | 1.000 | 0.956 | 0.813 |
| | | 3 | 0.000 | 0.000 | 0.000 | 0.013 | 0.027 |
| | | 4 | 0.998 | 0.997 | 0.999 | 0.938 | 0.927 |

(b) Test images: group B

| MFD method | MFw | No. | MF3+DN0.9 | MF3+DN0.6 | MF3+UP1.1 | MF3+UP1.5 | MF3+JPG90 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed 19-D | MF3 | 1 | 0.992 | 0.999 | 1.000 | 1.000 | 0.996 |
| | | 2 | 0.975 | 0.981 | 1.000 | 0.999 | 0.968 |
| | | 3 | 0.009 | 0.002 | 0.000 | 0.000 | 0.008 |
| | | 4 | 0.988 | 0.996 | 1.000 | 1.000 | 0.985 |
| | MF5 | 1 | 0.998 | 0.994 | 0.991 | 0.995 | 0.988 |
| | | 2 | 0.975 | 0.955 | 0.969 | 0.970 | 0.939 |
| | | 3 | 0.004 | 0.012 | 0.012 | 0.008 | 0.017 |
| | | 4 | 0.996 | 0.985 | 0.990 | 0.992 | 0.980 |
| | MF35 | 1 | 0.997 | 0.998 | 0.996 | 0.993 | 1.000 |
| | | 2 | 0.974 | 0.971 | 0.967 | 0.954 | 0.993 |
| | | 3 | 0.004 | 0.004 | 0.007 | 0.014 | 0.003 |
| | | 4 | 0.998 | 0.994 | 0.990 | 0.988 | 0.996 |
| MFR AR 10-D | MF3 | 1 | 0.899 | 0.919 | 0.943 | 0.997 | 0.894 |
| | | 2 | 0.219 | 0.322 | 0.383 | 0.952 | 0.181 |
| | | 3 | 0.170 | 0.131 | 0.115 | 0.021 | 0.173 |
| | | 4 | 0.720 | 0.757 | 0.822 | 0.955 | 0.712 |
| | MF5 | 1 | 0.885 | 0.921 | 0.903 | 0.967 | 0.971 |
| | | 2 | 0.119 | 0.370 | 0.206 | 0.546 | 0.877 |
| | | 3 | 0.177 | 0.134 | 0.155 | 0.086 | 0.073 |
| | | 4 | 0.748 | 0.784 | 0.800 | 0.878 | 0.880 |
| | MF35 | 1 | 0.847 | 0.860 | 0.879 | 0.980 | 0.896 |
| | | 2 | 0.105 | 0.294 | 0.218 | 0.706 | 0.262 |
| | | 3 | 0.216 | 0.197 | 0.182 | 0.066 | 0.180 |
| | | 4 | 0.686 | 0.646 | 0.734 | 0.925 | 0.729 |
| MFF 44-D | MF3 | 1 | 1.000 | 1.000 | 0.995 | 0.992 | 1.000 |
| | | 2 | 1.000 | 1.000 | 0.859 | 0.754 | 1.000 |
| | | 3 | 0.000 | 0.000 | 0.013 | 0.017 | 0.000 |
| | | 4 | 0.995 | 0.997 | 0.972 | 0.956 | 0.998 |
| | MF5 | 1 | 1.000 | 1.000 | 0.995 | 0.993 | 1.000 |
| | | 2 | 1.000 | 1.000 | 0.893 | 0.802 | 1.000 |
| | | 3 | 0.000 | 0.000 | 0.014 | 0.017 | 0.000 |
| | | 4 | 0.995 | 0.996 | 0.967 | 0.950 | 0.997 |
| | MF35 | 1 | 1.000 | 1.000 | 0.995 | 0.994 | 1.000 |
| | | 2 | 0.999 | 1.000 | 0.871 | 0.858 | 1.000 |
| | | 3 | 0.001 | 0.000 | 0.015 | 0.015 | 0.000 |
| | | 4 | 0.993 | 0.997 | 0.968 | 0.964 | 0.997 |

(c) Test images: group C

| MFD method | MFw | No. | MF5+DN0.9 | MF5+DN0.6 | MF5+UP1.1 | MF5+UP1.5 | MF5+JPG90 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed 19-D | MF3 | 1 | 1.000 | 0.998 | 0.985 | 0.996 | 0.991 |
| | | 2 | 1.000 | 0.923 | 0.938 | 0.975 | 0.965 |
| | | 3 | 0.000 | 0.019 | 0.024 | 0.010 | 0.013 |
| | | 4 | 1.000 | 0.979 | 0.980 | 0.984 | 0.992 |
| | MF5 | 1 | 0.996 | 0.994 | 0.997 | 0.994 | 0.999 |
| | | 2 | 0.973 | 0.952 | 0.984 | 0.956 | 0.991 |
| | | 3 | 0.006 | 0.010 | 0.004 | 0.014 | 0.003 |
| | | 4 | 0.996 | 0.986 | 0.994 | 0.981 | 0.998 |
| | MF35 | 1 | 0.999 | 1.000 | 0.997 | 0.995 | 0.995 |
| | | 2 | 0.980 | 1.000 | 0.989 | 0.952 | 0.956 |
| | | 3 | 0.001 | 0.000 | 0.005 | 0.010 | 0.010 |
| | | 4 | 0.999 | 1.000 | 0.989 | 0.987 | 0.991 |
| MFR AR 10-D | MF3 | 1 | 0.967 | 0.812 | 0.973 | 0.997 | 0.964 |
| | | 2 | 0.546 | 0.131 | 0.557 | 0.955 | 0.487 |
| | | 3 | 0.082 | 0.235 | 0.069 | 0.017 | 0.084 |
| | | 4 | 0.850 | 0.575 | 0.853 | 0.968 | 0.831 |
| | MF5 | 1 | 0.947 | 0.922 | 0.923 | 0.993 | 0.962 |
| | | 2 | 0.486 | 0.290 | 0.340 | 0.840 | 0.546 |
| | | 3 | 0.111 | 0.130 | 0.140 | 0.033 | 0.094 |
| | | 4 | 0.870 | 0.792 | 0.793 | 0.929 | 0.844 |
| | MF35 | 1 | 0.933 | 0.812 | 0.930 | 0.993 | 0.942 |
| | | 2 | 0.408 | 0.147 | 0.327 | 0.843 | 0.386 |
| | | 3 | 0.134 | 0.248 | 0.133 | 0.029 | 0.119 |
| | | 4 | 0.818 | 0.612 | 0.811 | 0.916 | 0.790 |
| MFF 44-D | MF3 | 1 | 0.889 | 0.980 | 0.757 | 0.788 | 0.976 |
| | | 2 | 0.076 | 0.625 | 0.054 | 0.062 | 0.558 |
| | | 3 | 0.117 | 0.026 | 0.241 | 0.221 | 0.034 |
| | | 4 | 0.735 | 0.931 | 0.535 | 0.470 | 0.907 |
| | MF5 | 1 | 0.992 | 0.999 | 0.974 | 0.986 | 1.000 |
| | | 2 | 0.851 | 0.999 | 0.723 | 0.828 | 0.997 |
| | | 3 | 0.028 | 0.002 | 0.053 | 0.039 | 0.003 |
| | | 4 | 0.932 | 0.997 | 0.897 | 0.884 | 0.992 |
| | MF35 | 1 | 0.995 | 0.999 | 0.976 | 0.983 | 1.000 |
| | | 2 | 0.896 | 0.999 | 0.737 | 0.840 | 0.998 |
| | | 3 | 0.019 | 0.002 | 0.052 | 0.043 | 0.002 |
| | | 4 | 0.940 | 0.995 | 0.884 | 0.894 | 0.993 |

Moreover, the ROC curves of the proposed method for the many types of test images are relatively close to each other, which indicates the more consistent classification performance of the proposed method. Overall, the performance is excellent for unaltered (original), JPEG (QF = 90) compressed, downscaled (0.6), and upscaled (1.5 and 2.0) images on the MF3, MF5, and MF35 detections. In the proposed variation-based MFD method, despite the short 19-D feature vector, the AUC results approach 1. Thus, it is confirmed that the grade evaluation of the proposed algorithm is rated as "Excellent (A)." [The classified rate of the experimental AUC results is interpreted using the traditional academic point system.11] In this evaluation, the general interpretation of the AUC is used for each training–testing pair.

Subsequently, the testing of MFD on low-resolution images is examined. A small image window size is required for detecting forgeries in a median-filtered image or in a JPEG pre- and/or postcompressed one. An example of a cut-and-paste forgery image is shown in Fig. 7. An unaltered image (window) is cut, and a median-filtered image (house) is pasted onto the cut area (white region) of the unaltered image (both unaltered images come from the BOWS2 database), forming a composite image, which is then JPEG postcompressed with a quality factor of 90, rotated counterclockwise by 5 deg, and degraded with salt-and-pepper noise of 0.05 density. Figures 8–10 show the blocks detected as MF by the MFR AR, the MFF, and the proposed method, respectively. The detected blocks that are median filtered (the true positives) are marked in red, and the remaining blocks (the false alarms) are marked in blue. (The color version of the paper is available online.) In Figs. 8–10, the left column (a, c, e, and g) is examined with a 32×32 block size, and the right column (b, d, f, and h) with a 64×64 block size. The first row (a and b) shows the detection results for MF3 versus unaltered images, the second row (c and d) for MF3 + JPG90 versus JPG90 images, the third row (e and f) for MF3 versus rotated unaltered images, and the last row (g and h) for MF3 versus noisy unaltered images.

Fig. 7

Cut and paste forgery image example.


Fig. 8

Local MFD results using the MFR AR method.


Fig. 9

Local MFD results using the MFF method.


Fig. 10

Local MFD results using the proposed method.


In Fig. 8, the MFR AR method does not perform well at a 32×32 block size, and it performs only slightly better for MF3 versus unaltered images at a 64×64 block size. In Fig. 9, the MFF method performs well for MF3 versus unaltered images and their rotated versions at both 32×32 and 64×64 block sizes; meanwhile, it does not provide good detection results under JPEG postcompression. In Fig. 10, the proposed method with its 19-D feature vector performs best for MF3 versus unaltered images, under JPEG postcompression, and for the rotated and noisy versions, at both 32×32 and 64×64 block sizes.

5.

Conclusions

This paper proposed a variation-based MFD method in which the constructed feature vector is composed of two kinds of variations, from the space and frequency domains of an image. One is computed from the gradient differences between neighboring line pairs, and the other from the FT coefficient differences.

These variations improve the experimental MFD results. To the best of our knowledge, this is the first complete solution based on the variation between neighboring line pairs of a digital image, so it can serve as additional research content for MFD. Future work should consider a performance evaluation on smaller altered image sizes. Finally, the proposed variation-based method can be applied to other forensic problems, as the previous MFD methods have been.

Acknowledgments

This work has been supported by the research grant (322386) of Chosun University, Republic of Korea, in 2015.

References

1. K. H. Rhee, "Median filtering detection using variation of neighboring line pairs for image forensic," in IEEE 5th Int. Conf. on Consumer Electronics-Berlin (ICCE-Berlin), 103–107 (2015). http://dx.doi.org/10.1109/ICCE-Berlin.2015.7391206

2. X. Kang et al., "Robust median filtering forensics using an autoregressive model," IEEE Trans. Inf. Forensics Secur. 8(9), 1456–1468 (2013). http://dx.doi.org/10.1109/TIFS.2013.2273394

3. H. Yuan, "Blind forensics of median filtering in digital images," IEEE Trans. Inf. Forensics Secur. 6(4), 1335–1345 (2011). http://dx.doi.org/10.1109/TIFS.2011.2161761

4. T. Pevný, P. Bas, and J. Fridrich, "Steganalysis by subtractive pixel adjacency matrix," IEEE Trans. Inf. Forensics Secur. 5(2), 215–224 (2010). http://dx.doi.org/10.1109/TIFS.2010.2045842

5. Y. Zhang et al., "Revealing the traces of median filtering using high-order local ternary patterns," IEEE Signal Process. Lett. 21(3), 275–279 (2014). http://dx.doi.org/10.1109/LSP.2013.2295858

6. S. M. Kay, Modern Spectral Estimation: Theory and Application, Prentice-Hall, Englewood Cliffs, New Jersey (1998).

7. C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/ (accessed April 2016).

8. "Break our watermarking system," (2010), http://bows2.ec-lille.fr/ (accessed April 2016).

9. G. Schaefer and M. Stich, "UCID—an uncompressed color image database," Proc. SPIE 5307, 472–480 (2004). http://dx.doi.org/10.1117/12.525375

10. Q. Liu and Z. Chen, "Seam-carving image database," (2014), http://www.shsu.edu/~qxl005/New/Downloads/index.html (accessed April 2016).

11. T. G. Tape, "The area under an ROC curve," http://gim.unmc.edu/dxtests/roc3.htm (accessed April 2016).

Biography

Kang Hyeon Rhee is with the Department of Electronics Engineering, Chosun University, Gwangju, Republic of Korea. His current research interests include embedded system design related to multimedia fingerprinting/forensics. He is on the Committee of the LSI Design Contest in Okinawa, Japan. He is also the recipient of awards such as the Haedong Prize from the Haedong Science and Culture Juridical Foundation, Korea, which he received in 2002 and 2009.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Kang Hyeon Rhee "Median filtering detection using variation of neighboring line pairs for image forensics," Journal of Electronic Imaging 25(5), 053039 (27 October 2016). https://doi.org/10.1117/1.JEI.25.5.053039
KEYWORDS
Image forensics

Digital filtering

Image filtering

Autoregressive models

Fourier transforms

Feature extraction

Sensors
