This paper presents an improved approach for detecting copy-move forgery based on singular value decomposition (SVD). It is a block-based method in which the image is scanned from left to right and top to bottom by a sliding window of fixed size. At each step, the SVD is computed, and the maximum value of the diagonal matrix (the norm) is selected; this value represents the scaling factor of the SVD and remains fixed for a given set of matrix elements even when the matrix is rotated or scaled. Similar norms are then grouped, and each leading group is separated into several subgroups according to 8-adjacency, so that the elements of each subgroup are neighbors and the subgroups of a leading group are separated from each other by a specific distance. A weight is then assigned to each subgroup to classify the image as forged or not. Finally, the F1 score of the proposed system is measured, reaching 99.1%. The approach is robust against rotation, scaling, noisy images, and illumination variation. It is compared with other similar methods and shows very promising results.

Nowadays, digital images have become part of daily life due to the rapid growth of technology and the development of the internet. This has made it easy to access images and multimedia, process and transfer them, quickly alter image details, and produce tampered images [

Editing and changing digital images have become easier for anyone with the advancement of various low-cost hardware and software tools such as Adobe Photoshop, CorelDRAW, etc. As shown in

With the extensive use and transmission of digital images, authentication becomes very important to increase the trust in the images [

Three types of passive image forgery can be made for digital images.

Image retouching focuses on reducing or enhancing some features of a digital image, which makes it the least harmful type of image forgery.

Splicing is implemented by combining two or more images, which hides some critical information from the original image [

Copy-move forgery (CMF) is the most popular type. It consists of copying part of an image and pasting it into another region of the same image. Image editing operations such as scaling, rotation, and noise addition may also be applied during this process. Unfortunately, this type of image forgery is not easy to detect. In general, passive forgery detection can be categorized into block-based and keypoint-based methods [

In this paper, we propose a robust algorithm to detect copy-move forgery (also known as a region duplication attack).

The primary approach to detecting the same regions is block-based [

We will use a CASIA [

The rest of the paper is organized as follows: Section two presents the related works, while Section three focuses on the definition of singular value decomposition. Section four introduces the proposed methodology. Section five presents the results and discussion. Finally, Section six summarizes the conclusions.

(Amerini et al., 2013) suggested a method for detecting image forgery based on SIFT. Features were extracted from images of the MICC-F220 and MICC-F2000 datasets. This method deals with rotation and scaling of the forged object. The False Positive Rate (FPR) and True Positive Rate (TPR) achieved are 8% and 100%, respectively [

(Uliyan et al., 2016) introduced a forgery detection method based on combined features of Hessian points and a centrosymmetric local binary pattern (CSLBP). The advantage of this method is invariance to scale, translation, and illumination; however, it is not invariant when the image is degraded by blur. The false-positive and true-positive rates are 8% and 92%, respectively [

(Alkawaz et al., 2018) proposed an image forgery detection approach based on the discrete cosine transform (DCT). This method works on grayscale images. The input image is divided into several overlapping blocks, and the 2D DCT coefficients are computed for each block. A feature vector is generated from every block using zig-zag scanning, and the vectors are sorted lexicographically. Finally, the Euclidean distance is used to locate duplicate blocks [

(Huang et al., 2019) suggested a method to detect copy-move forgery based on superpixel segmentation and the Helmert transformation. The first step of this method uses the SIFT algorithm to extract key points and their descriptors. Matching pairs are found by measuring the similarity of key points based on the descriptors. These pairs are grouped by spatial distance using the Helmert transformation to obtain coarse forgery regions, and isolated regions are then removed. The advantage of this method is its robustness to rotation, scaling, and compression. The best recall and precision were 80% and 82%, respectively [

(Ali et al., 2022) proposed a system based on deep learning to detect forged images in the context of double image compression. The model was trained on the difference between the original and recompressed images, and it reached a validation accuracy of 92.23% [

In linear algebra, SVD is a fundamental topic studied by many famous mathematicians. It is a factorization of a real matrix and is used to extract geometric and algebraic features of an image. Because the singular values are stable under scaling and rotation, SVD is a robust and trustworthy orthogonal matrix decomposition method, and it has become popular in many practical applications in signal processing, image processing, and statistics [

Any real m × n matrix A can be factored as A = UΣV^{T}, where U is an m × m orthogonal matrix, Σ is an m × n diagonal matrix whose non-negative diagonal entries are the singular values of A, and V^{T} is an n × n orthogonal matrix.

We can visualize this decomposition in

This proposal suggests a robust algorithm for detecting copy-move forged images based on the SVD transformation. The primary steps for solving the copy-move forgery detection problem are listed in Algorithm 1.

The input image for this system is a color image. First, preprocessing of the input image is performed: the RGB image is converted to grayscale, denoised using a deep convolutional neural network (CNN), and resized. Most of the images in the dataset have a size of 256 × 384, so every image larger or smaller than this is normalized to 256 × 384.
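The grayscale conversion and resizing steps can be sketched as follows. This is a minimal numpy-only sketch: it uses the standard luminance weights and a nearest-neighbor resize as a stand-in, and omits the CNN denoising stage described above.

```python
import numpy as np

def preprocess(rgb, target=(256, 384)):
    """Grayscale conversion and nearest-neighbor resize (a stand-in for the
    paper's pipeline; the CNN denoising step is omitted in this sketch)."""
    # Standard luminance weights for RGB -> gray.
    gray = rgb[..., 0] * 0.2989 + rgb[..., 1] * 0.5870 + rgb[..., 2] * 0.1140
    h, w = gray.shape
    # Nearest-neighbor index maps into the source image.
    rows = (np.arange(target[0]) * h / target[0]).astype(int)
    cols = (np.arange(target[1]) * w / target[1]).astype(int)
    return gray[np.ix_(rows, cols)]

img = np.random.rand(300, 400, 3)   # synthetic RGB image for illustration
out = preprocess(img)
print(out.shape)                    # (256, 384)
```

In practice any interpolating resize (bilinear, bicubic) would serve; nearest-neighbor keeps the sketch dependency-free.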

A sliding window of size 11 × 11 is scanned over the entire image from left to right and top to bottom. At each step, the SVD transformation is calculated, producing three matrices; we are concerned with the diagonal matrix. The largest value of the diagonal matrix at each step, called the norm value, is selected and stored together with its location.
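The norm extraction described above can be sketched as follows, using `numpy.linalg.svd` (singular values are returned in descending order, so the first one is the norm value). The snippet also illustrates why this descriptor tolerates rotation: rotating a block by 90 degrees permutes and transposes its entries, both orthogonal operations, so the singular values are unchanged.

```python
import numpy as np

def norm_map(gray, win=11):
    """Largest singular value of every win x win sliding block,
    scanned left-to-right and top-to-bottom with step 1."""
    h, w = gray.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            block = gray[i:i + win, j:j + win]
            # The "norm value": the maximum entry of the diagonal matrix.
            out[i, j] = np.linalg.svd(block, compute_uv=False)[0]
    return out

rng = np.random.default_rng(0)
block = rng.random((11, 11))
# The largest singular value is unchanged under a 90-degree rotation.
s1 = np.linalg.svd(block, compute_uv=False)[0]
s2 = np.linalg.svd(np.rot90(block), compute_uv=False)[0]
print(np.isclose(s1, s2))           # True
```

A production implementation would vectorize this loop, but the per-block logic is the same.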

After scanning the entire image, the norm values are sorted in ascending order and divided into groups, each containing similar norm values; groups with fewer than four pixels are ignored.

Within each group, we find the connected norms according to 8-adjacency. This divides the group into several subgroups, with no connection between a subgroup and the other subgroups of the same norm. Each subgroup with fewer than three norms is ignored. A weight is then assigned to each group according to subgroup sizes: the weight increases by one for every pair of subgroups whose sizes are equal or differ by one.
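The 8-adjacency subgrouping and the weight rule can be sketched as below. The labeling uses a plain BFS flood fill over a boolean mask of the group's pixel locations; the weight computation follows one reading of the rule above (one point per pair of surviving subgroups whose sizes are equal or differ by one), which is an interpretation, not the authors' code.

```python
import numpy as np

def subgroups_and_weight(mask, min_size=3):
    """Label 8-connected components of a boolean mask, drop those smaller
    than min_size, and compute the group weight as described in the text."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, count = [(i, j)], 0
                seen[i, j] = True
                while stack:                      # BFS/DFS flood fill
                    y, x = stack.pop()
                    count += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):     # 8-neighborhood
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                stack.append((ny, nx))
                sizes.append(count)
    sizes = [s for s in sizes if s >= min_size]
    # +1 for every pair of subgroups with equal sizes or sizes differing by 1.
    weight = sum(1 for a in range(len(sizes)) for b in range(a + 1, len(sizes))
                 if abs(sizes[a] - sizes[b]) <= 1)
    return sizes, weight

m = np.zeros((8, 8), dtype=bool)
m[0:2, 0:2] = True          # subgroup of size 4
m[5:7, 5:7] = True          # second subgroup of size 4 -> weight +1
print(subgroups_and_weight(m))   # ([4, 4], 1)
```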

Finally, two arrays are created. The first contains the norm values and their corresponding weights, while the second contains the norm values and the corresponding number of subgroups remaining in each group. The two arrays are sorted according to the weight and the subgroup count, respectively. We then select the six norms with the highest weights from the first (weight) array and the ten norms with the highest subgroup counts from the second (subgroup) array.

The image is classified as forged if the following condition is satisfied:

“If the subgroup number in the selected norms is more than seven, and the selected weights are greater than fifteen or less than nine.”

Otherwise, it is classified as a non-forged image.
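The decision rule can be sketched as a small predicate. Note that this is one interpretation of the quoted condition, assuming it must hold for at least one selected norm and at least one selected weight; the paper's exact quantification is not spelled out.

```python
def is_forged(subgroup_counts, weights):
    """Decision rule from the text, read as: some selected norm has a
    subgroup count above 7, and some selected weight falls outside [9, 15].
    This quantification is an assumption, not the authors' exact rule."""
    return (any(c > 7 for c in subgroup_counts)
            and any(w > 15 or w < 9 for w in weights))

print(is_forged([8, 3], [16, 10]))   # True:  count 8 > 7 and weight 16 > 15
print(is_forged([5, 6], [12, 10]))   # False: no subgroup count exceeds 7
```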

Several experiments are conducted in this section to assess the proposed scheme. MATLAB 2018b is used to carry out the experiments. First, the approach was tested on 500 standard color images from the CASIA and CoMoFoD datasets. The original authentic images from the two databases include animals, plants, structures, and some types of meals,

Precision, recall, and the F1 score are defined as

Precision = T_{P} / (T_{P} + F_{P}), Recall = T_{P} / (T_{P} + F_{N}), F1 = 2 × (Precision × Recall) / (Precision + Recall),

where T_{P} is the true positives, F_{P} is the false positives, and F_{N} is the false negatives.
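These metrics are straightforward to compute from the confusion counts. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not the paper's results.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard precision/recall/F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only.
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
print(round(p, 2), round(r, 2), round(f, 2))   # 0.9 0.9 0.9
```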

The first test focuses on evaluating the performance of the suggested system when using different sliding window sizes. Forgery and non-forgery images are fed from the datasets into the system one by one, and the system’s performance is assessed. The best performance is achieved when the block size of the sliding window equals 11 × 11, as shown in

| Block size | Precision | Recall | F1 |
|---|---|---|---|
| 5 × 5 | 78.57 | 94.29 | 85.71 |
| 7 × 7 | 74.16 | 94.29 | 83.02 |
| 9 × 9 | 85.90 | 95.71 | 90.54 |
| 11 × 11 | 99.1 | 99.1 | 99.1 |
| 13 × 13 | 80.00 | 91.43 | 85.33 |
| 15 × 15 | 85.71 | 94.29 | 89.80 |
| 17 × 17 | 80.00 | 85.71 | 82.76 |

The input image size has a strong effect on the performance of the proposed algorithm, so we test how the image size affects the results and which image size yields the best accuracy.

The current proposal is also tested when the duplicated object is rotated at different angles ranging from 20 degrees to 180 degrees. The testing results are tabulated in

| Angle (degrees) | Precision | Recall | F1 score |
|---|---|---|---|
| 20 | 100 | 100 | 100 |
| 40 | 100 | 100 | 100 |
| 60 | 100 | 100 | 100 |
| 80 | 100 | 100 | 100 |
| 100 | 100 | 100 | 100 |
| 120 | 100 | 100 | 100 |
| 140 | 100 | 100 | 100 |
| 160 | 100 | 100 | 100 |
| 180 | 100 | 100 | 100 |

| References | Method | F1-score |
|---|---|---|
| Kafali et al. (2021) [ | ResNet/VBI | 51.6 |
| Zhu et al. (2020) [ | AR-Net/ASPP | 45.52 |
| Islam et al. (2020) [ | GAN | 96.48 |
| Zhong et al. (2019) [ | Dense-InceptionNet | 78.18 |
| Ashraf et al. (2020) [ | DWT | 97.52 |
| Ahmed et al. (2020) [ | SVD/KS | 95.9 |
| Rathore et al. (2021) [ | SVD/BWT | 92.24 |
| Cozzolino et al. (2014) [ | DWT | 94.7 |
| Cozzolino et al. (2015) [ | dense-field | 94.26 |
| Li et al. (2015) [ | SIFT-RANSAC | 94.98 |
| Wang et al. (2018) [ | Rg2NN | 96.8 |
| Wang et al. (2017) [ | Rg2NN | 96.1 |
| Meena et al. (2020) [ | FMT-SIFT | 96.97 |
| Singh et al. (2020) [ | SVD/DCT | 86.81 |
Abbreviations: Volterra-Based Inception (VBI), Atrous Spatial Pyramid Pooling (ASPP), Generative Adversarial Network (GAN), Discrete Wavelet Transform (DWT), Kolmogorov-Smirnov (KS), Biorthogonal Wavelet Transform (BWT), Fourier-Mellin Transform (FMT), Discrete Cosine Transform (DCT).

The performance of the proposed method was also tested when the duplicated object is scaled. A sample of images with scaled forged objects is shown in

For further analysis of the robustness of the proposed algorithm, we conducted various experiments on forgery images contaminated with noise. Gaussian noise was added randomly to the tested images, with zero mean and standard deviations of 0.2, 0.4, 0.6, and 0.8. In this test, 300 images were used. The F1-score accuracies are shown in
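The noisy test images can be generated as in the following sketch. The image here is synthetic; the standard deviations match those used in the experiment, and values are clipped back to the valid intensity range, an implementation detail the paper does not specify.

```python
import numpy as np

rng = np.random.default_rng(1)
gray = rng.random((256, 384))       # synthetic grayscale image in [0, 1]

# Zero-mean Gaussian noise at the standard deviations used in the test.
for sigma in (0.2, 0.4, 0.6, 0.8):
    noisy = gray + rng.normal(0.0, sigma, gray.shape)
    # Clip back to the valid intensity range (an assumed detail).
    noisy = np.clip(noisy, 0.0, 1.0)
    print(sigma, noisy.shape)
```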

The result of detecting forgery images contaminated with noise is compared with other works, as shown in

The proposed method was also tested under various illumination conditions, and we found that the F1-score for detecting forged images was up to 99.1%. A sample of the images used in this test is shown in

The elapsed time of the CPU for the detection of the forgery image was 18.5 s.

Finally, the suggested method results are compared with several similar methods, as illustrated in

This paper proposed a new enhancement of the SVD for digital image forgery detection. First, we used CASIA and CoMoFoD datasets to evaluate the proposed method. Then, a preprocessing technique was applied to convert the input images from RGB to grayscale, and the images were resized into 256 × 384. Next, the image was divided into 11 × 11 overlapped blocks. After that, we used the SVD (diagonal array) to extract features from each block.

The norm value was also calculated for each block to find similar norms. Finally, a weight was calculated for the subgroups of each set of similar norms, which led to the decision of whether the image contained a copy-move forgery. The suggested method gives excellent results, and we showed that it is robust against rotation, scaling, noisy images, and illumination variation. Furthermore, when compared with other methods, the introduced method shows high performance, giving better results than most copy-move detection methods. For future work, we suggest using the generalized singular value decomposition (GSVD) to detect copy-move forgery, as it has many features yet to be explored.