A deep learning approach to detection of splicing and copy-move forgeries in images

2023-10-31 19:40

This post walks through the paper A deep learning approach to detection of splicing and copy-move forgeries in images; hopefully it is a useful reference for developers working on image forgery detection.

https://github.com/kPsarakis/Image-Forgery-Detection-CNN

The code is discussed here in the context of the final project of TU Delft's Deep Learning course. Overall it is a plain classification framework, but it uses an SRM noise extractor; this was later picked up by RGB-N as the noise-stream branch of its two-stream Faster R-CNN, and SRM remains one of the most common hand-crafted operators in tampering detection. I have run the code in this repo and it does work, and its patch-based data preprocessing was also adopted by later work. Honestly, though, such a simple network design does not beat an off-the-shelf classification backbone with no special design at all, e.g. Res2Net. Tampering detection work mostly uses natural-scene datasets such as CASIA v1/v2 (released by the Chinese Academy of Sciences), BHSig60, COVERAGE and NC16. The manipulations in these datasets include resizing, compression and similar operations, which are quite different from the Photoshop-style edits found in document datasets; as we often say, natural-scene and document tampering differ a lot, with document forgery being dominated by Photoshop-type editing.

The figure above shows the network design. The core is the first group of blue CNN blocks, which incorporate the SRM filters. The SRM filters originally come from "Rich models for steganalysis of digital images"; SRM stands for Steganalysis Rich Model. Tampering operations, splicing in particular, introduce very sharp edges, and a high-pass filter brings these out very clearly. In the CNN, the weights of the first convolutional layer are initialized with the 30 basic high-pass filters that SRM uses to compute residual maps. These basic filters correspond to 7 SRM residual classes, distributed as follows:

Count   Filter class
8       1st order
4       2nd order
8       3rd order
1       SQUARE 3x3
4       EDGE 3x3
1       SQUARE 5x5
4       EDGE 5x5
from typing import Dict

import numpy as np
from torch import Tensor, stack


def get_filters():
    """
    Function that returns the required high pass SRM filters for the first convolutional layer of our implementation
    :return: A pytorch Tensor containing the 30x3x5x5 filter tensor with type
             [number_of_filters, input_channels, height, width]
    """
    filters: Dict[str, Tensor] = {}

    # 1st Order
    filters["1O1"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, -1, 1, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]))
    filters["1O2"] = Tensor(np.rot90(filters["1O1"]).copy())
    filters["1O3"] = Tensor(np.rot90(filters["1O2"]).copy())
    filters["1O4"] = Tensor(np.rot90(filters["1O3"]).copy())
    filters["1O5"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, 0, 0, 1, 0], [0, 0, -1, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]))
    filters["1O6"] = Tensor(np.rot90(filters["1O5"]).copy())
    filters["1O7"] = Tensor(np.rot90(filters["1O6"]).copy())
    filters["1O8"] = Tensor(np.rot90(filters["1O7"]).copy())

    # 2nd Order
    filters["2O1"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 1, -2, 1, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]))
    filters["2O2"] = Tensor(np.rot90(filters["2O1"]).copy())
    filters["2O3"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, -2, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 0]]))
    filters["2O4"] = Tensor(np.rot90(filters["2O3"]).copy())

    # 3rd Order
    filters["3O1"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, 0, 1, 0, 0], [0, 1, -3, 1, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]))
    filters["3O2"] = Tensor(np.rot90(filters["3O1"]).copy())
    filters["3O3"] = Tensor(np.rot90(filters["3O2"]).copy())
    filters["3O4"] = Tensor(np.rot90(filters["3O3"]).copy())
    filters["3O5"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, 1, 0, 1, 0], [0, 0, -3, 0, 0], [0, 1, 0, 0, 0], [0, 0, 0, 0, 0]]))
    filters["3O6"] = Tensor(np.rot90(filters["3O5"]).copy())
    filters["3O7"] = Tensor(np.rot90(filters["3O6"]).copy())
    filters["3O8"] = Tensor(np.rot90(filters["3O7"]).copy())

    # 3x3 SQUARE
    filters["3x3S"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, -1, 2, -1, 0], [0, 2, -4, 2, 0], [0, -1, 2, -1, 0], [0, 0, 0, 0, 0]]))

    # 3x3 EDGE
    filters["3x3E1"] = Tensor(np.array([[0, 0, 0, 0, 0], [0, -1, 2, -1, 0], [0, 2, -4, 2, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]))
    filters["3x3E2"] = Tensor(np.rot90(filters["3x3E1"]).copy())
    filters["3x3E3"] = Tensor(np.rot90(filters["3x3E2"]).copy())
    filters["3x3E4"] = Tensor(np.rot90(filters["3x3E3"]).copy())

    # 5x5 EDGE
    filters["5x5E1"] = Tensor(np.array([[-1, 2, -2, 2, -1], [2, -6, 8, -6, 2], [-2, 8, -12, 8, -2], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]))
    filters["5x5E2"] = Tensor(np.rot90(filters["5x5E1"]).copy())
    filters["5x5E3"] = Tensor(np.rot90(filters["5x5E2"]).copy())
    filters["5x5E4"] = Tensor(np.rot90(filters["5x5E3"]).copy())

    # 5x5 SQUARE
    filters["5x5S"] = Tensor(np.array([[-1, 2, -2, 2, -1], [2, -6, 8, -6, 2], [-2, 8, -12, 8, -2], [2, -6, 8, -6, 2], [-1, 2, -2, 2, -1]]))

    return vectorize_filters(filters)


def vectorize_filters(filters: dict):
    """
    Function that takes as input the 30 5x5 SRM high pass filters and creates the 30x3x5x5 tensor with the
    following permutations W_j = [W_{3k-2}, W_{3k-1}, W_{3k}] where k = ((j - 1) mod 10) + 1 and j = 1, ..., 30.
    :arg filters: The 30 SRM high pass filters
    :return: Returns the 30x3x5x5 filter tensor of the type [number_of_filters, input_channels, height, width]
    """
    tensor_list = []
    w = list(filters.values())
    for i in range(1, 31):
        tmp = []
        k = ((i - 1) % 10) + 1
        tmp.append(w[3 * k - 3])
        tmp.append(w[3 * k - 2])
        tmp.append(w[3 * k - 1])
        tensor_list.append(stack(tmp))
    return stack(tensor_list)


if __name__ == '__main__':
    print(get_filters().shape)
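As a quick sanity check, the returned 30x3x5x5 bank can be applied directly to an RGB image with F.conv2d to look at the noise residuals it produces. This is a minimal sketch I added for illustration (it assumes get_filters() from the snippet above is defined in the same module); it is not part of the original repo:

import torch
import torch.nn.functional as F


def srm_residuals(image: torch.Tensor) -> torch.Tensor:
    """Apply the 30 SRM high-pass filters to a batch of RGB images.

    :param image: tensor of shape [N, 3, H, W], values roughly in [0, 255]
    :return: residual maps of shape [N, 30, H, W]
    """
    kernels = get_filters()                     # [30, 3, 5, 5], from the snippet above
    return F.conv2d(image, kernels, padding=2)  # padding=2 keeps the spatial size


if __name__ == '__main__':
    dummy = torch.rand(1, 3, 128, 128) * 255    # stand-in for a 128x128 patch
    print(srm_residuals(dummy).shape)           # torch.Size([1, 30, 128, 128])

On real spliced images the residual channels make the pasted-region boundaries stand out much more clearly than the RGB input does, which is exactly why the first convolution is initialized this way.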

RGB-N uses only three of these kernels.
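For comparison, the noise stream in RGB-N keeps its SRM weights fixed rather than merely using them for initialization. The sketch below only illustrates that idea, using three kernel shapes taken from the table above (one 2nd order, the 3x3 SQUARE and the 5x5 SQUARE) with normalizing divisors; the exact kernels, scaling and channel handling in RGB-N should be checked against its paper and code:

import numpy as np
import torch
import torch.nn as nn


class FixedSRMConv(nn.Module):
    """A frozen three-kernel SRM layer in the spirit of RGB-N's noise stream (illustrative only)."""

    def __init__(self):
        super().__init__()
        second_order = np.array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 1, -2, 1, 0],
                                 [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]) / 2.0
        square_3x3 = np.array([[0, 0, 0, 0, 0], [0, -1, 2, -1, 0], [0, 2, -4, 2, 0],
                               [0, -1, 2, -1, 0], [0, 0, 0, 0, 0]]) / 4.0
        square_5x5 = np.array([[-1, 2, -2, 2, -1], [2, -6, 8, -6, 2], [-2, 8, -12, 8, -2],
                               [2, -6, 8, -6, 2], [-1, 2, -2, 2, -1]]) / 12.0
        kernels = np.stack([second_order, square_3x3, square_5x5])        # [3, 5, 5]
        weight = torch.tensor(kernels, dtype=torch.float32).unsqueeze(1)  # [3, 1, 5, 5]
        weight = weight.repeat(1, 3, 1, 1)  # same kernel on every RGB channel (a simplification)
        self.conv = nn.Conv2d(3, 3, kernel_size=5, padding=2, bias=False)
        self.conv.weight = nn.Parameter(weight, requires_grad=False)      # frozen, not trained

    def forward(self, x):
        return self.conv(x)


if __name__ == '__main__':
    noise = FixedSRMConv()(torch.rand(1, 3, 128, 128))
    print(noise.shape)  # torch.Size([1, 3, 128, 128])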

import torch.nn as nn
import torch.nn.functional as f

# get_filters() is the SRM filter constructor from the previous snippet


class CNN(nn.Module):
    """The convolutional neural network (CNN) class"""

    def __init__(self):
        """Initialization of all the layers in the network."""
        super(CNN, self).__init__()
        self.conv0 = nn.Conv2d(3, 3, kernel_size=5, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv0.weight)
        # The 30 filters of this layer are initialized with the SRM high-pass kernels
        self.conv1 = nn.Conv2d(3, 30, kernel_size=5, stride=2, padding=0)
        self.conv1.weight = nn.Parameter(get_filters())
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.conv2 = nn.Conv2d(30, 16, kernel_size=3, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv2.weight)
        self.conv3 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv3.weight)
        self.conv4 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv4.weight)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.conv5 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv5.weight)
        self.conv6 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv6.weight)
        self.conv7 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv7.weight)
        self.conv8 = nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=0)
        nn.init.xavier_uniform_(self.conv8.weight)
        self.fc = nn.Linear(16 * 17 * 17, 2)
        self.drop1 = nn.Dropout(p=0.5)  # used only for the NC dataset

    def forward(self, x):
        """
        The forward step of the network that consumes an image patch and either uses a fully connected layer in the
        training phase with a softmax or just returns the feature map after the final convolutional layer.
        :returns: Either the output of the softmax during training or the 400-D feature representation at testing
        """
        x = f.relu(self.conv0(x))
        x = f.relu(self.conv1(x))
        lrn = nn.LocalResponseNorm(3)
        x = lrn(x)
        x = self.pool1(x)
        x = f.relu(self.conv2(x))
        x = f.relu(self.conv3(x))
        x = f.relu(self.conv4(x))
        x = f.relu(self.conv5(x))
        x = lrn(x)
        x = self.pool2(x)
        x = f.relu(self.conv6(x))
        x = f.relu(self.conv7(x))
        x = f.relu(self.conv8(x))
        x = x.view(-1, 16 * 17 * 17)
        # In the training phase we also need the fully connected layer with softmax
        if self.training:
            # x = self.drop1(x)  # used only for the NC dataset
            x = f.relu(self.fc(x))
            x = f.softmax(x, dim=1)
        return x


if __name__ == '__main__':
    cnn1 = CNN()
    print(cnn1)

The network architecture itself is quite ordinary; there is not much to say about it.
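One thing worth a quick shape check (my own snippet, not from the repo): with the layer sizes as transcribed above, the flattened feature is 16 * 17 * 17 = 4624, which only fits an input of roughly 224x224; for the 128x128 patches discussed below, the final feature map would instead be 16 * 5 * 5 = 400, matching the "400-D" mentioned in the forward docstring, and the view/fc sizes would need to change accordingly.

import torch

if __name__ == '__main__':
    model = CNN()                       # the class defined above
    patch = torch.rand(4, 3, 224, 224)  # 224x224 matches the 16*17*17 flatten size
    model.train()
    print(model(patch).shape)           # torch.Size([4, 2])  softmax over real/fake
    model.eval()
    with torch.no_grad():
        print(model(patch).shape)       # torch.Size([4, 4624])  feature representation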

One more point concerns the input. CASIA contains authentic and tampered samples; the tampered samples come with masks while the authentic ones do not, and the two sets are not strictly paired. When working on document tampering detection I often wish I had paired tampered/authentic samples, because then a Siamese network could be used, along the lines of handwriting verification. In CASIA v2, however, many tampered samples do have an authentic counterpart, and the patch extraction exploits this: the tampered sample's mask is used to crop 128x128 patches from both the tampered image and the corresponding authentic image, with the crops from the tampered image serving as fake samples and the crops from the authentic image as genuine samples. If you have paired data, this is a very good preprocessing strategy; when building our own document-tampering data we also borrowed this CASIA v2 recipe.
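Below is a simplified sketch of that mask-guided cropping, written for illustration rather than taken from the repo's patch extraction code. It assumes the mask is a binary array with nonzero values over the tampered region, that both images are at least 128 pixels on each side, and it simply takes one 128x128 window centred on the tampered area from both the tampered image and its paired authentic image:

import numpy as np


def paired_patches(tampered: np.ndarray, authentic: np.ndarray, mask: np.ndarray, size: int = 128):
    """Crop one fake/genuine patch pair around the tampered region.

    :param tampered:  tampered image as an H x W x 3 array
    :param authentic: the paired authentic image, same size
    :param mask:      H x W array, nonzero where the image was tampered
    :param size:      patch side length (128, as in the CASIA v2 preprocessing)
    :return: (fake_patch, genuine_patch), both size x size x 3
    """
    ys, xs = np.nonzero(mask)                    # assumes the mask is nonempty
    cy, cx = int(ys.mean()), int(xs.mean())      # centre of the tampered region
    h, w = mask.shape[:2]
    top = min(max(cy - size // 2, 0), h - size)  # clamp the window inside the image
    left = min(max(cx - size // 2, 0), w - size)
    fake = tampered[top:top + size, left:left + size]
    genuine = authentic[top:top + size, left:left + size]
    return fake, genuine

In practice the repo samples several patches per image rather than a single centred one, but the idea is the same: the mask of the fake image decides where both members of the pair are cropped.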

That concludes this introduction to A deep learning approach to detection of splicing and copy-move forgeries in images; I hope it is helpful.



