[Deep Learning Notes 2.3] VGG

The VGG network architecture is covered in detail in the original paper and in plenty of material online, so I won't repeat it here. This post mainly records my VGG training code and a few notes from the experiments.

vgg16_1

The code is shown below (see vgg16_1.py in reference [2] for the full version):

import numpy as np
import cv2
import tensorflow as tf
from datetime import datetime
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

datapath = '/home/***/res/MNIST_data'
mnist_data_set = input_data.read_data_sets(datapath, validation_size=0, one_hot=True)

num_classes = 10
learning_rate = 1e-4
training_epoch = 50
batch_size = 16
input_image_shape = (224, 224, 1)
conv_layer_trainable = True


def image_shape_scale(batch_xs, input_image_shape):
    # resize the flat 28x28 MNIST images to the 224x224x1 input expected by VGG16
    images = np.reshape(batch_xs, [batch_xs.shape[0], 28, 28])
    imlist = []
    [imlist.append(cv2.resize(img, input_image_shape[0:2])) for img in images]
    images = np.array(imlist)
    # cv2.imwrite('scale1.jpg', images[0]*200)
    # cv2.imwrite('scale2.jpg', images[1]*200)
    # batch_xs = np.reshape(images, [batch_xs.shape[0], 227 * 227 * input_image_channel])
    batch_xs = np.reshape(images, [batch_xs.shape[0], input_image_shape[0], input_image_shape[1], input_image_shape[2]])
    return batch_xs


class vgg16:
    def __init__(self, imgs):
        self.parameters = []
        self.imgs = imgs
        self.convlayers()
        self.fc_layers()
        self.probs = self.fc8

    def saver(self):
        return tf.train.Saver()

    def maxpool(self, name, input_data):
        out = tf.nn.max_pool(input_data, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME', name=name)
        return out

    def conv(self, name, input_data, out_channel):
        in_channel = input_data.get_shape()[-1]
        with tf.variable_scope(name):
            kernel = tf.get_variable('weights', [3, 3, in_channel, out_channel], dtype=tf.float32, trainable=conv_layer_trainable)
            biases = tf.get_variable('biases', [out_channel], dtype=tf.float32, trainable=conv_layer_trainable)
            conv_res = tf.nn.conv2d(input_data, kernel, [1, 1, 1, 1], padding='SAME')
            res = tf.nn.bias_add(conv_res, biases)
            out = tf.nn.relu(res, name=name)
        self.parameters += [kernel, biases]
        return out

    def fc(self, name, input_data, out_channel, is_output=False):
        shape = input_data.get_shape().as_list()
        if len(shape) == 4:
            size = shape[-1] * shape[-2] * shape[-3]
        else:
            size = shape[1]
        input_data_flat = tf.reshape(input_data, [-1, size])
        with tf.variable_scope(name):
            weights = tf.get_variable(name="weights", shape=[size, out_channel], dtype=tf.float32)
            biases = tf.get_variable(name="biases", shape=[out_channel], dtype=tf.float32)
            res = tf.matmul(input_data_flat, weights)
            out = tf.nn.bias_add(res, biases)
            if is_output is False:
                out = tf.nn.relu(out, name=name)
        self.parameters += [weights, biases]
        return out

    def convlayers(self):
        # zero-mean input
        # conv1
        self.conv1_1 = self.conv("conv1_1", self.imgs, 64)
        self.conv1_2 = self.conv("conv1_2", self.conv1_1, 64)
        self.pool1 = self.maxpool("pool1", self.conv1_2)
        # conv2
        self.conv2_1 = self.conv("conv2_1", self.pool1, 128)
        self.conv2_2 = self.conv("conv2_2", self.conv2_1, 128)
        self.pool2 = self.maxpool("pool2", self.conv2_2)
        # conv3
        self.conv3_1 = self.conv("conv3_1", self.pool2, 256)
        self.conv3_2 = self.conv("conv3_2", self.conv3_1, 256)
        self.conv3_3 = self.conv("conv3_3", self.conv3_2, 256)
        self.pool3 = self.maxpool("pool3", self.conv3_3)
        # conv4
        self.conv4_1 = self.conv("conv4_1", self.pool3, 512)
        self.conv4_2 = self.conv("conv4_2", self.conv4_1, 512)
        self.conv4_3 = self.conv("conv4_3", self.conv4_2, 512)
        self.pool4 = self.maxpool("pool4", self.conv4_3)
        # conv5
        self.conv5_1 = self.conv("conv5_1", self.pool4, 512)
        self.conv5_2 = self.conv("conv5_2", self.conv5_1, 512)
        self.conv5_3 = self.conv("conv5_3", self.conv5_2, 512)
        self.pool5 = self.maxpool("pool5", self.conv5_3)

    def fc_layers(self):
        self.fc6 = self.fc("fc1", self.pool5, 4096)
        # self.fc6 = tf.nn.dropout(self.fc6, 0.5)
        self.fc7 = self.fc("fc2", self.fc6, 4096)
        # self.fc7 = tf.nn.dropout(self.fc7, 0.5)
        self.fc8 = self.fc("fc3", self.fc7, num_classes, is_output=True)

    def load_weights(self, weight_file, sess):
        weights = np.load(weight_file)
        keys = sorted(weights.keys())
        for i, k in enumerate(keys):
            sess.run(self.parameters[i].assign(weights[k]))
        print("-----------all done---------------")


if __name__ == '__main__':
    X = tf.placeholder(tf.float32, [None, input_image_shape[0], input_image_shape[1], input_image_shape[2]])
    y = tf.placeholder(tf.float32, [None, num_classes])
    learning_rate_holder = tf.placeholder(tf.float32)

    vgg = vgg16(X)
    prob = vgg.probs

    with tf.name_scope("cross_ent"):
        y_output = tf.nn.softmax(prob)
        cross_entropy = -tf.reduce_sum(y * tf.log(y_output))
        loss = tf.reduce_mean(cross_entropy)

    # Train op
    with tf.name_scope("train"):
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate_holder)
        train_op = optimizer.minimize(loss)

    # Evaluation op: accuracy of the model
    with tf.name_scope("accuracy"):
        correct_pred = tf.equal(tf.argmax(y_output, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    init = tf.global_variables_initializer()
    # init = tf.glorot_normal_initializer()  # failed; also called the Xavier normal initializer, see reference [A]

    loss_buf = []
    accuracy_buf = []
    with tf.Session() as sess:
        sess.run(init)
        # Load the pretrained weights into the non-trainable layers
        # model.load_initial_weights(sess)
        total_batch = mnist_data_set.train.num_examples // batch_size
        for step in range(training_epoch):
            print("{} Epoch number: {}".format(datetime.now(), step + 1))
            tmp_loss = []
            for iteration in range(total_batch):
                batch_xs, batch_ys = mnist_data_set.train.next_batch(batch_size)
                batch_xs = image_shape_scale(batch_xs, input_image_shape)
                # manual step-wise learning-rate decay: divide by 10 every 10 epochs
                if step < 10:
                    sess.run(train_op, feed_dict={X: batch_xs, y: batch_ys, learning_rate_holder: learning_rate})
                elif step < 20:
                    sess.run(train_op, feed_dict={X: batch_xs, y: batch_ys, learning_rate_holder: learning_rate / 10.0})
                elif step < 30:
                    sess.run(train_op, feed_dict={X: batch_xs, y: batch_ys, learning_rate_holder: learning_rate / 100.0})
                else:
                    sess.run(train_op, feed_dict={X: batch_xs, y: batch_ys, learning_rate_holder: learning_rate / 1000.0})
                if iteration % 50 == 0:
                    loss_val = sess.run(loss, feed_dict={X: batch_xs, y: batch_ys})
                    train_accuracy = sess.run(accuracy, feed_dict={X: batch_xs, y: batch_ys})
                    print("step {}, iteration {}, loss {}, training accuracy {}".format(step, iteration, loss_val, train_accuracy))

            _loss_buf = []
            _accuracy_buf = []
            test_total_batch = mnist_data_set.test.num_examples // batch_size
            for iteration in range(test_total_batch):
                batch_xs, batch_ys = mnist_data_set.test.next_batch(batch_size)  # not enough GPU memory, so compute the test accuracy batch by batch
                batch_xs = image_shape_scale(batch_xs, input_image_shape)
                loss_val = sess.run(loss, feed_dict={X: batch_xs, y: batch_ys})
                test_accuracy = sess.run(accuracy, feed_dict={X: batch_xs, y: batch_ys})
                _loss_buf.append(loss_val)
                _accuracy_buf.append(test_accuracy)
            loss_val = np.array(_loss_buf).mean()
            test_accuracy = np.array(_accuracy_buf).mean()
            print("step {}, loss {}, testing accuracy {}".format(step, loss_val, test_accuracy))
            loss_buf.append(loss_val)
            accuracy_buf.append(test_accuracy)

    # plot the accuracy and loss curves
    accuracy_ndarray = np.array(accuracy_buf)
    accuracy_size = np.arange(len(accuracy_ndarray))
    plt.plot(accuracy_size, accuracy_ndarray, 'b+', label='accuracy')

    loss_ndarray = np.array(loss_buf)
    loss_size = np.arange(len(loss_ndarray))
    plt.plot(loss_size, loss_ndarray, 'r*', label='loss')
    plt.show()

    # save the loss and test accuracy to a CSV file
    with open('VGGNet16.csv', 'w') as fid:
        for loss, acc in zip(loss_buf, accuracy_buf):
            strText = str(loss) + ',' + str(acc) + '\n'
            fid.write(strText)
        fid.close()

    print('end')

# References
# [A]: TensorFlow parameter initialization, https://blog.csdn.net/m0_37167788/article/details/79073070

The training and test output is printed as follows:
step 1, loss 8.525571823120117, testing accuracy 0.9576321840286255
step 2, loss 3.323066234588623, testing accuracy 0.9828726053237915
… …
step 12, loss 2.2948389053344727, testing accuracy 0.9907852411270142
step 13, loss 2.297200918197632, testing accuracy 0.9905849099159241
step 14, loss nan, testing accuracy 0.09865785390138626
… …

Optimizing with dropout

For an explanation and summary of dropout, see reference [5].

VGG16 has three fully connected layers; here dropout is applied after the first two, i.e. fc_layers in vgg16_1.py is changed as follows (see vgg16_2.py in reference [2] for the complete code):

    def fc_layers(self):
        self.fc6 = self.fc("fc1", self.pool5, 4096)
        self.fc6 = tf.nn.dropout(self.fc6, 0.5)
        self.fc7 = self.fc("fc2", self.fc6, 4096)
        self.fc7 = tf.nn.dropout(self.fc7, 0.5)
        self.fc8 = self.fc("fc3", self.fc7, num_classes, is_output=True)
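
One thing to be aware of: because the keep probability is hard-coded to 0.5, dropout stays active even while the test accuracy is being measured. A common variant (my own sketch, not part of vgg16_2.py; keep_prob is a hypothetical name) feeds the keep probability through a placeholder so it can be set to 1.0 at test time:

    import tensorflow as tf

    # Hypothetical sketch: a keep_prob placeholder lets the same graph run with
    # dropout enabled (training) or disabled (testing).
    keep_prob = tf.placeholder(tf.float32)
    fc6 = tf.placeholder(tf.float32, [None, 4096])   # stand-in for the fc6 activation
    fc6_drop = tf.nn.dropout(fc6, keep_prob)         # acts as identity when keep_prob == 1.0

Inside fc_layers this would mean replacing the hard-coded 0.5 with keep_prob, feeding keep_prob: 0.5 in the training feed_dict and keep_prob: 1.0 in the test loop.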

With the rest of the code unchanged, the training and test output is:
step 1, loss 146.34462, testing accuracy 0.1280048
step 2, loss 7.3694496, testing accuracy 0.9635417
step 3, loss 3.3861883, testing accuracy 0.9823718
… …
step 17, loss 2.1597393, testing accuracy 0.9910857
step 18, loss 2.096352, testing accuracy 0.99138623
step 19, loss nan, testing accuracy 0.098958336
… …

Optimizing with Batch Normalization

Starting from vgg16_1.py, make the following changes (see vgg16_3.py for this part of the code):

# batch_norm is defined the same way as in reference [3]
def batch_norm(inputs, is_training, is_conv_out=True, decay=0.999):
    scale = tf.Variable(tf.ones([inputs.get_shape()[-1]]))
    beta = tf.Variable(tf.zeros([inputs.get_shape()[-1]]))
    pop_mean = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), trainable=False)
    pop_var = tf.Variable(tf.ones([inputs.get_shape()[-1]]), trainable=False)
    if is_training:
        if is_conv_out:
            batch_mean, batch_var = tf.nn.moments(inputs, [0, 1, 2])
        else:
            batch_mean, batch_var = tf.nn.moments(inputs, [0])
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))
        with tf.control_dependencies([train_mean, train_var]):
            return tf.nn.batch_normalization(inputs, batch_mean, batch_var, beta, scale, 0.001)
    else:
        return tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)


class vgg16:
    ... ...

    def conv(self, name, input_data, out_channel):
        ... ...
        with tf.variable_scope(name):
            ... ...
            res = tf.nn.bias_add(conv_res, biases)
            res = batch_norm(res, True)
            out = tf.nn.relu(res, name=name)
        self.parameters += [kernel, biases]
        return out

    def fc(self, name, input_data, out_channel, is_output=False):
        ... ...
        with tf.variable_scope(name):
            ... ...
            out = tf.nn.bias_add(res, biases)
            if is_output is False:
                out = batch_norm(out, True, False)
                out = tf.nn.relu(out, name=name)
        self.parameters += [weights, biases]
        return out

With the rest of the code unchanged, the training and test output is:
step 1, loss 2.2608318, testing accuracy 0.9890825
step 2, loss 1.5252224, testing accuracy 0.9931891
… …
step 49, loss 1.1548673, testing accuracy 0.99348956
step 50, loss 1.1780967, testing accuracy 0.9938902
end
As the output shows, compared with the earlier runs without BN, adding BN effectively prevents the gradient explosion (the loss no longer goes to NaN).
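
As a side note, TensorFlow 1.x also ships a built-in layer that can replace the hand-rolled batch_norm above; a minimal sketch (my assumption, not part of vgg16_3.py) looks like this:

    import tensorflow as tf

    # Sketch: built-in batch normalization. Its moving-average updates live in the
    # UPDATE_OPS collection and must run together with the training step.
    is_training = tf.placeholder(tf.bool)
    x = tf.placeholder(tf.float32, [None, 28, 28, 64])

    h = tf.layers.batch_normalization(x, training=is_training)
    dummy_loss = tf.reduce_mean(tf.square(h))  # placeholder loss, just to build a train op

    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.GradientDescentOptimizer(1e-4).minimize(dummy_loss)

Feeding the training flag as True during training and False during the test loop would also address the fact that batch_norm above is always called with is_training=True.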

Optimizing with weight decay

What is weight decay? See reference [4].
TODO: how to apply weight decay in VGG.
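
Until that TODO is filled in, here is a minimal sketch of one common way to do it on top of vgg16_1.py (my assumption, not code from the handml repo): add an L2 penalty over the convolution and fully connected weights to the loss.

    # Sketch: L2 weight decay added to the existing cross-entropy loss.
    # weight_decay is a hypothetical hyperparameter; vgg.parameters and
    # cross_entropy are the names defined in vgg16_1.py above.
    weight_decay = 5e-4
    l2_loss = tf.add_n([tf.nn.l2_loss(p) for p in vgg.parameters if 'weights' in p.name])
    loss = tf.reduce_mean(cross_entropy) + weight_decay * l2_loss

Everything else, including the train op built from loss, stays the same.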

References

[1] 王晓华. TensorFlow深度学习应用实践
[2] My handml repository
[3] [Deep Learning Notes 2.2] AlexNet
[4] [Deep Learning Notes 3.1] Weight Decay
[5] [Deep Learning Notes 3.2] Dropout
