TensorFlow convolutional neural network: LeNet-5 model for MNIST handwritten digit recognition

This post walks through a TensorFlow implementation of the LeNet-5 convolutional neural network for MNIST handwritten digit recognition. Hopefully it is a useful reference for developers working on similar problems.

A practice exercise by tz_zs


The example comes from the book 《TensorFlow实战Google深度学习框架》 (TensorFlow: Practical Google Deep Learning Framework).
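For orientation, the network below follows the LeNet-5 layout: the 28×28×1 input passes through a 5×5 convolution with 32 filters (SAME padding, output 28×28×32), a 2×2 max pool (14×14×32), a second 5×5 convolution with 64 filters (14×14×64), another 2×2 max pool (7×7×64), is flattened to 7×7×64 = 3136 nodes, and finally runs through a 512-node fully connected layer (with dropout during training) into the 10-way output layer. The implementation is split across three files: mnist_inference.py defines the forward pass, mnist_train.py trains and checkpoints the model, and mnist_eval.py periodically evaluates the latest checkpoint.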

cnn/mnist_inference.py

# -*- coding: utf-8 -*-
"""
@author: tz_zs
Convolutional neural network: mnist_inference.py
"""
import tensorflow as tf

# Parameters describing the network structure
INPUT_NODE = 784
OUTPUT_NODE = 10

IMAGE_SIZE = 28
NUM_CHANNELS = 1
NUM_LABELS = 10

# Size and depth of the first convolutional layer
CONV1_DEEP = 32
CONV1_SIZE = 5

# Size and depth of the second convolutional layer
CONV2_DEEP = 64
CONV2_SIZE = 5

# Number of nodes in the fully connected layer
FC_SIZE = 512


def inference(input_tensor, train, regularizer):
    with tf.variable_scope('layer1-conv1'):
        conv1_weights = tf.get_variable(
            "weight", [CONV1_SIZE, CONV1_SIZE, NUM_CHANNELS, CONV1_DEEP],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv1_biases = tf.get_variable(
            "bias", [CONV1_DEEP], initializer=tf.constant_initializer(0.0))
        # print(conv1_weights)  # <tf.Variable 'layer1-conv1/weight:0' shape=(5, 5, 1, 32) dtype=float32_ref>
        # print(conv1_biases)   # <tf.Variable 'layer1-conv1/bias:0' shape=(32,) dtype=float32_ref>
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
        # print(conv1)  # Tensor("layer1-conv1/Conv2D:0", shape=(100, 28, 28, 32), dtype=float32)
        # print(relu1)  # Tensor("layer1-conv1/Relu:0", shape=(100, 28, 28, 32), dtype=float32)

    with tf.name_scope('layer2-pool1'):
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
        # print(pool1)  # Tensor("layer2-pool1/MaxPool:0", shape=(100, 14, 14, 32), dtype=float32)

    with tf.variable_scope('layer3-conv2'):
        conv2_weights = tf.get_variable(
            "weight", [CONV2_SIZE, CONV2_SIZE, CONV1_DEEP, CONV2_DEEP],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable(
            "bias", [CONV2_DEEP], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))

    with tf.name_scope('layer4-pool2'):
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    # Flatten the pooled feature maps into a vector for the fully connected layers
    pool_shape = pool2.get_shape().as_list()
    # print(pool_shape)  # [100, 7, 7, 64] during training, [5000, 7, 7, 64] during evaluation
    nodes = pool_shape[1] * pool_shape[2] * pool_shape[3]
    reshaped = tf.reshape(pool2, [pool_shape[0], nodes])

    with tf.variable_scope('layer5-fc1'):
        fc1_weights = tf.get_variable(
            "weight", [nodes, FC_SIZE],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        # Only the fully connected weights are regularized
        if regularizer is not None:
            tf.add_to_collection('losses', regularizer(fc1_weights))
        fc1_biases = tf.get_variable("bias", [FC_SIZE], initializer=tf.constant_initializer(0.1))
        fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases)
        # Dropout is only applied during training
        if train:
            fc1 = tf.nn.dropout(fc1, 0.5)

    with tf.variable_scope('layer6-fc2'):
        fc2_weights = tf.get_variable(
            "weight", [FC_SIZE, NUM_LABELS],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None:
            tf.add_to_collection('losses', regularizer(fc2_weights))
        fc2_biases = tf.get_variable("bias", [NUM_LABELS], initializer=tf.constant_initializer(0.1))
        logit = tf.matmul(fc1, fc2_weights) + fc2_biases

    return logit
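
Before wiring up the training script, it can help to sanity-check the forward pass in isolation. The snippet below is a minimal sketch (not part of the original article) that builds the graph for a hypothetical batch of 8 random images and confirms the logits come out with shape (8, 10):

# -*- coding: utf-8 -*-
# Minimal shape check for inference() -- an illustrative sketch, not part of
# the original article. Assumes cnn/mnist_inference.py as defined above.
import numpy as np
import tensorflow as tf
import cnn.mnist_inference as mnist_inference

with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, [8,
                                    mnist_inference.IMAGE_SIZE,
                                    mnist_inference.IMAGE_SIZE,
                                    mnist_inference.NUM_CHANNELS])
    # train=False disables dropout; regularizer=None skips the L2 terms
    logits = mnist_inference.inference(x, False, None)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch = np.random.rand(8, 28, 28, 1).astype(np.float32)
        print(sess.run(logits, feed_dict={x: batch}).shape)  # (8, 10)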


cnn/mnist_train.py

# -*- coding: utf-8 -*-
"""
@author: tz_zs
Convolutional neural network: mnist_train.py
"""
import tensorflow as tf
import os
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
# Load the inference definition
import cnn.mnist_inference as mnist_inference

# Network configuration parameters
BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 30000
MOVING_AVERAGE_DECAY = 0.99

# Path and file name for saving the model
MODEL_SAVE_PATH = "/path/to/model/cnn/"
MODEL_NAME = "model.ckpt"


def train(mnist):
    # Define the input/output placeholders
    # x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name='x-input')
    x = tf.placeholder(tf.float32, [BATCH_SIZE,
                                    mnist_inference.IMAGE_SIZE,
                                    mnist_inference.IMAGE_SIZE,
                                    mnist_inference.NUM_CHANNELS],
                       name='x-input')
    y_ = tf.placeholder(tf.float32, [BATCH_SIZE, mnist_inference.OUTPUT_NODE], name='y-input')

    # L2 regularization
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    # Forward propagation
    y = mnist_inference.inference(x, True, regularizer)
    global_step = tf.Variable(0, trainable=False)

    # Moving averages of the trainable variables
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variable_averages_op = variable_averages.apply(tf.trainable_variables())
    # print(tf.trainable_variables())
    # [<tf.Variable 'layer1-conv1/weight:0' shape=(5, 5, 1, 32) dtype=float32_ref>,
    #  <tf.Variable 'layer1-conv1/bias:0' shape=(32,) dtype=float32_ref>,
    #  <tf.Variable 'layer3-conv2/weight:0' shape=(5, 5, 32, 64) dtype=float32_ref>,
    #  <tf.Variable 'layer3-conv2/bias:0' shape=(64,) dtype=float32_ref>,
    #  <tf.Variable 'layer5-fc1/weight:0' shape=(3136, 512) dtype=float32_ref>,
    #  <tf.Variable 'layer5-fc1/bias:0' shape=(512,) dtype=float32_ref>,
    #  <tf.Variable 'layer6-fc2/weight:0' shape=(512, 10) dtype=float32_ref>,
    #  <tf.Variable 'layer6-fc2/bias:0' shape=(10,) dtype=float32_ref>]

    # Loss: cross entropy plus the regularization terms collected in 'losses'
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(y_, 1), logits=y)
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
    # print(tf.get_collection('losses'))
    # [<tf.Tensor 'layer5-fc1/l2_regularizer:0' shape=() dtype=float32>,
    #  <tf.Tensor 'layer6-fc2/l2_regularizer:0' shape=() dtype=float32>]

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE,
                                               global_step,
                                               mnist.train.num_examples / BATCH_SIZE,
                                               LEARNING_RATE_DECAY)
    # Optimizer
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
    with tf.control_dependencies([train_step, variable_averages_op]):
        train_op = tf.no_op(name="train")

    # Persistence
    saver = tf.train.Saver()
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    with tf.Session(config=config) as sess:
        tf.global_variables_initializer().run()
        for i in range(TRAINING_STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            # Reshape the flat batch into a four-dimensional tensor
            reshaped_xs = np.reshape(xs, [BATCH_SIZE,
                                          mnist_inference.IMAGE_SIZE,
                                          mnist_inference.IMAGE_SIZE,
                                          mnist_inference.NUM_CHANNELS])
            # Run one training step
            _, loss_value, step = sess.run([train_op, loss, global_step],
                                           feed_dict={x: reshaped_xs, y_: ys})
            # Save the model every 1000 steps
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step)


def main(argv=None):
    mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
    train(mnist)


if __name__ == '__main__':
    tf.app.run()

'''
Training output:
After 1 training step(s), loss on training batch is 6.73231.
After 1001 training step(s), loss on training batch is 0.730202.
After 2001 training step(s), loss on training batch is 0.644094.
After 3001 training step(s), loss on training batch is 0.640496.
After 4001 training step(s), loss on training batch is 0.634515.
After 5001 training step(s), loss on training batch is 0.64231.
After 6001 training step(s), loss on training batch is 0.581734.
After 7001 training step(s), loss on training batch is 0.590254.
After 8001 training step(s), loss on training batch is 0.546791.
After 9001 training step(s), loss on training batch is 0.553352.
After 10001 training step(s), loss on training batch is 0.526924.
After 11001 training step(s), loss on training batch is 0.516263.
After 12001 training step(s), loss on training batch is 0.510524.
After 13001 training step(s), loss on training batch is 0.530617.
After 14001 training step(s), loss on training batch is 0.500552.
After 15001 training step(s), loss on training batch is 0.49316.
After 16001 training step(s), loss on training batch is 0.478148.
After 17001 training step(s), loss on training batch is 0.470733.
After 18001 training step(s), loss on training batch is 0.471833.
After 19001 training step(s), loss on training batch is 0.456701.
After 20001 training step(s), loss on training batch is 0.451218.
After 21001 training step(s), loss on training batch is 0.446669.
After 22001 training step(s), loss on training batch is 0.440087.
After 23001 training step(s), loss on training batch is 0.43465.
After 24001 training step(s), loss on training batch is 0.428076.
After 25001 training step(s), loss on training batch is 0.42475.
After 26001 training step(s), loss on training batch is 0.416584.
After 27001 training step(s), loss on training batch is 0.428798.
After 28001 training step(s), loss on training batch is 0.406561.
After 29001 training step(s), loss on training batch is 0.404045.
'''
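
A side note on the hyper-parameters above: with LEARNING_RATE_BASE = 0.1, LEARNING_RATE_DECAY = 0.99, and decay_steps = mnist.train.num_examples / BATCH_SIZE = 55000 / 100 = 550 (assuming MNIST's standard 55000-example training split), tf.train.exponential_decay shrinks the learning rate smoothly as training proceeds. A small sketch of that arithmetic in plain Python, with approximate values:

# Decayed learning rate after `step` batches, mirroring tf.train.exponential_decay
# with the default staircase=False: base * decay ** (step / decay_steps).
# Illustrative only; 550.0 = 55000 training examples / batch size 100.
def decayed_lr(step, base=0.1, decay=0.99, decay_steps=550.0):
    return base * decay ** (step / decay_steps)

for step in (0, 1000, 10000, 30000):
    print(step, decayed_lr(step))
# Roughly: 0.1 at step 0, ~0.098 at 1000, ~0.083 at 10000, ~0.058 at 30000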

cnn/mnist_eval.py

# -*- coding: utf-8 -*-
"""
@author: tz_zs
Convolutional neural network evaluation program: mnist_eval.py
"""
import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np

import cnn.mnist_inference
import cnn.mnist_train

# Load the latest model every ten seconds and evaluate its accuracy on the validation data
EVAL_INTERVAL_SECS = 10


def evaluate(mnist):
    with tf.Graph().as_default() as g:
        # x = tf.placeholder(tf.float32, [None, mnist_inference.INPUT_NODE], name="x-input")
        x = tf.placeholder(tf.float32, [mnist.validation.num_examples,
                                        cnn.mnist_inference.IMAGE_SIZE,
                                        cnn.mnist_inference.IMAGE_SIZE,
                                        cnn.mnist_inference.NUM_CHANNELS],
                           name='x-input')
        y_ = tf.placeholder(tf.float32, [mnist.validation.num_examples, cnn.mnist_inference.OUTPUT_NODE],
                            name="y-input")

        # Reshape the validation images into a four-dimensional tensor
        reshaped_xs = np.reshape(mnist.validation.images,
                                 [mnist.validation.num_examples,
                                  cnn.mnist_inference.IMAGE_SIZE,
                                  cnn.mnist_inference.IMAGE_SIZE,
                                  cnn.mnist_inference.NUM_CHANNELS])
        validate_feed = {x: reshaped_xs, y_: mnist.validation.labels}

        # Forward propagation (the regularization loss is not needed at test time)
        y = cnn.mnist_inference.inference(x, False, None)

        # Accuracy
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        # Restore the moving-average shadow values instead of the raw variables
        variable_averages = tf.train.ExponentialMovingAverage(cnn.mnist_train.MOVING_AVERAGE_DECAY)
        variables_to_restore = variable_averages.variables_to_restore()
        saver = tf.train.Saver(variables_to_restore)
        # print(variables_to_restore)
        # {'layer3-conv2/bias/ExponentialMovingAverage': <tf.Variable 'layer3-conv2/bias:0' shape=(64,) dtype=float32_ref>,
        #  'layer1-conv1/bias/ExponentialMovingAverage': <tf.Variable 'layer1-conv1/bias:0' shape=(32,) dtype=float32_ref>,
        #  'layer6-fc2/bias/ExponentialMovingAverage': <tf.Variable 'layer6-fc2/bias:0' shape=(10,) dtype=float32_ref>,
        #  'layer3-conv2/weight/ExponentialMovingAverage': <tf.Variable 'layer3-conv2/weight:0' shape=(5, 5, 32, 64) dtype=float32_ref>,
        #  'layer6-fc2/weight/ExponentialMovingAverage': <tf.Variable 'layer6-fc2/weight:0' shape=(512, 10) dtype=float32_ref>,
        #  'layer1-conv1/weight/ExponentialMovingAverage': <tf.Variable 'layer1-conv1/weight:0' shape=(5, 5, 1, 32) dtype=float32_ref>,
        #  'layer5-fc1/bias/ExponentialMovingAverage': <tf.Variable 'layer5-fc1/bias:0' shape=(512,) dtype=float32_ref>,
        #  'layer5-fc1/weight/ExponentialMovingAverage': <tf.Variable 'layer5-fc1/weight:0' shape=(3136, 512) dtype=float32_ref>}

        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        while True:
            with tf.Session(config=config) as sess:
                # Locate the latest checkpoint file
                ckpt = tf.train.get_checkpoint_state(cnn.mnist_train.MODEL_SAVE_PATH)
                # print(ckpt)
                # model_checkpoint_path: "/path/to/model/cnn/model.ckpt-4001"
                # all_model_checkpoint_paths: "/path/to/model/cnn/model.ckpt-1"
                # all_model_checkpoint_paths: "/path/to/model/cnn/model.ckpt-1001"
                # all_model_checkpoint_paths: "/path/to/model/cnn/model.ckpt-2001"
                # all_model_checkpoint_paths: "/path/to/model/cnn/model.ckpt-3001"
                # all_model_checkpoint_paths: "/path/to/model/cnn/model.ckpt-4001"
                if ckpt and ckpt.model_checkpoint_path:
                    # Restore the model
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    # Recover the training step count from the checkpoint file name
                    global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                    # Compute the validation accuracy
                    accuracy_score = sess.run(accuracy, feed_dict=validate_feed)
                    print("After %s training step(s), validation accuracy = %g" % (global_step, accuracy_score))
                else:
                    print("No checkpoint file found")
                    return
            time.sleep(EVAL_INTERVAL_SECS)


def main(argv=None):
    mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
    evaluate(mnist)


if __name__ == '__main__':
    tf.app.run()

'''
After 1 training step(s), validation accuracy = 0.047
After 1001 training step(s), validation accuracy = 0.9848
After 2001 training step(s), validation accuracy = 0.9894
After 3001 training step(s), validation accuracy = 0.9904
...
...
After 27001 training step(s), validation accuracy = 0.9932
After 28001 training step(s), validation accuracy = 0.9926
After 29001 training step(s), validation accuracy = 0.993
'''
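
Once training has produced checkpoints, the same restore pattern can be reused outside the evaluation loop. Below is a hypothetical sketch (the predict helper is not part of the original article) that classifies a single 28×28 image with the latest checkpoint, restoring the moving-average shadow values just as mnist_eval.py does:

# -*- coding: utf-8 -*-
# Hypothetical single-image prediction sketch; not part of the original article.
# Assumes the MODEL_SAVE_PATH checkpoints produced by cnn/mnist_train.py.
import numpy as np
import tensorflow as tf
import cnn.mnist_inference
import cnn.mnist_train


def predict(image_28x28):
    """Return the predicted digit for one 28x28 grayscale image."""
    with tf.Graph().as_default():
        x = tf.placeholder(tf.float32, [1, 28, 28, 1], name='x-input')
        logits = cnn.mnist_inference.inference(x, False, None)
        # Restore the moving-average shadow values, as in mnist_eval.py
        ema = tf.train.ExponentialMovingAverage(cnn.mnist_train.MOVING_AVERAGE_DECAY)
        saver = tf.train.Saver(ema.variables_to_restore())
        with tf.Session() as sess:
            ckpt = tf.train.get_checkpoint_state(cnn.mnist_train.MODEL_SAVE_PATH)
            if not (ckpt and ckpt.model_checkpoint_path):
                raise IOError("No checkpoint file found")
            saver.restore(sess, ckpt.model_checkpoint_path)
            scores = sess.run(logits, feed_dict={x: np.reshape(image_28x28, [1, 28, 28, 1])})
            return int(np.argmax(scores, axis=1)[0])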


That concludes this walkthrough of the LeNet-5 MNIST example in TensorFlow. Hopefully it serves as a useful reference.


