TensorFlow | Implementing MNIST Handwritten Digit Recognition with TensorFlow

2024-06-05 19:48


github:https://github.com/MichaelBeechan
CSDN:https://blog.csdn.net/u011344545

Code: https://github.com/MichaelBeechan/Learning_TensorFlow-Kaggle_MNIST — Forks and Stars welcome

Learning_TensorFlow-Kaggle_MNIST

A step-by-step introduction to TensorFlow and neural networks through a project: MNIST handwritten-digit recognition.

**TF_Variable: getting started with TensorFlow**

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: TensorFlow variable initialization
https://baike.baidu.com/item/TensorFlow/18828108?fr=aladdin
"""
import tensorflow as tf
import numpy as np

# Define variables
w = tf.Variable([[0.5, 1.0]])
x = tf.Variable([[2.0], [1.0]])
# Matrix multiplication
y = tf.matmul(w, x)
print(y)

# Random-number ops
norm = tf.random_normal([2, 3], mean=-1, stddev=4)
c = tf.constant([[1, 2], [3, 4], [5, 6]])
shuff = tf.random_shuffle(c)  # shuffle the rows of c

sess = tf.Session()
print(sess.run(norm))
print(sess.run(shuff))

# Convert NumPy data into a type TensorFlow can use
a = np.zeros((3, 3))
ta = tf.convert_to_tensor(a)
print(sess.run(ta))

# Create a variable and update it repeatedly in a loop
num = tf.Variable(0, name="count")
new_value = tf.add(num, 10)
op = tf.assign(num, new_value)
print(op)

# Initialize global variables
init_op = tf.global_variables_initializer()
# Run the session
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(num))
    for i in range(5):
        sess.run(op)
        print(sess.run(num))

# Set placeholder values via feed_dict:
# a placeholder is declared without a value and filled at run time
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
value_new = tf.multiply(input1, input2)
with tf.Session() as sess:
    print(sess.run(value_new, feed_dict={input1: 23.0, input2: 11.0}))
```

**Kaggle_mnist**

A single-layer neural network that uses softmax as the activation function, cross entropy as the loss function, and gradient descent as the optimizer.
Accuracy: around 88%
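The softmax activation and cross-entropy loss used here can be sketched in plain NumPy; this is an illustration of the math only, not part of the article's TensorFlow code:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(p, one_hot_label):
    # loss is -log of the probability assigned to the true class
    return -np.sum(one_hot_label * np.log(p))

logits = np.array([2.0, 1.0, 0.1])  # raw scores for 3 classes
label = np.array([1.0, 0.0, 0.0])   # one-hot: true class is 0
probs = softmax(logits)
loss = cross_entropy(probs, label)
print(probs)  # probabilities summing to 1, largest for class 0
print(loss)
```

Gradient descent then nudges the logits so the probability of the true class rises, which drives this loss toward zero.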

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: Kaggle MNIST handwritten-digit recognition (Digit Recognizer)
http://wiki.jikexueyuan.com/project/tensorflow-zh/tutorials/mnist_beginners.html
"""
"""
1. Prepare the data
2. Design the model
3. Implement the code
Each image is a 28*28 = 784 array, so the training and test data
become [42000, 784] and [28000, 784] arrays.
Model:
1) the simplest possible single-layer neural network
2) softmax as the activation function
3) cross entropy as the loss function
4) gradient descent as the optimizer
"""
# About 88% accuracy
import pandas as pd
import numpy as np
import tensorflow as tf

# Load the data
train = pd.read_csv("train.csv")
images = train.iloc[:, 1:].values
labels_flat = train.iloc[:, 0].values.ravel()

# Preprocess the inputs: scale pixel values to [0, 1]
images = images.astype(np.float)
images = np.multiply(images, 1.0 / 255.0)
print("Number of inputs: (%g, %g)" % images.shape)
images_size = images.shape[1]
images_width = images_height = np.ceil(np.sqrt(images_size)).astype(np.uint8)
print("Image width = {0}\nImage height = {1}".format(images_width, images_height))
x = tf.placeholder('float', shape=[None, images_size])

# Process the labels
labels_count = np.unique(labels_flat).shape[0]
print('Number of classes = {0}'.format(labels_count))
y = tf.placeholder('float', shape=[None, labels_count])

# One-hot encoding for the discrete class labels
# (scikit-learn ships a ready-made OneHotEncoder for the same purpose)
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    # .flat returns a 1-D iterator over the array
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
print('Label array shape: ({0[0]}, {0[1]})'.format(labels.shape))

# Split off a validation set
VALIDATION_SIZE = 2000
validation_images = images[:VALIDATION_SIZE]
validation_labels = labels[:VALIDATION_SIZE]
train_images = images[VALIDATION_SIZE:]
train_labels = labels[VALIDATION_SIZE:]
batch_size = 100
n_batch = len(train_images) // batch_size

# Build the single-layer network
weight = tf.Variable(tf.zeros([784, 10]))
biases = tf.Variable(tf.zeros([10]))
result = tf.matmul(x, weight) + biases
prediction = tf.nn.softmax(result)
# Cross-entropy loss, computed on the raw logits
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=result))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
init = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(50):
        for batch in range(n_batch):
            batch_x = train_images[batch * batch_size:(batch + 1) * batch_size]
            batch_y = train_labels[batch * batch_size:(batch + 1) * batch_size]
            sess.run(train_step, feed_dict={x: batch_x, y: batch_y})
        accuracy_n = sess.run(accuracy, feed_dict={x: validation_images, y: validation_labels})
        print("Epoch " + str(epoch + 1) + ", accuracy: " + str(accuracy_n))
```

**CNN_mnist**

A convolutional neural network: conv layer 1 + pooling layer 1 + conv layer 2 + pooling layer 2 + fully connected layer + dropout layer + output layer.
Accuracy: about 0.984 after 20 training epochs
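The spatial sizes flowing through this stack can be checked by hand. With 'SAME' padding, a stride-1 convolution keeps the size and each 2x2/stride-2 pool halves it (rounding up); a short sketch, separate from the article's code:

```python
import math

def same_conv_out(size, stride=1):
    # 'SAME' padding: output size is ceil(size / stride)
    return math.ceil(size / stride)

def max_pool_2x2_out(size):
    # 2x2 pooling with stride 2 halves each spatial dimension
    return math.ceil(size / 2)

s = 28
s = same_conv_out(s)     # conv1: 28 -> 28
s = max_pool_2x2_out(s)  # pool1: 28 -> 14
s = same_conv_out(s)     # conv2: 14 -> 14
s = max_pool_2x2_out(s)  # pool2: 14 -> 7
print(s)                 # 7
print(s * s * 64)        # 3136: the flattened size fed to the FC layer
```

This is why the fully connected layer's weight matrix is [7 * 7 * 64, 1024].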

```python
# -*- coding:utf-8 -*-
"""
Name: Michael Beechan
School: Chongqing University of Technology
Time: 2018.10.4
Description: MNIST Digit Recognizer with a CNN
https://www.zhihu.com/question/52668301
"""
# conv1 + pool1 + conv2 + pool2 + fully connected + dropout + output layer
import numpy as np
import tensorflow as tf
import pandas as pd

# Load the data
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Extract and normalize the pixel data (astype() converts the dtype)
x_train = train.iloc[:, 1:].values
x_train = x_train.astype(np.float)
x_train = np.multiply(x_train, 1.0 / 255.0)

# Get the image width and height
image_size = x_train.shape[1]
images_width = images_height = np.ceil(np.sqrt(image_size)).astype(np.uint8)
print('Sample shape: (%g, %g)' % x_train.shape)
print('Image dimension: {0}'.format(image_size))
print('Image width: {0}\nheight: {1}'.format(images_width, images_height))

# Get the labels
labels_flat = train.iloc[:, 0].values.ravel()
# np.unique removes duplicates from a 1-D array or list and returns the sorted unique values
labels_count = np.unique(labels_flat).shape[0]

# One-hot encoding
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
print('Labels: ({0[0]}, {0[1]})'.format(labels.shape))
print('Label example: [{0}] --> {1}'.format(25, labels[25]))

# Split the training data into train and validation sets
VALIDATION_SIZE = 2000
train_images = x_train[VALIDATION_SIZE:]
train_labels = labels[VALIDATION_SIZE:]
validation_images = x_train[:VALIDATION_SIZE]
validation_labels = labels[:VALIDATION_SIZE]

# Set the batch size and compute the total number of batches
batch_size = 100
n_batch = len(train_images) // batch_size

# Placeholders: x holds 784-pixel images, y holds 10-class labels
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Weight initializer: truncated normal distribution
# (values more than two standard deviations from the mean are redrawn)
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

# Bias initializer: small nonzero constant
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Wrap TensorFlow's 2D convolution
def conv2D(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Wrap TensorFlow's max-pooling layer
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# Reshape the input to a 4-D tensor: dims 2 and 3 are width and height, dim 4 is channels
x_image = tf.reshape(x, [-1, 28, 28, 1])

# First conv layer: 32 features from 3*3 patches
w_conv1 = weight_variable([3, 3, 1, 32])
b_conv1 = bias_variable([32])
# 28*28 images, stride-1 convolution, 2*2 max pooling:
# after the first pool [28/2, 28/2] = [14, 14]; after the second [14/2, 14/2] = [7, 7]
h_conv1 = tf.nn.relu(conv2D(x_image, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# Second conv layer: 64 features on top of the previous 32
w_conv2 = weight_variable([6, 6, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2D(h_pool1, w_conv2) + b_conv2)
# 2*2 max pooling --> [7, 7]
h_pool2 = max_pool_2x2(h_conv2)
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])

# Fully connected layer with 1024 neurons
w_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

# Dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Output layer: 1024 --> 10
w_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, w_fc2) + b_fc2

# Loss function: softmax cross entropy
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=y_conv))
# Optimizer
train_step_1 = tf.train.AdadeltaOptimizer(learning_rate=0.1).minimize(loss)

# Accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_conv, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Saver for checkpointing the model
global_step = tf.Variable(0, name='global_step', trainable=False)
saver = tf.train.Saver()

# Initialize variables
init = tf.global_variables_initializer()

# Train
with tf.Session() as sess:
    sess.run(init)
    # saver.restore(sess, "model.ckpt-12")
    # 20 epochs
    for epoch in range(20):
        for batch in range(n_batch):
            # fetch one batch of training data
            batch_x = train_images[batch * batch_size:(batch + 1) * batch_size]
            batch_y = train_labels[batch * batch_size:(batch + 1) * batch_size]
            # the key step: run the training op
            sess.run(train_step_1, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
        # evaluate on the validation set after each epoch
        accuracy_n = sess.run(accuracy,
                              feed_dict={x: validation_images, y: validation_labels, keep_prob: 1.0})
        print("The " + str(epoch + 1) + "th epoch, accuracy is " + str(accuracy_n))
        # save the trained model
        # global_step.assign(epoch).eval()
        # saver.save(sess, "model.ckpt", global_step=global_step)
```

The next improvement, to push the accuracy further: self-normalizing neural networks.
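The self-normalizing approach replaces ReLU with the SELU activation, whose constants are chosen so that activations keep roughly zero mean and unit variance across layers. A minimal NumPy sketch (an illustration only, not the planned implementation), using the alpha and lambda constants from Klambauer et al., "Self-Normalizing Neural Networks":

```python
import numpy as np

# SELU constants from the self-normalizing neural networks paper
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # scaled exponential linear unit:
    # SCALE * x for x > 0, SCALE * ALPHA * (exp(x) - 1) for x <= 0
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(selu(x))
```

TensorFlow 1.x also ships this activation as `tf.nn.selu`, so swapping it into the dense layers above is a one-line change.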




