【TensorFlow CNN】Building a CNN to Recognize MNIST Handwritten Digits

2024-09-07 06:38

This article walks through building a CNN in TensorFlow to recognize MNIST handwritten digits. We hope it serves as a useful reference for developers working on this kind of problem; follow along with the code below.

#coding:utf8
"""
Build a CNN and use it to recognize MNIST handwritten digits.

Architecture (all convolutions and pools use SAME padding, so a stride-1
convolution preserves the spatial size and each 2x2/stride-2 max pool
halves it, output size = ceil(n / stride)):

input x [-1,28,28,1]
 -> conv1 [5,5,1,32]               -> [-1,28,28,32]
 -> max_pool([2,2], strides=[2,2]) -> [-1,14,14,32]
 -> conv2 [5,5,32,64]              -> [-1,14,14,64]
 -> max_pool([2,2], strides=[2,2]) -> [-1,7,7,64]
 -> flatten                        -> [-1, 7*7*64]
 -> fully connected layer, 256 units (matmul with W_fc1 [7*7*64, 256])
 -> (optional dropout)
 -> softmax over 10 classes
"""
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import math

mnist = input_data.read_data_sets("./mnist/",one_hot=True)

print("Training set:",mnist.train.images.shape)
print("Training set labels:",mnist.train.labels.shape)
print("Dev Set(Cross Validation set):",mnist.validation.images.shape)
print("Dev Set labels:",mnist.validation.labels.shape)
print("Test Set:",mnist.test.images.shape)
print("Test Set labels:",mnist.test.labels.shape)x_train = mnist.train.images
y_train = mnist.train.labels
x_dev = mnist.validation.images
y_dev = mnist.validation.labels
x_test = mnist.test.images
y_test = mnist.test.labels


def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (28 * 28 = 784 for MNIST)
    n_y -- scalar, number of classes (digits 0 to 9, so 10)

    Returns:
    X -- placeholder for the data input, of shape [None, n_x] and dtype "float"
    Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"

    Tips:
    - None for the batch dimension keeps us flexible about the number of
      examples, which differs between training and test time.
    """
    X = tf.placeholder(tf.float32, [None, n_x], name="X")
    Y = tf.placeholder(tf.float32, [None, n_y], name="Y")
    return X, Y


def random_mini_batches(X, Y, mini_batch_size=64):
    """
    Creates a list of random minibatches from (X, Y).

    Arguments:
    X -- input data, of shape (number of examples, input size)
    Y -- one-hot label matrix, of shape (number of examples, number of classes)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y) tuples
    """
    m = X.shape[0]  # number of training examples (examples are rows here)
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[permutation, :]
    shuffled_Y = Y[permutation, :]

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)  # number of mini batches of size mini_batch_size in the partitioning
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[k * mini_batch_size:(k + 1) * mini_batch_size, :]
        mini_batch_Y = shuffled_Y[k * mini_batch_size:(k + 1) * mini_batch_size, :]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[mini_batch_size * num_complete_minibatches:, :]
        mini_batch_Y = shuffled_Y[mini_batch_size * num_complete_minibatches:, :]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches


def parameter_init(shape):
    return tf.Variable(tf.random_normal(shape))
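# Note: tf.random_normal samples from N(0, 1) by default; for deeper nets a
# smaller-scale initializer such as tf.truncated_normal(shape, stddev=0.1)
# is a common alternative and usually trains more stably.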
# Helper that builds a convolutional layer
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
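# tf.nn.conv2d assumes NHWC layout here: strides=[1, 1, 1, 1] is a stride of
# 1 over batch, height, width and channels, and padding='SAME' zero-pads so
# the output spatial size is ceil(n / stride) -- 28x28 stays 28x28 at stride
# 1, while each 2x2/stride-2 pool below maps 28 -> 14 -> 7.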
# Helper that builds a max-pooling layer
def max_pool(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')


def init_parameters():
    parameters = {}
    parameters["W_conv1"] = parameter_init([5, 5, 1, 32])
    parameters["b_conv1"] = parameter_init([32])
    parameters["W_conv2"] = parameter_init([5, 5, 32, 64])
    parameters["b_conv2"] = parameter_init([64])
    parameters["W_fc1"] = parameter_init([7 * 7 * 64, 256])
    parameters["b_fc1"] = parameter_init([256])
    parameters["W_out"] = parameter_init([256, 10])
    parameters["b_out"] = parameter_init([10])
    return parameters


def forward_propagation(x, parameters):
    x = tf.reshape(x, [-1, 28, 28, 1])

    W_conv1 = parameters["W_conv1"]
    b_conv1 = parameters["b_conv1"]
    h_conv1 = tf.nn.relu(conv2d(x, W_conv1) + b_conv1)        # first convolutional layer
    h_pool1 = max_pool(h_conv1)                               # first pooling layer

    W_conv2 = parameters["W_conv2"]
    b_conv2 = parameters["b_conv2"]
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)  # second convolutional layer
    h_pool2 = max_pool(h_conv2)                               # second pooling layer

    W_fc1 = parameters["W_fc1"]                               # first fully connected layer, 256 units
    b_fc1 = parameters["b_fc1"]
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])      # flatten the second pooling layer to a 1-D vector
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
    #keep_prob = tf.placeholder("float")
    #h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)             # dropout layer (disabled)

    W_out = parameters["W_out"]
    b_out = parameters["b_out"]
    Z = tf.matmul(h_fc1, W_out) + b_out
    return Z


def compute_cost(Z, Y):
    """
    Computes the cost.

    Arguments:
    Z -- output of forward propagation (output of the last LINEAR unit),
         of shape (number of examples, 10)
    Y -- "true" labels vector placeholder, same shape as Z

    Returns:
    cost -- Tensor of the cost function
    """
    # tf.nn.softmax_cross_entropy_with_logits(...) expects unscaled logits
    logits = Z
    labels = Y
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    return cost


def tf_cnn_model(X_train, Y_train, X_test, Y_test, learning_rate=0.001,
                 num_epochs=100, minibatch_size=64, print_cost=True):
    (m, n_x) = X_train.shape  # m: number of examples in the train set, n_x: input size
    n_y = Y_train.shape[1]    # n_y: output size
    costs = []                # to keep track of the cost

    X = tf.placeholder(tf.float32, [None, n_x], name="X")
    Y = tf.placeholder(tf.float32, [None, n_y], name="Y")

    parameters = init_parameters()
    Z = forward_propagation(X, parameters)
    cost = compute_cost(Z, Y)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    init = tf.global_variables_initializer()
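    # Everything above only defines the computation graph; nothing executes
    # until a Session runs it, so the loop below feeds minibatches through
    # feed_dict and evaluates the optimizer and cost ops each step.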
    with tf.Session() as sess:
        sess.run(init)

        for epoch in range(num_epochs):
            epoch_cost = 0.  # defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size)
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size)

            for minibatch in minibatches:
                (minibatch_X, minibatch_Y) = minibatch
                _, minibatch_cost = sess.run([optimizer, cost],
                                             feed_dict={X: minibatch_X, Y: minibatch_Y})
                epoch_cost += minibatch_cost / num_minibatches

            #if print_cost == True and epoch % 10 == 0:
            print("Cost after epoch %i: %f" % (epoch, epoch_cost))
            #if print_cost == True and epoch % 5 == 0:
            costs.append(epoch_cost)

        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('epochs')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        parameters = sess.run(parameters)
        print("Parameters have been trained!")

        correct_prediction = tf.equal(tf.argmax(Z, 1), tf.argmax(Y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

    """
    # Training cycle (alternative, using the dataset's built-in batching):
    for epoch in range(num_epochs):
        epoch_cost = 0.
        total_batch = int(mnist.train.num_examples / minibatch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(minibatch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, minibatch_cost = sess.run([optimizer, cost],
                                         feed_dict={X: batch_x, Y: batch_y})
            # Compute average loss
            epoch_cost += minibatch_cost / total_batch
        if print_cost == True and epoch % 10 == 0:
            print("Cost after epoch %i: %f" % (epoch, epoch_cost))
        if print_cost == True and epoch % 5 == 0:
            costs.append(epoch_cost)

    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per tens)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    parameters = sess.run(parameters)
    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(Z, 1), tf.argmax(Y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
    """

    return parameters


tf_cnn_model(x_train, y_train, x_test, y_test, learning_rate=0.001,
             num_epochs=40, minibatch_size=64, print_cost=True)
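Note: the listing above targets TensorFlow 1.x; the tensorflow.examples.tutorials.mnist helper was removed in TensorFlow 2.x. As a minimal sketch (assuming TF 2.x with eager execution disabled, so the graph-and-Session code above can still run through the tf.compat.v1 API), the data-loading step could be replaced like this:

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # allow the graph-and-Session style above

# Load MNIST via tf.keras instead of the removed input_data helper
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()

# Match the format of read_data_sets(..., one_hot=True):
# flatten to [m, 784], scale pixels to [0, 1], one-hot encode the labels
x_train = x_tr.reshape(-1, 784).astype(np.float32) / 255.0
x_test = x_te.reshape(-1, 784).astype(np.float32) / 255.0
y_train = np.eye(10, dtype=np.float32)[y_tr]
y_test = np.eye(10, dtype=np.float32)[y_te]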

This concludes the article on building a CNN in TensorFlow to recognize MNIST handwritten digits. We hope it proves helpful to fellow developers!


