How to Train a TensorFlow Model into a pb File (Part 2): Reading from TFRecord

2024-06-08 22:32

This article describes how to train a model with TensorFlow and export it as a pb file (Part 2), this time reading the input data from TFRecord files. Hopefully it serves as a useful reference for developers working on the same problem.

Introduction

The previous post covered reading raw image files; this one covers reading from TFRecord. TFRecord is the data format TensorFlow provides for efficient input pipelines. How to create TFRecord files is not covered here — there are plenty of tutorials online, so it is assumed you already know how to build them.
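That said, for orientation only, here is a rough sketch (not taken from this series) of how a TFRecord file with the layout expected by the reader used below could be written: each example carries an 'image_raw' bytes feature and an int64 'label' feature. The function name, image list, and paths are placeholders, not values from the article.

# Hedged sketch: write a TFRecord whose features match what read_and_decode2stand() parses.
# image_paths, labels and tfrecord_path are placeholders, not values from the article.
import numpy as np
import tensorflow as tf
from PIL import Image

def convert_to_tfrecord(image_paths, labels, tfrecord_path, height=150, width=200):
    writer = tf.python_io.TFRecordWriter(tfrecord_path)
    for path, label in zip(image_paths, labels):
        img = Image.open(path).convert('RGB').resize((width, height))  # PIL resize takes (width, height)
        image_raw = np.asarray(img, dtype=np.uint8).tobytes()          # raw bytes, later decoded with tf.decode_raw
        example = tf.train.Example(features=tf.train.Features(feature={
            'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[int(label)])),
            'image_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_raw])),
        }))
        writer.write(example.SerializeToString())
    writer.close()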

Training

Define the network structure. It is similar to the previous post (the input placeholder is still created with name="input", and so on), so the details are not repeated. The difference is the extra call inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv2_2): each convolution output now goes through batch normalization. This turned out to work very well — training converged much faster, whereas at first the accuracy hovered around 0.5, no better than random guessing.

def build_network(height, width, channel):
    x = tf.placeholder(tf.float32, shape=[None, height, width, channel], name="input")
    y = tf.placeholder(tf.int32, shape=[None, n_classes], name="labels_placeholder")

    def weight_variable(shape, name="weights"):
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial, name=name)

    def bias_variable(shape, name="biases"):
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial, name=name)

    def conv2d(input, w):
        return tf.nn.conv2d(input, w, [1, 1, 1, 1], padding='SAME')

    def pool_max(input):
        return tf.nn.max_pool(input,
                              ksize=[1, 3, 3, 1],
                              strides=[1, 2, 2, 1],
                              padding='SAME',
                              name='pool1')

    def fc(input, w, b):
        return tf.matmul(input, w) + b

    # conv1
    with tf.name_scope('conv1_1') as scope:
        kernel = weight_variable([3, 3, Channels, 64])
        biases = bias_variable([64])
        conv1_1 = tf.nn.bias_add(conv2d(x, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv1_1)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv1_1 = tf.nn.relu(conv_batch_norm, name=scope)

    with tf.name_scope('conv1_2') as scope:
        kernel = weight_variable([3, 3, 64, 64])
        biases = bias_variable([64])
        conv1_2 = tf.nn.bias_add(conv2d(output_conv1_1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv1_2)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv1_2 = tf.nn.relu(conv_batch_norm, name=scope)

    pool1 = pool_max(output_conv1_2)

    # conv2
    with tf.name_scope('conv2_1') as scope:
        kernel = weight_variable([3, 3, 64, 128])
        biases = bias_variable([128])
        conv2_1 = tf.nn.bias_add(conv2d(pool1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv2_1)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv2_1 = tf.nn.relu(conv_batch_norm, name=scope)

    with tf.name_scope('conv2_2') as scope:
        kernel = weight_variable([3, 3, 128, 128])
        biases = bias_variable([128])
        conv2_2 = tf.nn.bias_add(conv2d(output_conv2_1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv2_2)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv2_2 = tf.nn.relu(conv_batch_norm, name=scope)

    pool2 = pool_max(output_conv2_2)

    # conv3
    with tf.name_scope('conv3_1') as scope:
        kernel = weight_variable([3, 3, 128, 256])
        biases = bias_variable([256])
        conv3_1 = tf.nn.bias_add(conv2d(pool2, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv3_1)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv3_1 = tf.nn.relu(conv_batch_norm, name=scope)

    with tf.name_scope('conv3_2') as scope:
        kernel = weight_variable([3, 3, 256, 256])
        biases = bias_variable([256])
        conv3_2 = tf.nn.bias_add(conv2d(output_conv3_1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv3_2)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv3_2 = tf.nn.relu(conv_batch_norm, name=scope)

    #     with tf.name_scope('conv3_3') as scope:
    #         kernel = weight_variable([3, 3, 256, 256])
    #         biases = bias_variable([256])
    #         output_conv3_3 = tf.nn.relu(conv2d(output_conv3_2, kernel) + biases, name=scope)

    pool3 = pool_max(output_conv3_2)

    '''
    # conv4
    with tf.name_scope('conv4_1') as scope:
        kernel = weight_variable([3, 3, 256, 512])
        biases = bias_variable([512])
        output_conv4_1 = tf.nn.relu(conv2d(pool3, kernel) + biases, name=scope)

    with tf.name_scope('conv4_2') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv4_2 = tf.nn.relu(conv2d(output_conv4_1, kernel) + biases, name=scope)

    with tf.name_scope('conv4_3') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv4_3 = tf.nn.relu(conv2d(output_conv4_2, kernel) + biases, name=scope)

    pool4 = pool_max(output_conv4_3)

    # conv5
    with tf.name_scope('conv5_1') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv5_1 = tf.nn.relu(conv2d(pool4, kernel) + biases, name=scope)

    with tf.name_scope('conv5_2') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv5_2 = tf.nn.relu(conv2d(output_conv5_1, kernel) + biases, name=scope)

    with tf.name_scope('conv5_3') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv5_3 = tf.nn.relu(conv2d(output_conv5_2, kernel) + biases, name=scope)

    pool5 = pool_max(output_conv5_3)
    '''

    # fc6
    with tf.name_scope('fc6') as scope:
        shape = int(np.prod(pool3.get_shape()[1:]))
        kernel = weight_variable([shape, 120])
        # kernel = weight_variable([shape, 4096])
        # biases = bias_variable([4096])
        biases = bias_variable([120])
        pool5_flat = tf.reshape(pool3, [-1, shape])
        output_fc6 = tf.nn.relu(fc(pool5_flat, kernel, biases), name=scope)

    # fc7
    with tf.name_scope('fc7') as scope:
        # kernel = weight_variable([4096, 4096])
        # biases = bias_variable([4096])
        kernel = weight_variable([120, 100])
        biases = bias_variable([100])
        output_fc7 = tf.nn.relu(fc(output_fc6, kernel, biases), name=scope)

    # fc8
    with tf.name_scope('fc8') as scope:
        # kernel = weight_variable([4096, n_classes])
        kernel = weight_variable([100, n_classes])
        biases = bias_variable([n_classes])
        output_fc8 = tf.nn.relu(fc(output_fc7, kernel, biases), name=scope)

    finaloutput = tf.nn.softmax(output_fc8, name="softmax")

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=finaloutput, labels=y)) * 1000
    optimize = tf.train.AdamOptimizer(lr).minimize(cost)

    prediction_labels = tf.argmax(finaloutput, axis=1, name="output")
    read_labels = tf.argmax(y, axis=1)
    correct_prediction = tf.equal(prediction_labels, read_labels)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    correct_times_in_batch = tf.reduce_sum(tf.cast(correct_prediction, tf.int32))

    return dict(
        x=x,
        y=y,
        lr=lr,
        optimize=optimize,
        correct_prediction=correct_prediction,
        correct_times_in_batch=correct_times_in_batch,
        cost=cost,
        accuracy=accuracy,
    )

my_batch_norm, the batch normalization helper used for the conv layers, is defined as follows:

def my_batch_norm(inputs):
    scale = tf.Variable(tf.ones([inputs.get_shape()[-1]]), dtype=tf.float32)
    beta = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), dtype=tf.float32)
    batch_mean = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), trainable=False)
    batch_var = tf.Variable(tf.ones([inputs.get_shape()[-1]]), trainable=False)
    batch_mean, batch_var = tf.nn.moments(inputs, [0, 1, 2])
    return inputs, batch_mean, batch_var, beta, scale
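Note that my_batch_norm simply returns the moments of the current batch (the pre-created batch_mean/batch_var variables are immediately overwritten), so at inference time whatever is fed gets normalized with its own statistics. A common alternative — shown here only as a hedged sketch, not as the article's implementation — also tracks moving averages of the population statistics so that inference can use them instead:

# Alternative sketch (not from the original article): batch norm that also maintains
# moving averages of the population mean/variance for use at inference time.
# is_training is assumed to be a plain Python bool here.
def batch_norm_with_moving_average(inputs, is_training, decay=0.99, epsilon=0.001):
    depth = inputs.get_shape()[-1]
    scale = tf.Variable(tf.ones([depth]))
    beta = tf.Variable(tf.zeros([depth]))
    pop_mean = tf.Variable(tf.zeros([depth]), trainable=False)
    pop_var = tf.Variable(tf.ones([depth]), trainable=False)
    if is_training:
        batch_mean, batch_var = tf.nn.moments(inputs, [0, 1, 2])
        update_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        update_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))
        with tf.control_dependencies([update_mean, update_var]):
            return tf.nn.batch_normalization(inputs, batch_mean, batch_var, beta, scale, epsilon)
    return tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, epsilon)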

Train the network. As the code below shows, training and validation loss/accuracy are reported every epoch_delta steps, and the model is exported as a pb file every 500 steps:

def train_network(graph, batch_size, num_epochs, pb_file_path):
    tra_image_batch, tra_label_batch = input_data.read_and_decode2stand(tfrecords_file=tra_data_dir,
                                                                        batch_size=batch_size)
    val_image_batch, val_label_batch = input_data.read_and_decode2stand(tfrecords_file=val_data_dir,
                                                                        batch_size=batch_size)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        epoch_delta = 100
        try:
            for epoch_index in range(num_epochs):
                learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-epoch_index / decay_speed)
                tra_images, tra_labels = sess.run([tra_image_batch, tra_label_batch])
                accuracy, mean_cost_in_batch, return_correct_times_in_batch, _ = sess.run(
                    [graph['accuracy'], graph['cost'], graph['correct_times_in_batch'], graph['optimize']],
                    feed_dict={
                        graph['x']: tra_images,
                        graph['lr']: learning_rate,
                        graph['y']: tra_labels
                    })
                if epoch_index % epoch_delta == 0:
                    # report accuracy and cost on the current training batch
                    print("index[%s]".center(50, '-') % epoch_index)
                    print("Train: cost_in_batch:{},correct_in_batch:{},accuracy:{}".format(
                        mean_cost_in_batch, return_correct_times_in_batch, accuracy))
                    # report accuracy and cost on a validation batch
                    val_images, val_labels = sess.run([val_image_batch, val_label_batch])
                    mean_cost_in_batch, return_correct_times_in_batch = sess.run(
                        [graph['cost'], graph['correct_times_in_batch']],
                        feed_dict={
                            graph['x']: val_images,
                            graph['y']: val_labels
                        })
                    print("***Val: cost_in_batch:{},correct_in_batch:{},accuracy:{}".format(
                        mean_cost_in_batch, return_correct_times_in_batch,
                        return_correct_times_in_batch / batch_size))
                if epoch_index % 500 == 0:
                    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ["output"])
                    with tf.gfile.FastGFile(pb_file_path, mode='wb') as f:
                        f.write(constant_graph.SerializeToString())
        except tf.errors.OutOfRangeError:
            print('Done training -- epoch limit reached')
        finally:
            coord.request_stop()
        coord.join(threads)
        sess.close()
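The loop above also anneals the learning rate between max_learning_rate and min_learning_rate with an exponential schedule. With the constants used in the full script further below (max 0.002, min 0.0001, decay_speed 2000), the decay looks roughly like this:

# Illustration of the learning-rate schedule used above, with the article's constants.
import math

max_learning_rate = 0.002
min_learning_rate = 0.0001
decay_speed = 2000.0

for step in [0, 500, 2000, 5000]:
    lr_value = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-step / decay_speed)
    print(step, round(lr_value, 6))
# prints roughly: 0 -> 0.002, 500 -> 0.00158, 2000 -> 0.0008, 5000 -> 0.000256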

Note:
tra_images is a 4D tensor:

image_batch: 4D tensor - [batch_size, height, width, channel],

tra_labels is a 2D tensor:

label_batch: 2D tensor - [batch_size, n_classes]

input_data.py:

def read_and_decode2stand(tfrecords_file, batch_size):
    '''read and decode tfrecord file, generate (image, label) batches
    Args:
        tfrecords_file: the directory of tfrecord file
        batch_size: number of images in each batch
    Returns:
        image_batch: 4D tensor - [batch_size, height, width, channel]
        label_batch: 2D tensor - [batch_size, n_classes]
    '''
    # make an input queue from the tfrecord file
    filename_queue = tf.train.string_input_producer([tfrecords_file])
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    img_features = tf.parse_single_example(
        serialized_example,
        features={
            'label': tf.FixedLenFeature([], tf.int64),
            'image_raw': tf.FixedLenFeature([], tf.string),
        })
    image = tf.decode_raw(img_features['image_raw'], tf.uint8)
    image = tf.reshape(image, [H, W, channels])
    image = tf.cast(image, tf.float32) * (1.0 / 255)
    image = tf.image.per_image_standardization(image)  # standardization
    # all the images of notMNIST are 200*150, you need to change the image size if you use other dataset.
    label = tf.cast(img_features['label'], tf.int32)
    image_batch, label_batch = tf.train.batch([image, label],
                                              batch_size=batch_size,
                                              num_threads=64,
                                              capacity=2000)
    # Change to ONE-HOT
    label_batch = tf.one_hot(label_batch, depth=n_classes)
    label_batch = tf.cast(label_batch, dtype=tf.int32)
    label_batch = tf.reshape(label_batch, [batch_size, n_classes])
    print(label_batch)
    return image_batch, label_batch
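If you want to confirm the reader really produces the 4D image batches and 2D one-hot label batches described above, a quick sanity check (an illustrative snippet, not part of the original code; the tfrecords path is a placeholder) is to pull one batch and print its shapes:

# Illustrative check (assumption): pull a single batch and inspect its shapes.
import tensorflow as tf
import input_data

BATCH_SIZE = 40
image_batch, label_batch = input_data.read_and_decode2stand('train.tfrecords', BATCH_SIZE)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    images, labels = sess.run([image_batch, label_batch])
    print(images.shape)  # expected: (40, 150, 200, 3)
    print(labels.shape)  # expected: (40, 2)
    coord.request_stop()
    coord.join(threads)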

Here is the complete training code:

# -*- coding: utf-8 -*-
"""
Spyder Editor

This is a temporary script file.
"""
import numpy as np
import math
import tensorflow as tf
from tensorflow.python.framework import graph_util
import input_data

tra_data_dir = 'D://AutoSparePart//Train_Test_TF//train.tfrecords'
val_data_dir = 'D://AutoSparePart//Train_Test_TF//val.tfrecords'

max_learning_rate = 0.002  # 0.0002
min_learning_rate = 0.0001
decay_speed = 2000.0
lr = tf.placeholder(tf.float32)
learning_rate = lr
W = 200
H = 150
Channels = 3
n_classes = 2


def my_batch_norm(inputs):
    scale = tf.Variable(tf.ones([inputs.get_shape()[-1]]), dtype=tf.float32)
    beta = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), dtype=tf.float32)
    batch_mean = tf.Variable(tf.zeros([inputs.get_shape()[-1]]), trainable=False)
    batch_var = tf.Variable(tf.ones([inputs.get_shape()[-1]]), trainable=False)
    batch_mean, batch_var = tf.nn.moments(inputs, [0, 1, 2])
    return inputs, batch_mean, batch_var, beta, scale


def build_network(height, width, channel):
    x = tf.placeholder(tf.float32, shape=[None, height, width, channel], name="input")
    y = tf.placeholder(tf.int32, shape=[None, n_classes], name="labels_placeholder")

    def weight_variable(shape, name="weights"):
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial, name=name)

    def bias_variable(shape, name="biases"):
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial, name=name)

    def conv2d(input, w):
        return tf.nn.conv2d(input, w, [1, 1, 1, 1], padding='SAME')

    def pool_max(input):
        return tf.nn.max_pool(input,
                              ksize=[1, 3, 3, 1],
                              strides=[1, 2, 2, 1],
                              padding='SAME',
                              name='pool1')

    def fc(input, w, b):
        return tf.matmul(input, w) + b

    # conv1
    with tf.name_scope('conv1_1') as scope:
        kernel = weight_variable([3, 3, Channels, 64])
        biases = bias_variable([64])
        conv1_1 = tf.nn.bias_add(conv2d(x, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv1_1)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv1_1 = tf.nn.relu(conv_batch_norm, name=scope)

    with tf.name_scope('conv1_2') as scope:
        kernel = weight_variable([3, 3, 64, 64])
        biases = bias_variable([64])
        conv1_2 = tf.nn.bias_add(conv2d(output_conv1_1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv1_2)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv1_2 = tf.nn.relu(conv_batch_norm, name=scope)

    pool1 = pool_max(output_conv1_2)

    # conv2
    with tf.name_scope('conv2_1') as scope:
        kernel = weight_variable([3, 3, 64, 128])
        biases = bias_variable([128])
        conv2_1 = tf.nn.bias_add(conv2d(pool1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv2_1)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv2_1 = tf.nn.relu(conv_batch_norm, name=scope)

    with tf.name_scope('conv2_2') as scope:
        kernel = weight_variable([3, 3, 128, 128])
        biases = bias_variable([128])
        conv2_2 = tf.nn.bias_add(conv2d(output_conv2_1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv2_2)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv2_2 = tf.nn.relu(conv_batch_norm, name=scope)

    pool2 = pool_max(output_conv2_2)

    # conv3
    with tf.name_scope('conv3_1') as scope:
        kernel = weight_variable([3, 3, 128, 256])
        biases = bias_variable([256])
        conv3_1 = tf.nn.bias_add(conv2d(pool2, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv3_1)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv3_1 = tf.nn.relu(conv_batch_norm, name=scope)

    with tf.name_scope('conv3_2') as scope:
        kernel = weight_variable([3, 3, 256, 256])
        biases = bias_variable([256])
        conv3_2 = tf.nn.bias_add(conv2d(output_conv3_1, kernel), biases)
        inputs, pop_mean, pop_var, beta, scale = my_batch_norm(conv3_2)
        conv_batch_norm = tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, scale, 0.001)
        output_conv3_2 = tf.nn.relu(conv_batch_norm, name=scope)

    #     with tf.name_scope('conv3_3') as scope:
    #         kernel = weight_variable([3, 3, 256, 256])
    #         biases = bias_variable([256])
    #         output_conv3_3 = tf.nn.relu(conv2d(output_conv3_2, kernel) + biases, name=scope)

    pool3 = pool_max(output_conv3_2)

    '''
    # conv4
    with tf.name_scope('conv4_1') as scope:
        kernel = weight_variable([3, 3, 256, 512])
        biases = bias_variable([512])
        output_conv4_1 = tf.nn.relu(conv2d(pool3, kernel) + biases, name=scope)

    with tf.name_scope('conv4_2') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv4_2 = tf.nn.relu(conv2d(output_conv4_1, kernel) + biases, name=scope)

    with tf.name_scope('conv4_3') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv4_3 = tf.nn.relu(conv2d(output_conv4_2, kernel) + biases, name=scope)

    pool4 = pool_max(output_conv4_3)

    # conv5
    with tf.name_scope('conv5_1') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv5_1 = tf.nn.relu(conv2d(pool4, kernel) + biases, name=scope)

    with tf.name_scope('conv5_2') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv5_2 = tf.nn.relu(conv2d(output_conv5_1, kernel) + biases, name=scope)

    with tf.name_scope('conv5_3') as scope:
        kernel = weight_variable([3, 3, 512, 512])
        biases = bias_variable([512])
        output_conv5_3 = tf.nn.relu(conv2d(output_conv5_2, kernel) + biases, name=scope)

    pool5 = pool_max(output_conv5_3)
    '''

    # fc6
    with tf.name_scope('fc6') as scope:
        shape = int(np.prod(pool3.get_shape()[1:]))
        kernel = weight_variable([shape, 120])
        # kernel = weight_variable([shape, 4096])
        # biases = bias_variable([4096])
        biases = bias_variable([120])
        pool5_flat = tf.reshape(pool3, [-1, shape])
        output_fc6 = tf.nn.relu(fc(pool5_flat, kernel, biases), name=scope)

    # fc7
    with tf.name_scope('fc7') as scope:
        # kernel = weight_variable([4096, 4096])
        # biases = bias_variable([4096])
        kernel = weight_variable([120, 100])
        biases = bias_variable([100])
        output_fc7 = tf.nn.relu(fc(output_fc6, kernel, biases), name=scope)

    # fc8
    with tf.name_scope('fc8') as scope:
        # kernel = weight_variable([4096, n_classes])
        kernel = weight_variable([100, n_classes])
        biases = bias_variable([n_classes])
        output_fc8 = tf.nn.relu(fc(output_fc7, kernel, biases), name=scope)

    finaloutput = tf.nn.softmax(output_fc8, name="softmax")

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=finaloutput, labels=y)) * 1000
    optimize = tf.train.AdamOptimizer(lr).minimize(cost)

    prediction_labels = tf.argmax(finaloutput, axis=1, name="output")
    read_labels = tf.argmax(y, axis=1)
    correct_prediction = tf.equal(prediction_labels, read_labels)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    correct_times_in_batch = tf.reduce_sum(tf.cast(correct_prediction, tf.int32))

    return dict(
        x=x,
        y=y,
        lr=lr,
        optimize=optimize,
        correct_prediction=correct_prediction,
        correct_times_in_batch=correct_times_in_batch,
        cost=cost,
        accuracy=accuracy,
    )


def train_network(graph, batch_size, num_epochs, pb_file_path):
    tra_image_batch, tra_label_batch = input_data.read_and_decode2stand(tfrecords_file=tra_data_dir,
                                                                        batch_size=batch_size)
    val_image_batch, val_label_batch = input_data.read_and_decode2stand(tfrecords_file=val_data_dir,
                                                                        batch_size=batch_size)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        epoch_delta = 100
        try:
            for epoch_index in range(num_epochs):
                learning_rate = min_learning_rate + (max_learning_rate - min_learning_rate) * math.exp(-epoch_index / decay_speed)
                tra_images, tra_labels = sess.run([tra_image_batch, tra_label_batch])
                accuracy, mean_cost_in_batch, return_correct_times_in_batch, _ = sess.run(
                    [graph['accuracy'], graph['cost'], graph['correct_times_in_batch'], graph['optimize']],
                    feed_dict={
                        graph['x']: tra_images,
                        graph['lr']: learning_rate,
                        graph['y']: tra_labels
                    })
                if epoch_index % epoch_delta == 0:
                    # report accuracy and cost on the current training batch
                    print("index[%s]".center(50, '-') % epoch_index)
                    print("Train: cost_in_batch:{},correct_in_batch:{},accuracy:{}".format(
                        mean_cost_in_batch, return_correct_times_in_batch, accuracy))
                    # report accuracy and cost on a validation batch
                    val_images, val_labels = sess.run([val_image_batch, val_label_batch])
                    mean_cost_in_batch, return_correct_times_in_batch = sess.run(
                        [graph['cost'], graph['correct_times_in_batch']],
                        feed_dict={
                            graph['x']: val_images,
                            graph['y']: val_labels
                        })
                    print("***Val: cost_in_batch:{},correct_in_batch:{},accuracy:{}".format(
                        mean_cost_in_batch, return_correct_times_in_batch,
                        return_correct_times_in_batch / batch_size))
                if epoch_index % 500 == 0:
                    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ["output"])
                    with tf.gfile.FastGFile(pb_file_path, mode='wb') as f:
                        f.write(constant_graph.SerializeToString())
        except tf.errors.OutOfRangeError:
            print('Done training -- epoch limit reached')
        finally:
            coord.request_stop()
        coord.join(threads)
        sess.close()


def main():
    batch_size = 40
    num_epochs = 5001
    pb_file_path = "./output/autosparepart.pb"
    g = build_network(height=H, width=W, channel=3)
    train_network(g, batch_size, num_epochs, pb_file_path)


main()

The batch size is batch_size = 40 and the number of iterations is num_epochs = 5001. Strictly speaking, these iterations are training steps rather than epochs: an epoch is one full pass over the entire training set.
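To translate steps into true epochs, divide the number of training examples by the batch size; for example, with a hypothetical training set of 2000 images:

# Illustration only; 2000 is an assumed dataset size, not a number from the article.
num_train_examples = 2000
batch_size = 40
steps_per_epoch = num_train_examples // batch_size  # 50 steps per full pass over the data
num_steps = 5001
approx_epochs = num_steps / steps_per_epoch         # roughly 100 true epochs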
Training results:
(training log screenshot not reproduced here)

Testing

The test step is simple — just a few lines of code:

'''
Created on September 9, 2017

@author: admin
'''
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import PIL.Image as Image
from skimage import transform

W = 200
H = 150


def recognize(jpg_path, pb_file_path):
    with tf.Graph().as_default():
        output_graph_def = tf.GraphDef()
        with open(pb_file_path, "rb") as f:
            output_graph_def.ParseFromString(f.read())
            _ = tf.import_graph_def(output_graph_def, name="")
        with tf.Session() as sess:
            tf.global_variables_initializer().run()
            input_x = sess.graph.get_tensor_by_name("input:0")
            print(input_x)
            out_softmax = sess.graph.get_tensor_by_name("softmax:0")
            print(out_softmax)
            out_label = sess.graph.get_tensor_by_name("output:0")
            print(out_label)

            img = np.array(Image.open(jpg_path).convert('L'))
            img = transform.resize(img, (H, W, 3))
            plt.figure("fig1")
            plt.imshow(img)
            img = img * (1.0 / 255)

            img_out_softmax = sess.run(out_softmax,
                                       feed_dict={input_x: np.reshape(img, [-1, H, W, 3])})
            print("img_out_softmax:", img_out_softmax)
            prediction_labels = np.argmax(img_out_softmax, axis=1)
            print("prediction_labels:", prediction_labels)
            plt.show()


recognize("D:/AutoSparePart/ToFinall_Data/0/crop_or_pad010.jpg", "./output/autosparepart.pb")
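One thing to keep in mind: the training pipeline standardizes each image with tf.image.per_image_standardization, while the test script above converts to grayscale and only rescales by 1/255, so the two preprocessing paths do not quite match. A hedged sketch of a loader that mirrors the training preprocessing more closely (the helper name is ours, and the standardization is only an approximation of TensorFlow's op) could look like this:

# Sketch (assumption): preprocess a test image more like the TFRecord pipeline does,
# i.e. keep three channels and apply per-image standardization instead of a plain 1/255 rescale.
import numpy as np
import PIL.Image as Image

W = 200
H = 150

def load_like_training(jpg_path):
    img = Image.open(jpg_path).convert('RGB').resize((W, H))  # PIL resize takes (width, height)
    img = np.asarray(img, dtype=np.float32) * (1.0 / 255)
    # rough equivalent of tf.image.per_image_standardization
    img = (img - img.mean()) / max(img.std(), 1.0 / np.sqrt(img.size))
    return img.reshape(1, H, W, 3)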

Test results:
A defective part:
(result screenshot not reproduced here)
A non-defective part:
(result screenshot not reproduced here)

That concludes this article on training a TensorFlow model into a pb file (Part 2), reading from TFRecord. I hope it is helpful to other developers.



