An Introduction to Google InceptionNet



1 Overview

    Google InceptionNet appeared in the ILSVRC 2014 competition (the same year as VGGNet) and took first place by a clear margin, with a top-5 error rate of 6.67% against VGGNet's 7.3%. InceptionNet's defining feature is that it improves performance while keeping computation and parameter counts in check: at 22 layers it is deeper than the 19-layer VGGNet, yet it needs only about 1.5 billion floating-point operations and 5 million parameters. Its carefully designed Inception Module also raises parameter efficiency considerably.

2 Innovations

    The network is very deep yet has few parameters. Too many parameters cause two problems: first, the more parameters a model has, the more complex it is and the more training data it needs, and high-quality data is expensive; second, more parameters demand more compute. InceptionNet replaces the final fully connected layer with global average pooling to shrink the parameter count, since fully connected layers hold the bulk of a network's weights and also invite overfitting.
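
To make the saving concrete, here is a minimal Python sketch with illustrative dimensions (a 7*7*1024 feature map and 1000 classes, not InceptionNet's exact head):

import numpy as np

# A fully connected head connects every activation to every class.
h, w, c, num_classes = 7, 7, 1024, 1000
fc_params = h * w * c * num_classes        # 50,176,000 weights
# Global average pooling first reduces each channel to a single value,
# leaving only a c x num_classes projection.
gap_params = c * num_classes               # 1,024,000 weights

# The pooling itself is parameter-free:
feature_map = np.random.rand(h, w, c)
pooled = feature_map.mean(axis=(0, 1))     # shape (1024,)
print(fc_params, gap_params, pooled.shape)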

    The carefully designed Inception Module raises model performance while lowering the parameter count. A convolutional network normally increases its expressive power by adding output channels, but the side effects are more computation and overfitting. InceptionNet therefore stacks branch networks to produce a wide, many-channel output: four branches compute on the previous layer's output, and the results of the different kernel sizes are concatenated along the channel dimension.

Every branch uses a 1*1 convolution, an excellent building block: it mixes information across channels, raises expressive power, and adds extra nonlinear transformations, all at very low cost. The branches also use different kernel sizes, which lets the network adapt to multiple scales. By branching in this way, InceptionNet grows both wider and deeper, improving performance while avoiding overfitting; the channel-wise stacking is sketched below.
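
A minimal numpy sketch of that stacking; the convolutions are replaced by placeholder arrays, and the branch shapes follow the Mixed_5b module in the code of section 4 (64 + 64 + 96 + 32 = 256 output channels):

import numpy as np

# Stand-ins for the four branch outputs of one Inception module; every
# branch preserves the 35 x 35 spatial size, only the channel counts differ.
branch_0 = np.zeros((35, 35, 64))   # 1x1 conv
branch_1 = np.zeros((35, 35, 64))   # 1x1 conv -> 5x5 conv
branch_2 = np.zeros((35, 35, 96))   # 1x1 conv -> 3x3 conv -> 3x3 conv
branch_3 = np.zeros((35, 35, 32))   # 3x3 avg pool -> 1x1 conv

# Concatenate along the channel axis, as tf.concat(..., 3) does in section 4.
net = np.concatenate([branch_0, branch_1, branch_2, branch_3], axis=-1)
print(net.shape)                    # (35, 35, 256)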

    InceptionNet is 22 layers deep, and its intermediate layers, not just the last one, classify quite well. The network therefore adds auxiliary classification heads: the output of a middle layer is used for classification, and its result is added to the final classification with a small weight (0.3).
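
How the auxiliary head would enter the training objective can be sketched in one line; this is an illustrative sketch, not code from the paper (the loss values are placeholders):

def combined_loss(main_loss, aux_loss, aux_weight=0.3):
    # The auxiliary classifier feeds gradients into the middle of the
    # network, discounted so it cannot dominate the final classifier.
    return main_loss + aux_weight * aux_loss

print(combined_loss(main_loss=2.1, aux_loss=2.8))   # 2.1 + 0.3 * 2.8 = 2.94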

    The Inception line also proposed and adopted the famous Batch Normalization (BN). BN is a very useful regularization method that makes convolutional networks train much faster and also raises the classification accuracy after convergence. BN standardizes the data inside each mini-batch during training, normalizing the layer outputs toward an N(0, 1) distribution. Because BN itself acts as a regularizer to some degree, Dropout can be reduced or removed and the network simplified. Adding BN also calls for retuning the hyperparameters: increase the learning rate, speed up its decay, remove Dropout, lighten other regularization, remove LRN, and shuffle the training samples more thoroughly.
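
The core of BN is a per-batch standardization followed by a learned scale and shift. A minimal numpy sketch (gamma, beta, and eps are illustrative defaults; real implementations also keep moving averages for inference):

import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-3):
    # Standardize each feature over the batch dimension, then apply the
    # learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.random.randn(32, 8) * 5.0 + 3.0   # shifted, scaled activations
normed = batch_norm(batch)
print(normed.mean(axis=0).round(2))          # approximately all zeros
print(normed.std(axis=0).round(2))           # approximately all ones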

    InceptionNet also factorizes a larger convolution into two smaller ones. Splitting a 7*7 convolution into a 1*7 convolution followed by a 7*1 convolution, for example, saves a large number of parameters, speeds up computation, reduces overfitting, and adds one more nonlinear transformation that extends the model's expressive power.
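
The saving is simple arithmetic; a sketch with illustrative channel counts (192 -> 192, matching the Mixed_6e branches in the code of section 4; biases ignored):

# A full 7x7 convolution versus a 1x7 convolution followed by a 7x1.
c_in = c_mid = c_out = 192
params_7x7 = 7 * 7 * c_in * c_out                              # 1,806,336
params_1x7_7x1 = 1 * 7 * c_in * c_mid + 7 * 1 * c_mid * c_out  # 516,096
print(params_7x7, params_1x7_7x1)
print(params_1x7_7x1 / params_7x7)                             # ~0.29, i.e. 2/7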


3 Network structure

    The overall structure of InceptionNet (Inception V3) is as follows: the front is a conventional stem of five convolutional layers (mostly 3*3, plus one 1*1) interleaved with two 3*3 max-pooling layers; after that come three groups of Inception modules; the network ends with average pooling, the logits, and a softmax. (See the implementation in section 4.)

The three Inception module groups correspond to the Mixed_5x (35*35 outputs), Mixed_6x (17*17), and Mixed_7x (8*8) blocks in the code below.

4 Code implementation

 

#%%
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import tensorflow as tf

slim = tf.contrib.slim

# Produce a truncated-normal initializer with the given standard deviation
trunc_normal = lambda stddev: tf.truncated_normal_initializer(0.0, stddev)


# Build the convolutional part of Inception V3
def inception_v3_base(inputs, scope=None):
    '''
    inputs: tensor holding the input images
    scope: default argument scope for the function
    '''
    # Save the outputs of certain key nodes
    end_points = {}
    # Define a variable scope so variables can share names
    with tf.variable_scope(scope, 'InceptionV3', [inputs]):
        # Set default values for the layer arguments
        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            stride=1, padding='VALID'):
            # Small kernels and 1x1 convolutions keep the parameter count
            # down while adding nonlinearity
            # 299 x 299 x 3
            net = slim.conv2d(inputs, 32, [3, 3], stride=2, scope='Conv2d_1a_3x3')
            # 149 x 149 x 32
            net = slim.conv2d(net, 32, [3, 3], scope='Conv2d_2a_3x3')
            # 147 x 147 x 32
            net = slim.conv2d(net, 64, [3, 3], padding='SAME', scope='Conv2d_2b_3x3')
            # 147 x 147 x 64
            net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_3a_3x3')
            # 73 x 73 x 64
            net = slim.conv2d(net, 80, [1, 1], scope='Conv2d_3b_1x1')
            # 73 x 73 x 80.
            net = slim.conv2d(net, 192, [3, 3], scope='Conv2d_4a_3x3')
            # 71 x 71 x 192.
            net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_5a_3x3')
            # 35 x 35 x 192.

        # The spatial size has been heavily compressed and the channel count
        # greatly increased, raising the level of abstraction
        # Inception blocks
        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            stride=1, padding='SAME'):
            # mixed: 35 x 35 x 256.
            with tf.variable_scope('Mixed_5b'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv2d_0b_5x5')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Conv2d_0b_1x1')
                # Merge the 4 branches along dimension 3, i.e. the output channels
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # mixed_1: 35 x 35 x 288.
            with tf.variable_scope('Mixed_5c'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0b_1x1')
                    branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv_1_0c_5x5')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # mixed_2: 35 x 35 x 288.
            with tf.variable_scope('Mixed_5d'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv2d_0b_5x5')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # After each Inception module the output size changes little, but
            # the features are refined once more; the rich mix of convolutions
            # and nonlinearities helps the network a great deal
            # mixed_3: 17 x 17 x 768.
            with tf.variable_scope('Mixed_6a'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 384, [3, 3], stride=2,
                                           padding='VALID', scope='Conv2d_1a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 96, [3, 3], scope='Conv2d_0b_3x3')
                    branch_1 = slim.conv2d(branch_1, 96, [3, 3], stride=2,
                                           padding='VALID', scope='Conv2d_1a_1x1')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
                                               scope='MaxPool_1a_3x3')
                net = tf.concat([branch_0, branch_1, branch_2], 3)
            # mixed4: 17 x 17 x 768.
            with tf.variable_scope('Mixed_6b'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
                    # Chaining a 1x7 and a 7x1 convolution approximates a 7x7
                    # convolution with fewer parameters and less overfitting
                    branch_1 = slim.conv2d(branch_1, 128, [1, 7], scope='Conv2d_0b_1x7')
                    branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_0c_7x1')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 128, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 128, [7, 1], scope='Conv2d_0b_7x1')
                    branch_2 = slim.conv2d(branch_2, 128, [1, 7], scope='Conv2d_0c_1x7')
                    branch_2 = slim.conv2d(branch_2, 128, [7, 1], scope='Conv2d_0d_7x1')
                    branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_0e_1x7')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # mixed_5: 17 x 17 x 768.
            with tf.variable_scope('Mixed_6c'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 160, [1, 7], scope='Conv2d_0b_1x7')
                    branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_0c_7x1')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_0b_7x1')
                    branch_2 = slim.conv2d(branch_2, 160, [1, 7], scope='Conv2d_0c_1x7')
                    branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_0d_7x1')
                    branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_0e_1x7')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # mixed_6: 17 x 17 x 768.
            with tf.variable_scope('Mixed_6d'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 160, [1, 7], scope='Conv2d_0b_1x7')
                    branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_0c_7x1')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 160, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_0b_7x1')
                    branch_2 = slim.conv2d(branch_2, 160, [1, 7], scope='Conv2d_0c_1x7')
                    branch_2 = slim.conv2d(branch_2, 160, [7, 1], scope='Conv2d_0d_7x1')
                    branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_0e_1x7')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # mixed_7: 17 x 17 x 768.
            with tf.variable_scope('Mixed_6e'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 192, [1, 7], scope='Conv2d_0b_1x7')
                    branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_0c_7x1')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 192, [7, 1], scope='Conv2d_0b_7x1')
                    branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_0c_1x7')
                    branch_2 = slim.conv2d(branch_2, 192, [7, 1], scope='Conv2d_0d_7x1')
                    branch_2 = slim.conv2d(branch_2, 192, [1, 7], scope='Conv2d_0e_1x7')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # Store Mixed_6e, which feeds the auxiliary classifier
            end_points['Mixed_6e'] = net
            # mixed_8: 8 x 8 x 1280.
            with tf.variable_scope('Mixed_7a'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                    branch_0 = slim.conv2d(branch_0, 320, [3, 3], stride=2,
                                           padding='VALID', scope='Conv2d_1a_3x3')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 192, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 192, [1, 7], scope='Conv2d_0b_1x7')
                    branch_1 = slim.conv2d(branch_1, 192, [7, 1], scope='Conv2d_0c_7x1')
                    branch_1 = slim.conv2d(branch_1, 192, [3, 3], stride=2,
                                           padding='VALID', scope='Conv2d_1a_3x3')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
                                               scope='MaxPool_1a_3x3')
                net = tf.concat([branch_0, branch_1, branch_2], 3)
            # mixed_9: 8 x 8 x 2048.
            with tf.variable_scope('Mixed_7b'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 320, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 384, [1, 1], scope='Conv2d_0a_1x1')
                    # Merge the two sub-branches of 1x3 and 3x1 convolutions
                    branch_1 = tf.concat([
                        slim.conv2d(branch_1, 384, [1, 3], scope='Conv2d_0b_1x3'),
                        slim.conv2d(branch_1, 384, [3, 1], scope='Conv2d_0b_3x1')], 3)
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 448, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 384, [3, 3], scope='Conv2d_0b_3x3')
                    branch_2 = tf.concat([
                        slim.conv2d(branch_2, 384, [1, 3], scope='Conv2d_0c_1x3'),
                        slim.conv2d(branch_2, 384, [3, 1], scope='Conv2d_0d_3x1')], 3)
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # mixed_10: 8 x 8 x 2048.
            with tf.variable_scope('Mixed_7c'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 320, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 384, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = tf.concat([
                        slim.conv2d(branch_1, 384, [1, 3], scope='Conv2d_0b_1x3'),
                        slim.conv2d(branch_1, 384, [3, 1], scope='Conv2d_0c_3x1')], 3)
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 448, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 384, [3, 3], scope='Conv2d_0b_3x3')
                    branch_2 = tf.concat([
                        slim.conv2d(branch_2, 384, [1, 3], scope='Conv2d_0c_1x3'),
                        slim.conv2d(branch_2, 384, [3, 1], scope='Conv2d_0d_3x1')], 3)
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 192, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # Return the final output of the convolutional part and end_points
            return net, end_points


def inception_v3(inputs,
                 num_classes=1000,
                 is_training=True,
                 dropout_keep_prob=0.8,
                 prediction_fn=slim.softmax,
                 spatial_squeeze=True,
                 reuse=None,
                 scope='InceptionV3'):
    '''
    Build the Inception V3 network.
    inputs: tensor holding the input images
    num_classes: number of classes to predict
    is_training: whether this is training (affects dropout and BN)
    dropout_keep_prob: keep probability for dropout
    prediction_fn: classification function
    spatial_squeeze: whether to squeeze size-1 dimensions, e.g. 5*3*1 -> 5*3
    reuse: variable-reuse information
    scope: variable scope
    '''
    with tf.variable_scope(scope, 'InceptionV3', [inputs, num_classes],
                           reuse=reuse) as scope:
        with slim.arg_scope([slim.batch_norm, slim.dropout],
                            is_training=is_training):
            # Get the output of the convolutional part
            net, end_points = inception_v3_base(inputs, scope=scope)
            # Auxiliary head logits: set default arguments
            with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                                stride=1, padding='SAME'):
                aux_logits = end_points['Mixed_6e']
                with tf.variable_scope('AuxLogits'):
                    # 17 x 17 x 768
                    aux_logits = slim.avg_pool2d(aux_logits, [5, 5], stride=3,
                                                 padding='VALID',
                                                 scope='AvgPool_1a_5x5')
                    # 5 x 5 x 768
                    aux_logits = slim.conv2d(aux_logits, 128, [1, 1],
                                             scope='Conv2d_1b_1x1')
                    # Shape of feature map before the final layer.
                    # 5 x 5 x 128
                    aux_logits = slim.conv2d(aux_logits, 768, [5, 5],
                                             weights_initializer=trunc_normal(0.01),
                                             padding='VALID', scope='Conv2d_2a_5x5')
                    # 1 x 1 x 768
                    aux_logits = slim.conv2d(aux_logits, num_classes, [1, 1],
                                             activation_fn=None, normalizer_fn=None,
                                             weights_initializer=trunc_normal(0.001),
                                             scope='Conv2d_2b_1x1')
                    # 1 x 1 x 1000; remove the two leading dimensions of size 1
                    if spatial_squeeze:
                        aux_logits = tf.squeeze(aux_logits, [1, 2],
                                                name='SpatialSqueeze')
                    # Store the auxiliary classification node
                    end_points['AuxLogits'] = aux_logits
                # Final pooling and prediction
                with tf.variable_scope('Logits'):
                    net = slim.avg_pool2d(net, [8, 8], padding='VALID',
                                          scope='AvgPool_1a_8x8')
                    # 1 x 1 x 2048
                    net = slim.dropout(net, keep_prob=dropout_keep_prob,
                                       scope='Dropout_1b')
                    # Store the pre-logits node
                    end_points['PreLogits'] = net
                    # 2048
                    logits = slim.conv2d(net, num_classes, [1, 1],
                                         activation_fn=None, normalizer_fn=None,
                                         scope='Conv2d_1c_1x1')
                    if spatial_squeeze:
                        logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
                    # 1000
                end_points['Logits'] = logits
                end_points['Predictions'] = prediction_fn(logits,
                                                          scope='Predictions')
    return logits, end_points


def inception_v3_arg_scope(weight_decay=0.00004,
                           stddev=0.1,
                           batch_norm_var_collection='moving_vars'):
    '''
    L2 weight decay of 0.00004
    Standard deviation of 0.1 for the weight initializers
    '''
    # Parameter dictionary for batch normalization
    batch_norm_params = {
        'decay': 0.9997,  # decay coefficient for the moving averages
        'epsilon': 0.001,
        'updates_collections': tf.GraphKeys.UPDATE_OPS,
        'variables_collections': {
            'beta': None,
            'gamma': None,
            'moving_mean': [batch_norm_var_collection],
            'moving_variance': [batch_norm_var_collection],
        }
    }
    # With slim.arg_scope the arguments below need not be repeated for every
    # layer; they are only overridden where something differs
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        weights_regularizer=slim.l2_regularizer(weight_decay)):
        # A nested slim.arg_scope assigns default values to several arguments
        # of the convolution generator slim.conv2d
        with slim.arg_scope([slim.conv2d],
                            weights_initializer=trunc_normal(stddev),
                            activation_fn=tf.nn.relu,
                            normalizer_fn=slim.batch_norm,
                            # use the batch_norm_params dictionary defined
                            # above for the normalizer
                            normalizer_params=batch_norm_params) as sc:
            return sc


from datetime import datetime
import math
import time


# Time num_batches forward passes of the graph node `target`, after a
# burn-in period, and report the mean and standard deviation per batch
def time_tensorflow_run(session, target, info_string):
    num_steps_burn_in = 10
    total_duration = 0.0
    total_duration_squared = 0.0
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s: step %d, duration = %.3f' %
                      (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    mn = total_duration / num_batches
    vr = total_duration_squared / num_batches - mn * mn
    sd = math.sqrt(vr)
    print('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
          (datetime.now(), info_string, num_batches, mn, sd))


batch_size = 32
height, width = 299, 299
inputs = tf.random_uniform((batch_size, height, width, 3))
with slim.arg_scope(inception_v3_arg_scope()):
    logits, end_points = inception_v3(inputs, is_training=False)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  
num_batches = 100
time_tensorflow_run(sess, logits, "Forward")

5 References

[1] Huang Wenjian, Tang Yuan. TensorFlow实战 (TensorFlow in Practice). Beijing: Publishing House of Electronics Industry, 2017.

[2] https://arxiv.org/abs/1512.00567

[3] https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v3_test.py



