This post is a set of study notes for Dive into Deep Learning, TF2.0 edition (5.8 Network in Network (NiN)). I hope it offers a useful reference for developers working through the same material.
NiN study notes
GitHub code: https://github.com/taichuai/d2l_zh_tensorflow2.0
import tensorflow as tf

print(tf.__version__)

# Enable memory growth so TensorFlow does not reserve all GPU memory up front
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
# A NiN block: one k x k convolution followed by two 1x1 convolutions,
# which act as per-pixel fully connected layers
def nin_block(num_channels, kernel_size, strides, padding):
    blk = tf.keras.models.Sequential()
    blk.add(tf.keras.layers.Conv2D(num_channels, kernel_size,
                                   strides=strides, padding=padding,
                                   activation='relu'))
    blk.add(tf.keras.layers.Conv2D(num_channels, kernel_size=1, activation='relu'))
    blk.add(tf.keras.layers.Conv2D(num_channels, kernel_size=1, activation='relu'))
    return blk
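The two trailing 1x1 convolutions are the "network in network" idea: a 1x1 convolution applies the same fully connected layer independently at every spatial position, mixing channels without touching spatial structure. A minimal NumPy sketch of this equivalence (toy sizes, my own choice, no bias term):

```python
import numpy as np

# Hypothetical toy sizes for illustration
H, W, C_in, C_out = 4, 4, 3, 5
rng = np.random.default_rng(0)
x = rng.random((H, W, C_in))
kernel = rng.random((C_in, C_out))  # the 1x1 conv's weights

# "Convolve": a channel-axis matmul at every pixel
y_conv = np.einsum('hwc,cd->hwd', x, kernel)

# Equivalent: flatten all pixels, apply one dense layer, reshape back
y_dense = (x.reshape(-1, C_in) @ kernel).reshape(H, W, C_out)

assert np.allclose(y_conv, y_dense)
print(y_conv.shape)  # (4, 4, 5)
```

This is why stacking 1x1 convolutions after a regular convolution adds per-pixel nonlinearity and cross-channel mixing while keeping the feature map's spatial size.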
net = tf.keras.models.Sequential()
net.add(nin_block(96, kernel_size=11, strides=4, padding='valid'))
net.add(tf.keras.layers.MaxPool2D(pool_size=3, strides=2))
net.add(nin_block(256, kernel_size=5, strides=1, padding='same'))
net.add(tf.keras.layers.MaxPool2D(pool_size=3, strides=2))
net.add(nin_block(384, kernel_size=3, strides=1, padding='same'))
net.add(tf.keras.layers.MaxPool2D(pool_size=3, strides=2))
net.add(tf.keras.layers.Dropout(0.5))
net.add(nin_block(10, kernel_size=3, strides=1, padding='same'))
net.add(tf.keras.layers.GlobalAveragePooling2D())
net.add(tf.keras.layers.Flatten())
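Note how the classifier works: the last NiN block emits 10 channels, one per Fashion-MNIST class, and GlobalAveragePooling2D averages each feature map down to a single score instead of using large fully connected layers. A NumPy sketch of the pooling step (the 1x5x5x10 shape assumes a 224x224 input, as in the shape check below):

```python
import numpy as np

# Global average pooling: (N, H, W, C) -> (N, C) by averaging over H and W
features = np.random.rand(1, 5, 5, 10)   # final 5x5 maps, 10 class channels
scores = features.mean(axis=(1, 2))      # one score per class
print(scores.shape)  # (1, 10)
```

Dropping the dense classifier this way greatly reduces the parameter count, which is one of NiN's main design points.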
X = tf.random.uniform((1,224,224,1))
for blk in net.layers:
    X = blk(X)
    print(blk.name, 'output shape:\t', X.shape)
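The printed shapes can be checked by hand with the usual convolution-arithmetic formulas: 'valid' gives floor((n - k)/s) + 1, and the max-pooling layers here also use Keras's default 'valid' padding ('same' with stride 1 leaves the size unchanged). A quick sketch tracing the spatial size:

```python
def valid(n, k, s):
    # Output size of a 'valid' convolution or pooling window
    return (n - k) // s + 1

n = 224
n = valid(n, 11, 4)  # first NiN block's 11x11 conv, stride 4 -> 54
n = valid(n, 3, 2)   # max pool -> 26  ('same' NiN block keeps 26)
n = valid(n, 3, 2)   # max pool -> 12
n = valid(n, 3, 2)   # max pool -> 5
print(n)  # 5: the final feature maps are 5x5 (x10 channels)
```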
Running this prints the output shape of every layer.
Getting the data and training the model
We again use the Fashion-MNIST dataset to train the model. Training NiN is similar to training AlexNet and VGG. Note that with the Adam optimizer it is worth starting from a small learning rate and checking the results first; a learning rate that is too large may prevent convergence (here we take lr=1e-5, matching the code below).
# Get the data
from tensorflow.keras.datasets import fashion_mnist
import matplotlib.pyplot as plt

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

# Data preprocessing: scale to [0, 1], add a channel axis,
# and resize each 28x28 image up to 224x224
def data_scale(x, y):
    x = tf.cast(x, tf.float32)
    x = x / 255.0
    x = tf.reshape(x, (x.shape[0], x.shape[1], 1))
    x = tf.image.resize_with_pad(image=x, target_height=224, target_width=224)
    return x, y
# Training on a laptop is slow, so only a subset of the data is used here;
# with enough compute, the full dataset gives much clearer results
train_db = tf.data.Dataset.from_tensor_slices((x_train[0:5000],y_train[0:5000])).shuffle(20).map(data_scale).batch(32)
test_db = tf.data.Dataset.from_tensor_slices((x_test[0:1000],y_test[0:1000])).shuffle(20).map(data_scale).batch(32)
# Define the optimizer and loss function; the network ends without a
# softmax, so treat its outputs as logits
optimizer = tf.keras.optimizers.Adam(lr=1e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
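Sparse categorical crossentropy takes integer class labels directly (no one-hot encoding) and computes the negative log of the probability the model assigns to the true class (with logits, a softmax is applied first). A NumPy sketch on hand-picked probabilities:

```python
import numpy as np

def sparse_ce(y_true, y_pred):
    # y_true: (N,) integer labels; y_pred: (N, num_classes) probabilities
    return -np.log(y_pred[np.arange(len(y_true)), y_true])

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
print(sparse_ce(labels, probs))  # -log(0.7) and -log(0.8) per example
```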
net.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
# tf.data datasets can be passed to fit directly (fit_generator is deprecated);
# only a few epochs here -- with a faster machine, tune for a better result
net.fit(train_db, epochs=5, validation_data=test_db)
net.summary()
# As with AlexNet, we can visualize some intermediate feature maps
X = next(iter(train_db))[0][0]

def show(X):
    X_ = tf.squeeze(X)
    plt.figure(figsize=(5, 5))
    plt.imshow(X_)
    plt.show()

X = tf.expand_dims(X, axis=0)  # add a batch dimension
# Show some feature maps from the first 8 layers
for blk in net.layers[0:8]:
    print(blk.name, 'input shape:\t', X.shape)
    show(X[0, :, :, 0])
    X = blk(X)
    print(blk.name, 'output shape:\t', X.shape)
    for i in range(3):
        show(X[0, :, :, i])
That concludes these notes on Dive into Deep Learning TF2.0 (5.8 Network in Network (NiN)); I hope they are helpful.