This article walks through 02 TensorFlow 2.0: Forward Propagation with Tensors in Practice, and hopefully serves as a useful reference for developers working through the same material.
You are the heartbeat that never ceased in a past life
You are the mark upon my chest in the next
Before the answer is revealed
How could I ever forget you
— "Qian Nian" (《千年》)
Contents covered:
- convert to tensor
- reshape
- slices
- broadcast (mechanism; see the short demo right after this list)
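Since broadcasting will show up repeatedly in the forward pass below, here is a minimal, self-contained sketch of the mechanism (an illustrative addition, not from the original article): a rank-1 bias is implicitly expanded to match a batched matrix, exactly as b1/b2/b3 will be later.

import tensorflow as tf

# A [256] bias added to a [4, 256] activation: TF broadcasts the bias
# along the batch axis, as if it were tiled to [4, 256].
h = tf.random.normal([4, 256])
b = tf.zeros([256])
print((h + b).shape)                       # (4, 256)
print(tf.broadcast_to(b, [4, 256]).shape)  # (4, 256) — the explicit equivalent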
import tensorflow as tf
import os
import warnings

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # suppress TF C++ info/warning logs
warnings.filterwarnings('ignore')

from tensorflow import keras
from tensorflow.keras import datasets

print(tf.__version__)
2.0.0-alpha0
1. Global constants
lr = 1e-3
epochs = 10
2. Load data and convert to tensors scaled to [0, 1]
## load mnist data
# x: [60000, 28, 28]
# y: [60000]
(x,y),_ = datasets.mnist.load_data()
## x: 0-255. => 0-1.
x = tf.convert_to_tensor(x, dtype=tf.float32)/255.
y = tf.convert_to_tensor(y, dtype=tf.int32)
print(x.shape, y.shape)
print(tf.reduce_max(x), tf.reduce_min(x))
print(tf.reduce_max(y), tf.reduce_min(y))
(60000, 28, 28) (60000,)
tf.Tensor(1.0, shape=(), dtype=float32) tf.Tensor(0.0, shape=(), dtype=float32)
tf.Tensor(9, shape=(), dtype=int32) tf.Tensor(0, shape=(), dtype=int32)
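The contents list also mentions slices, which the article otherwise only uses implicitly. A brief, illustrative slicing example on the tensors just loaded (an addition, not from the original article):

# Slicing the loaded tensors (illustrative)
print(x[0].shape)            # first image: (28, 28)
print(x[:2, :, 14:].shape)   # first 2 images, right half of each row: (2, 28, 14)
print(y[:5])                 # first 5 labels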
3. Split into batches
## split into batches
# x: [128, 28, 28]
# y: [128]
train_db = tf.data.Dataset.from_tensor_slices((x, y)).batch(128)
train_iter_ = iter(train_db)
sample_ = next(train_iter_)
print('first batch & next batch:', sample_[0].shape, len(sample_), sample_[1])
first batch & next batch: (96, 784) 2 tf.Tensor(
[3 4 5 6 7 8 9 0 1 2 3 4 8 9 0 1 2 3 4 5 6 7 8 9 6 0 3 4 1 4 0 7 8 7 7 9 0
 4 9 4 0 5 8 5 9 8 8 4 0 7 1 3 5 3 1 6 5 3 8 7 3 1 6 8 5 9 2 2 0 9 2 4 6 7
 3 1 3 6 6 2 1 2 6 0 7 8 9 2 9 5 1 8 3 5 6 8], shape=(96,), dtype=int32)
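Note that the printed shape (96, 784) does not match the expected (128, 28, 28); most likely this cell was re-run after the training loop below, by which point the global x held the final, already-reshaped partial batch (60000 mod 128 = 96 samples). In practice you would also usually shuffle the training set when building the pipeline; a minimal sketch (an addition, not in the original; the buffer size 10000 is an assumption):

# Shuffle before batching so each epoch sees a different ordering
train_db = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(10000).batch(128)
for xb, yb in train_db.take(1):
    print(xb.shape, yb.shape)  # (128, 28, 28) (128,)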
4. Parameter initialization
## parameters init. for GradientTape() below to track and differentiate them, the parameters must be tf.Variable
w1 = tf.Variable(tf.random.truncated_normal([28*28, 256], stddev=0.1)) # truncated normal init
b1 = tf.Variable(tf.zeros([256]))
w2 = tf.Variable(tf.random.truncated_normal([256, 128], stddev=0.1))
b2 = tf.Variable(tf.zeros([128]))
w3 = tf.Variable(tf.random.truncated_normal([128, 10], stddev=0.1))
b3 = tf.Variable(tf.zeros([10]))
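A quick, illustrative sanity check on the initialization (an addition, not from the original article): the weights should come out with mean near 0 and standard deviation close to 0.1, which is exactly the choice the notes in section 6 point to when debugging nan losses.

# Inspect the actual statistics of the initialized weights
print(float(tf.reduce_mean(w1)), float(tf.math.reduce_std(w1)))
# mean ~0.0, std ~0.09 — slightly below 0.1 because values beyond
# two standard deviations are re-drawn by truncated_normal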
5. Compute loss and gradients, and update parameters, for each epoch and batch
## for each epoch
for epoch in range(epochs):
    ## for each batch
    for step, (x, y) in enumerate(train_db):
        # x: [b, 28, 28] => [b, 28*28]
        x = tf.reshape(x, [-1, 28*28])
        ## compute forward output for each batch
        with tf.GradientTape() as tape:  # parameters watched by GradientTape must be tf.Variable
            # print(x.shape, w1.shape, b1.shape)
            h1 = x@w1 + b1  # implicitly, b1 ([256]) is broadcast_to [b, 256]
            h1 = tf.nn.relu(h1)
            h2 = h1@w2 + b2  # broadcast as above
            h2 = tf.nn.relu(h2)
            h3 = h2@w3 + b3  # broadcast as above
            out = tf.nn.relu(h3)  # note: ReLU on the output layer; raw logits are more common, but MSE still trains here
            ## compute loss
            y_onehot = tf.one_hot(y, depth=10)
            loss = tf.reduce_mean(tf.square(y_onehot - out))  # loss is a scalar
        ## compute gradients
        grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
        # update parameters in place (assign_sub keeps them tf.Variable)
        w1.assign_sub(lr*grads[0])
        b1.assign_sub(lr*grads[1])
        w2.assign_sub(lr*grads[2])
        b2.assign_sub(lr*grads[3])
        w3.assign_sub(lr*grads[4])
        b3.assign_sub(lr*grads[5])
        if step % 100 == 0:
            print('epoch/step:', epoch, step, 'loss:', float(loss))
epoch/step: 0 0 loss: 0.18603835999965668
epoch/step: 0 100 loss: 0.13570542633533478
epoch/step: 0 200 loss: 0.11861399561166763
epoch/step: 0 300 loss: 0.11322200298309326
epoch/step: 0 400 loss: 0.10488209873437881
epoch/step: 1 0 loss: 0.10238083451986313
epoch/step: 1 100 loss: 0.10504438728094101
epoch/step: 1 200 loss: 0.10291490703821182
epoch/step: 1 300 loss: 0.10242557525634766
epoch/step: 1 400 loss: 0.09785071760416031
epoch/step: 2 0 loss: 0.09843370318412781
epoch/step: 2 100 loss: 0.10121582448482513
epoch/step: 2 200 loss: 0.0993235856294632
epoch/step: 2 300 loss: 0.09929462522268295
epoch/step: 2 400 loss: 0.09492874145507812
epoch/step: 3 0 loss: 0.09640722721815109
epoch/step: 3 100 loss: 0.09940245747566223
epoch/step: 3 200 loss: 0.0968528538942337
epoch/step: 3 300 loss: 0.09739632904529572
epoch/step: 3 400 loss: 0.09268360584974289
epoch/step: 4 0 loss: 0.09469369798898697
epoch/step: 4 100 loss: 0.09802170842885971
epoch/step: 4 200 loss: 0.09442965686321259
epoch/step: 4 300 loss: 0.09557832777500153
epoch/step: 4 400 loss: 0.09028112888336182
epoch/step: 5 0 loss: 0.09288302809000015
epoch/step: 5 100 loss: 0.09671110659837723
epoch/step: 5 200 loss: 0.09200755506753922
epoch/step: 5 300 loss: 0.09379477798938751
epoch/step: 5 400 loss: 0.0879468247294426
epoch/step: 6 0 loss: 0.09075240045785904
epoch/step: 6 100 loss: 0.09545578807592392
epoch/step: 6 200 loss: 0.08961271494626999
epoch/step: 6 300 loss: 0.09208488464355469
epoch/step: 6 400 loss: 0.08578769862651825
epoch/step: 7 0 loss: 0.08858789503574371
epoch/step: 7 100 loss: 0.09415780007839203
epoch/step: 7 200 loss: 0.08701150119304657
epoch/step: 7 300 loss: 0.09043200314044952
epoch/step: 7 400 loss: 0.08375751972198486
epoch/step: 8 0 loss: 0.08612515032291412
epoch/step: 8 100 loss: 0.09273834526538849
epoch/step: 8 200 loss: 0.08432737737894058
epoch/step: 8 300 loss: 0.08866600692272186
epoch/step: 8 400 loss: 0.08179832994937897
epoch/step: 9 0 loss: 0.08383172750473022
epoch/step: 9 100 loss: 0.09108485281467438
epoch/step: 9 200 loss: 0.08158060908317566
epoch/step: 9 300 loss: 0.08686531335115433
epoch/step: 9 400 loss: 0.0796399861574173
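The loss above decreases steadily, but the article never measures accuracy. A minimal sketch of evaluating the trained network (an addition, not from the original; it assumes the test split is kept as x_test/y_test instead of being discarded with _ in section 2, and reuses the trained w1..b3):

# Load and normalize the held-out test split
(_, _), (x_test, y_test) = datasets.mnist.load_data()
x_test = tf.convert_to_tensor(x_test, dtype=tf.float32) / 255.
x_test = tf.reshape(x_test, [-1, 28*28])
y_test = tf.convert_to_tensor(y_test, dtype=tf.int32)

# Same forward pass as training, using the learned parameters
h = tf.nn.relu(x_test @ w1 + b1)
h = tf.nn.relu(h @ w2 + b2)
out = tf.nn.relu(h @ w3 + b3)

# Accuracy = fraction of argmax predictions that match the labels
pred = tf.argmax(out, axis=1, output_type=tf.int32)
acc = tf.reduce_mean(tf.cast(tf.equal(pred, y_test), tf.float32))
print('test accuracy:', float(acc))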
6. Notes
- If the training loss comes out as nan, or never changes:
This can indicate exploding gradients, among other causes. The usual fix is to change the parameter initialization; here the weights w are drawn from a truncated normal distribution with μ = 0, std = 0.1.
See also this discussion: 为什么用tensorflow训练网络,出现了loss=nan,accuracy总是一个固定值? (Why does a TensorFlow-trained network produce loss=nan and a constant accuracy?)
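Besides changing the initialization, another common remedy for exploding gradients (an addition, not from the original article) is to clip the global gradient norm before applying the updates, inside the training loop:

# Right after tape.gradient(...) in the loop above:
grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
grads, _ = tf.clip_by_global_norm(grads, 15.0)  # 15.0 is an illustrative clip norm, not from the article
w1.assign_sub(lr * grads[0])
# ...and likewise for b1, w2, b2, w3, b3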
This concludes the article on 02 TensorFlow 2.0: Forward Propagation with Tensors in Practice; we hope it proves helpful to fellow programmers!