Trying out the tensorflow 2.0-alpha0 release

2023-12-06 09:58


For an overview of the main changes in this release, see this blog post.
Here I just want to run a small MNIST demo.
The main code is below (taken from the official tutorial):

from __future__ import absolute_import, division, print_function
# -*- coding: utf-8 -*-
# !pip install tensorflow-gpu==2.0.0-alpha0  # install command; comment it out if already installed
import tensorflow_datasets as tfds  # this package must be installed separately: pip install tensorflow_datasets
import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

"""Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). Convert the samples from integers to floating-point numbers:"""
dataset, info = tfds.load('mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = dataset['train'], dataset['test']

def convert_types(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

mnist_train = mnist_train.map(convert_types).shuffle(10000).batch(32)
mnist_test = mnist_test.map(convert_types).batch(32)

"""Build the `tf.keras` model using the Keras [model subclassing API](https://www.tensorflow.org/guide/keras#model_subclassing):"""
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()

"""Choose an optimizer and loss function for training:"""
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

"""Select metrics to measure the loss and the accuracy of the model. These metrics accumulate the values over epochs and then print the overall result."""
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

"""Train the model using `tf.GradientTape`:"""
@tf.function  # I wasn't sure at first what this decorator does -- see the notes at the end of this post
def train_step(image, label):
    with tf.GradientTape() as tape:
        predictions = model(image)
        loss = loss_object(label, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(label, predictions)

"""Now test the model:"""
@tf.function
def test_step(image, label):
    predictions = model(image)
    t_loss = loss_object(label, predictions)
    test_loss(t_loss)
    test_accuracy(label, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
    for image, label in mnist_train:
        train_step(image, label)

    for test_image, test_label in mnist_test:
        test_step(test_image, test_label)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))

"""The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the [TensorFlow tutorials](https://www.tensorflow.org/alpha/tutorials/keras)."""

Output:

D:\Anaconda3\envs\tensorflow\python.exe E:/pythonProgram/Tensorflow/tensorflow2/advanced.py
Downloading / extracting dataset mnist (11.06 MiB) to C:\Users\Administrator\tensorflow_datasets\mnist\1.0.0...
Dl Completed...: 100%|██████████| 4/4 [06:49<00:00, 114.80s/ url]
Dl Size...: 100%|██████████| 10/10 [06:49<00:00, 42.62s/ MiB]
Extraction completed...: 100%|██████████| 4/4 [06:49<00:00, 115.90s/ file]
60000 examples [00:20, 2938.42 examples/s]
Shuffling...:   0%|          | 0/10 [00:00<?, ? shard/s]
WARNING: Logging before flag parsing goes to stderr.
W0330 15:32:55.947226  7452 deprecation.py:323] From D:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_datasets\core\file_format_adapter.py:249: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`
Shuffling...: 100%|██████████| 10/10 [00:05<00:00,  1.94 shard/s]
10000 examples [00:03, 3030.13 examples/s]
Shuffling...: 100%|██████████| 1/1 [00:00<00:00,  1.32 shard/s]
2019-03-30 15:33:07.617894: W .\tensorflow/core/framework/model.h:202] Encountered a stop event that was not preceded by a start event.
Epoch 1, Loss: 0.1331276297569275, Accuracy: 95.9949951171875, Test Loss: 0.0622599795460701, Test Accuracy: 97.94999694824219
2019-03-30 15:34:36.883000: W .\tensorflow/core/framework/model.h:202] Encountered a stop event that was not preceded by a start event.
Epoch 2, Loss: 0.087999127805233, Accuracy: 97.3316650390625, Test Loss: 0.05821216478943825, Test Accuracy: 98.04999542236328
2019-03-30 15:35:55.022469: W .\tensorflow/core/framework/model.h:202] Encountered a stop event that was not preceded by a start event.
Epoch 3, Loss: 0.06530821323394775, Accuracy: 98.00444030761719, Test Loss: 0.05837463214993477, Test Accuracy: 98.15333557128906
2019-03-30 15:37:11.460841: W .\tensorflow/core/framework/model.h:202] Encountered a stop event that was not preceded by a start event.
Epoch 4, Loss: 0.052207402884960175, Accuracy: 98.39083862304688, Test Loss: 0.06129618361592293, Test Accuracy: 98.15250396728516
2019-03-30 15:38:27.955216: W .\tensorflow/core/framework/model.h:202] Encountered a stop event that was not preceded by a start event.
Epoch 5, Loss: 0.04351149871945381, Accuracy: 98.65533447265625, Test Loss: 0.06512587517499924, Test Accuracy: 98.13200378417969

Process finished with exit code 0

Compared with tensorflow 1.x, the code above is noticeably cleaner; the sketch just below shows the 1.x style for contrast.
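This is my own illustrative snippet on dummy data, not the official tutorial (it assumes a tensorflow 1.x install): placeholders, an explicit session, and feed_dict plumbing replace the plain Python function calls above.

import numpy as np
import tensorflow as tf  # assumes tensorflow 1.x; shown purely for comparison

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.int64, [None])
logits = tf.layers.dense(tf.layers.flatten(x), 10)
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_x = np.random.rand(32, 28, 28, 1).astype(np.float32)  # dummy batch
    batch_y = np.random.randint(0, 10, size=32)
    # every step is routed through sess.run + feed_dict instead of a direct call
    _, loss_val = sess.run([train_op, loss], feed_dict={x: batch_x, y: batch_y})
    print(loss_val)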

  • You can follow the data as it flows through the network just as in PyTorch: set a breakpoint and inspect how the value and shape of x change before and after each layer.
  • One caveat: in PyCharm, breakpoints placed inside the train_step() and test_step() functions above are never hit, so you cannot step in and inspect the actual computation (this puzzled me at first); the first sketch after this list explains why.
  • Also, the four objects train_loss, train_accuracy, test_loss and test_accuracy feel magical: I never wrote any explicit computation for them, yet train_loss.result() produces a value at the end; the second sketch after this list shows what is going on.
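On the breakpoint puzzle: @tf.function traces the Python function on its first call and compiles it into a TensorFlow graph; subsequent calls execute that graph instead of the Python body, so breakpoints inside the decorated functions are never hit. A minimal sketch (the eager switch is my addition, not part of the tutorial; early TF 2.x named it tf.config.experimental_run_functions_eagerly, TF 2.3+ uses tf.config.run_functions_eagerly):

import tensorflow as tf

@tf.function
def square(x):
    # This body runs as Python only while tf.function is tracing;
    # afterwards the compiled graph runs, so a debugger breakpoint
    # placed here is silently skipped.
    return x * x

print(square(tf.constant(3.0)))  # runs the compiled graph

# For debugging, force decorated functions to run eagerly again
# (assumption: TF 2.3+ naming; early 2.x used the experimental_ prefix).
tf.config.run_functions_eagerly(True)
print(square(tf.constant(3.0)))  # now the Python body executes and breakpoints fire
tf.config.run_functions_eagerly(False)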
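As for the "magical" metrics: tf.keras.metrics instances are stateful accumulators. Each call such as train_loss(loss) updates internal variables (a running total and a count), and .result() simply reads the accumulated value. A minimal sketch of the same behavior:

import tensorflow as tf

m = tf.keras.metrics.Mean(name='demo_loss')
m(2.0)                     # state: total=2.0, count=1
m(4.0)                     # state: total=6.0, count=2
print(m.result().numpy())  # 3.0 -- the running mean of everything seen so far
m.reset_states()           # clear the accumulated state (e.g. between epochs)
print(m.result().numpy())  # 0.0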

That wraps up my first taste of tensorflow 2.0.
For now, I still lean toward PyTorch.
