2.9-tf2-Data Augmentation-tf_flowers

2023-11-07 19:51
Tags: data, tf, 2.9, augmentation, flowers, tf2

This article walks through data augmentation on the tf_flowers dataset with TensorFlow 2. We hope it provides a useful reference for developers working on this problem; follow along to learn the workflow.

Contents

    • 1. Import packages
    • 2. Load the data
    • 3. Data preprocessing
    • 4. Data augmentation
    • 5. Two ways to use the preprocessing layers
    • 6. Apply the preprocessing layers to the datasets
    • 7. Train a model
    • 8. Custom data augmentation
    • 9. Using tf.image

The tf_flowers dataset

1. Import packages

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist

2. Load the data

(train_ds, val_ds, test_ds), metadata = tfds.load(
    'tf_flowers',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True,
    as_supervised=True,
)

# The flowers dataset has five classes.
num_classes = metadata.features['label'].num_classes
print(num_classes)
5

Retrieve a sample image:

# Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
get_label_name = metadata.features['label'].int2str

image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))


3. Data preprocessing

Use Keras preprocessing layers

# Resizing and rescaling
# You can use preprocessing layers to resize your images to a consistent shape,
# and to rescale pixel values.
IMG_SIZE = 180

resize_and_rescale = tf.keras.Sequential([
    layers.experimental.preprocessing.Resizing(IMG_SIZE, IMG_SIZE),
    layers.experimental.preprocessing.Rescaling(1./255)
])
# Note: the rescaling layer above standardizes pixel values to [0, 1]. If instead
# you wanted [-1, 1], you would write Rescaling(1./127.5, offset=-1).

result = resize_and_rescale(image)
_ = plt.imshow(result)


# You can verify the pixels are in [0, 1].
print("Min and max pixel values:", result.numpy().min(), result.numpy().max())
Min and max pixel values: 0.0 1.0

4. Data augmentation

# Data augmentation
# You can use preprocessing layers for data augmentation as well.
# Let's create a few preprocessing layers and apply them repeatedly to the same image.
data_augmentation = tf.keras.Sequential([
    layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
    layers.experimental.preprocessing.RandomRotation(0.2),
])

# Add the image to a batch.
image = tf.expand_dims(image, 0)

plt.figure(figsize=(10, 10))
for i in range(9):
    augmented_image = data_augmentation(image)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image[0])
    plt.axis("off")


There are a variety of preprocessing layers you can use for data augmentation, including layers.RandomContrast, layers.RandomCrop, layers.RandomZoom, and others; a sketch combining a few of them follows.
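For instance, several of these layers can be chained into a second pipeline. This is only a sketch; the factors below are arbitrary illustrative choices, not tuned values.

# Sketch: combine a few other built-in augmentation layers.
more_augmentation = tf.keras.Sequential([
    layers.experimental.preprocessing.RandomContrast(0.3),
    layers.experimental.preprocessing.RandomZoom(0.2),
    layers.experimental.preprocessing.RandomTranslation(0.1, 0.1),
])
augmented_image = more_augmentation(image)  # `image` is the batched image from above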

5. Two ways to use the preprocessing layers

There are two ways you can use these preprocessing layers, with important tradeoffs.

Option 1: Make the preprocessing layers part of your model

model = tf.keras.Sequential([
    resize_and_rescale,
    data_augmentation,
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    # Rest of your model
])

There are two important points to be aware of in this case:

    • Data augmentation will run on-device, synchronously with the rest of your layers, and benefit from GPU acceleration.
    • When you export your model using model.save, the preprocessing layers will be saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers). This can save you from the effort of having to reimplement that logic server-side.

Note: Data augmentation is inactive at test time, so input images will only be augmented during calls to model.fit (not model.evaluate or model.predict). A hedged sketch of the export behavior follows.
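To illustrate the export point, here is a minimal, hypothetical sketch: the tiny architecture and the path 'flowers_model' are example choices, not the article's model.

# Sketch: a small model that bakes in the preprocessing defined above.
demo_model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 3)),  # accepts raw, un-resized images
    resize_and_rescale,
    data_augmentation,
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(num_classes),
])
demo_model.save('flowers_model')  # the preprocessing layers are serialized too
reloaded = tf.keras.models.load_model('flowers_model')  # ready for raw images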
Option 2: Apply the preprocessing layers to your dataset

aug_ds = train_ds.map(
    lambda x, y: (resize_and_rescale(x, training=True), y))

With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case:

    • Data augmentation will happen asynchronously on the CPU and is non-blocking. You can overlap the training of your model on the GPU with data preprocessing, using Dataset.prefetch, shown below.
    • The preprocessing layers will not be exported with the model when you call model.save. You will need to attach them to your model before saving it or reimplement them server-side. After training, you can attach the preprocessing layers before export, as sketched below.
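What "attaching the preprocessing layers before export" could look like, as a hedged sketch: it assumes `model` is a trained Keras model such as the one built in section 7, and the path name is illustrative.

# Sketch: wrap a trained model with resize/rescale so the exported artifact
# accepts raw images. Augmentation is deliberately left out, since it should
# not run at inference time.
export_model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 3)),
    resize_and_rescale,
    model,  # assumed: the trained model from section 7
])
export_model.save('exported_flowers_model')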

6. Apply the preprocessing layers to the datasets

Configure the train, validation, and test datasets with the preprocessing layers you created above. You will also configure the datasets for performance, using parallel reads and buffered prefetching to yield batches from disk without I/O becoming blocking.
Note: data augmentation should only be applied to the training set.
batch_size = 32
AUTOTUNE = tf.data.experimental.AUTOTUNE

def prepare(ds, shuffle=False, augment=False):
    # Resize and rescale all datasets.
    ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
                num_parallel_calls=AUTOTUNE)

    if shuffle:
        ds = ds.shuffle(1000)

    # Batch all datasets.
    ds = ds.batch(batch_size)

    # Use data augmentation only on the training set.
    if augment:
        ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
                    num_parallel_calls=AUTOTUNE)

    # Use buffered prefetching on all datasets.
    return ds.prefetch(buffer_size=AUTOTUNE)
train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)

7. Train a model

model = tf.keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

epochs = 5
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Epoch 1/5
92/92 [==============================] - 30s 315ms/step - loss: 1.5078 - accuracy: 0.3428 - val_loss: 1.0809 - val_accuracy: 0.6240
Epoch 2/5
92/92 [==============================] - 28s 303ms/step - loss: 1.0781 - accuracy: 0.5724 - val_loss: 0.9762 - val_accuracy: 0.6322
Epoch 3/5
92/92 [==============================] - 28s 295ms/step - loss: 1.0083 - accuracy: 0.5900 - val_loss: 0.9570 - val_accuracy: 0.6376
Epoch 4/5
92/92 [==============================] - 28s 300ms/step - loss: 0.9537 - accuracy: 0.6116 - val_loss: 0.9081 - val_accuracy: 0.6485
Epoch 5/5
92/92 [==============================] - 28s 301ms/step - loss: 0.8816 - accuracy: 0.6525 - val_loss: 0.8353 - val_accuracy: 0.6594
loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
12/12 [==============================] - 1s 83ms/step - loss: 0.8226 - accuracy: 0.6567
Accuracy 0.6566757559776306
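To inspect the run, the history object returned by model.fit can be plotted. A minimal sketch, using the metric keys Keras records for the compile settings above:

# Sketch: compare training and validation accuracy across epochs.
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
plt.figure()
plt.plot(range(epochs), acc, label='Training accuracy')
plt.plot(range(epochs), val_acc, label='Validation accuracy')
plt.xlabel('Epoch')
plt.legend()
_ = plt.title('Training and validation accuracy')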

8. Custom data augmentation

First, you will create a layers.Lambda layer. This is a good way to write concise code. Next, you will write a new layer via subclassing, which gives you more control. Both layers will randomly invert the colors of an image, according to some probability.
def random_invert_img(x, p=0.5):
    # With probability p, invert the pixel values.
    if tf.random.uniform([]) < p:
        x = (255 - x)
    return x

def random_invert(factor=0.5):
    return layers.Lambda(lambda x: random_invert_img(x, factor))

random_invert = random_invert()

plt.figure(figsize=(10, 10))
for i in range(9):
    augmented_image = random_invert(image)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image[0].numpy().astype("uint8"))
    plt.axis("off")


# Next, implement a custom layer by subclassing.
class RandomInvert(layers.Layer):
    def __init__(self, factor=0.5, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, x):
        # Pass the stored probability through to the augmentation function.
        return random_invert_img(x, self.factor)

_ = plt.imshow(RandomInvert()(image)[0])
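Because RandomInvert subclasses layers.Layer, it composes with the built-in preprocessing layers. A small sketch; the particular combination is illustrative only:

# Sketch: mix the custom layer with a built-in one in a single pipeline.
custom_augmentation = tf.keras.Sequential([
    layers.experimental.preprocessing.RandomFlip("horizontal"),
    RandomInvert(factor=0.5),
])
_ = plt.imshow(custom_augmentation(image)[0].numpy().astype("uint8"))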


9. Using tf.image

# Since the flowers dataset was previously configured with data augmentation,
# let's reimport it to start fresh.
(train_ds, val_ds, test_ds), metadata = tfds.load(
    'tf_flowers',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True,
    as_supervised=True,
)

# Retrieve an image to work with.
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))


Let's use the following function to visualize and compare the original and augmented images side by side.

def visualize(original, augmented):
    fig = plt.figure()
    plt.subplot(1, 2, 1)
    plt.title('Original image')
    plt.imshow(original)

    plt.subplot(1, 2, 2)
    plt.title('Augmented image')
    plt.imshow(augmented)
#Data augmentation
#Flipping the image
# Flip the image either vertically or horizontally.
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)


# Grayscale an image.
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()


# Saturate an image by providing a saturation factor.
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)


#Change image brightness
# Change the brightness of the image by providing a brightness factor.
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)


#Center crop the image
# Crop the image from the center, up to the image fraction you desire.
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)


#Rotate the image
# Rotate an image by 90 degrees.
rotated = tf.image.rot90(image)
visualize(image, rotated)


#Apply augmentation to a dataset
# As before, apply data augmentation to a dataset using Dataset.map.
def resize_and_rescale(image, label):
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
    image = (image / 255.0)
    return image, label

def augment(image, label):
    image, label = resize_and_rescale(image, label)
    # Add 6 pixels of padding.
    image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6)
    # Random crop back to the original size.
    image = tf.image.random_crop(image, size=[IMG_SIZE, IMG_SIZE, 3])
    # Random brightness.
    image = tf.image.random_brightness(image, max_delta=0.5)
    image = tf.clip_by_value(image, 0, 1)
    return image, label

# Configure the datasets
train_ds = (
    train_ds
    .shuffle(1000)
    .map(augment, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)

val_ds = (
    val_ds
    .map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)

test_ds = (
    test_ds
    .map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
    .batch(batch_size)
    .prefetch(AUTOTUNE)
)

# These datasets can now be used to train a model as shown previously;
# a brief sketch follows.
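As a brief closing sketch, training on these pipelines proceeds exactly as in section 7; the small architecture below is illustrative only, reusing the compile settings from that section.

# Sketch: fit a model on the tf.image-based pipeline, mirroring section 7.
new_model = tf.keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes),
])
new_model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
history = new_model.fit(train_ds, validation_data=val_ds, epochs=epochs)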

This concludes the article on 2.9-tf2-Data Augmentation-tf_flowers. We hope it proves helpful to fellow developers!


