Paddle Image Segmentation Learning Summary

2024-02-27 01:58



Contents

  • Preface
  • 1. Overview of Image Segmentation
  • 2. FCN
  • 3. U-Net
  • 4. PSPNet
  • 5. DeepLab


Preface

Course link: https://aistudio.baidu.com/aistudio/course/introduce/1767



1. Overview of Image Segmentation

Depending on the task and data type, image segmentation (pixel-level classification) includes:

  • image semantic segmentation
  • image instance segmentation
  • image panoptic segmentation
  • video object segmentation (VOS)
  • video instance segmentation (VIS)

Examples:

  • Semantic segmentation: assign a class label to every pixel
  • Instance segmentation: predict a mask for each detected object (one per box)
  • Panoptic segmentation: classify background pixels and predict a mask for each object in a box
  • VOS: the target's mask is usually given (e.g. in the first frame); predict that target's mask in the remaining frames
  • VIS: predict each target's mask from its detection box
    (figure omitted)

Evaluation metrics for segmentation networks: mIoU and mAcc (a minimal numpy sketch follows the list)

  • mean intersection-over-union (mIoU)
    the intersection-over-union between prediction and ground truth, computed per class and averaged over all classes
  • mean accuracy (mAcc)
    correctly classified pixels / total pixels of that class, computed per class and averaged over all classes
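
The sketch below is my own illustration (not the course's evaluation code): build a confusion matrix over all pixels, then average the per-class IoU and per-class accuracy.

import numpy as np

def confusion_matrix(pred, label, num_classes):
    # pred and label are integer class maps of the same shape
    mask = (label >= 0) & (label < num_classes)
    return np.bincount(
        num_classes * label[mask].astype(int) + pred[mask],
        minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_macc(hist):
    # hist[i, j] = number of pixels of true class i predicted as class j
    tp = np.diag(hist).astype(np.float64)
    iou = tp / (hist.sum(axis=1) + hist.sum(axis=0) - tp + 1e-10)  # per-class IoU
    acc = tp / (hist.sum(axis=1) + 1e-10)                          # per-class accuracy
    return iou.mean(), acc.mean()

pred = np.random.randint(0, 3, (4, 64, 64))
label = np.random.randint(0, 3, (4, 64, 64))
print(miou_macc(confusion_matrix(pred, label, num_classes=3)))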

2. FCN

Network structure: FCN-8s uses a VGG-style fully convolutional encoder, replaces the fully connected layers with 1x1 convolutions, fuses the upsampled score map with the pool4 (1/16) and pool3 (1/8) features by elementwise addition, and finally upsamples back to the input size. (figure omitted)

Code (example):

import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import to_variable
from paddle.fluid.dygraph import Conv2D
from paddle.fluid.dygraph import Conv2DTranspose
from paddle.fluid.dygraph import Dropout
from paddle.fluid.dygraph import BatchNorm
from paddle.fluid.dygraph import Pool2D
from paddle.fluid.dygraph import Linear
# from vgg import VGG16BN

class VGG(fluid.dygraph.Layer):
    # VGG backbone kept from the original notes; note that make_layer is not
    # defined in this snippet, and the class is not used by FCN8s below
    def __init__(self, layers=16, use_bn=False, num_classes=1000):
        super(VGG, self).__init__()
        self.layers = layers
        self.use_bn = use_bn
        supported_layers = [16, 19]
        assert layers in supported_layers
        if layers == 16:
            depth = [2, 2, 3, 3, 3]
        elif layers == 19:
            depth = [2, 2, 4, 4, 4]
        num_channels = [3, 64, 128, 256, 512]
        num_filters = [64, 128, 256, 512, 512]
        self.layer1 = fluid.dygraph.Sequential(*self.make_layer(num_channels[0], num_filters[0], depth[0], use_bn, name='layer1'))
        self.layer2 = fluid.dygraph.Sequential(*self.make_layer(num_channels[1], num_filters[1], depth[1], use_bn, name='layer2'))
        self.layer3 = fluid.dygraph.Sequential(*self.make_layer(num_channels[2], num_filters[2], depth[2], use_bn, name='layer3'))
        self.layer4 = fluid.dygraph.Sequential(*self.make_layer(num_channels[3], num_filters[3], depth[3], use_bn, name='layer4'))
        self.layer5 = fluid.dygraph.Sequential(*self.make_layer(num_channels[4], num_filters[4], depth[4], use_bn, name='layer5'))
        self.classifier = fluid.dygraph.Sequential(
                Linear(input_dim=512 * 7 * 7, output_dim=4096, act='relu'),
                Dropout(),
                Linear(input_dim=4096, output_dim=4096, act='relu'),
                Dropout(),
                Linear(input_dim=4096, output_dim=num_classes))
        self.out_dim = 512 * 7 * 7

    def forward(self, inputs):
        x = self.layer1(inputs)
        x = fluid.layers.pool2d(x, pool_size=2, pool_stride=2)
        x = self.layer2(x)
        x = fluid.layers.pool2d(x, pool_size=2, pool_stride=2)
        x = self.layer3(x)
        x = fluid.layers.pool2d(x, pool_size=2, pool_stride=2)
        x = self.layer4(x)
        x = fluid.layers.pool2d(x, pool_size=2, pool_stride=2)
        x = self.layer5(x)
        x = fluid.layers.pool2d(x, pool_size=2, pool_stride=2)
        x = fluid.layers.adaptive_pool2d(x, pool_size=(7, 7), pool_type='avg')
        x = fluid.layers.reshape(x, shape=[-1, self.out_dim])
        x = self.classifier(x)
        return x


class FCN8s(fluid.dygraph.Layer):
    def __init__(self, num_classes=59):
        super(FCN8s, self).__init__()
        self.num_classes = num_classes
        # VGG-16 style encoder
        self.layer1 = fluid.dygraph.Sequential(
                Conv2D(num_channels=3, num_filters=64, filter_size=3, padding=1),
                BatchNorm(num_channels=64, act='relu'),
                Conv2D(num_channels=64, num_filters=64, filter_size=3, padding=1),
                BatchNorm(num_channels=64, act='relu'),
                Pool2D(pool_size=2, pool_stride=2, pool_type='max'))  # 1/2
        self.layer2 = fluid.dygraph.Sequential(
                Conv2D(num_channels=64, num_filters=128, filter_size=3, padding=1),
                BatchNorm(num_channels=128, act='relu'),
                Conv2D(num_channels=128, num_filters=128, filter_size=3, padding=1),
                BatchNorm(num_channels=128, act='relu'),
                Pool2D(pool_size=2, pool_stride=2, pool_type='max'))  # 1/4
        self.layer3 = fluid.dygraph.Sequential(
                Conv2D(num_channels=128, num_filters=256, filter_size=3, padding=1),
                BatchNorm(num_channels=256, act='relu'),
                Conv2D(num_channels=256, num_filters=256, filter_size=3, padding=1),
                BatchNorm(num_channels=256, act='relu'),
                Conv2D(num_channels=256, num_filters=256, filter_size=3, padding=1),
                BatchNorm(num_channels=256, act='relu'),
                Pool2D(pool_size=2, pool_stride=2, pool_type='max'))  # 1/8
        self.layer4 = fluid.dygraph.Sequential(
                Conv2D(num_channels=256, num_filters=512, filter_size=3, padding=1),
                BatchNorm(num_channels=512, act='relu'),
                Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1),
                BatchNorm(num_channels=512, act='relu'),
                Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1),
                BatchNorm(num_channels=512, act='relu'),
                Pool2D(pool_size=2, pool_stride=2, pool_type='max'))  # 1/16
        self.layer5 = fluid.dygraph.Sequential(
                Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1),
                BatchNorm(num_channels=512, act='relu'),
                Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1),
                BatchNorm(num_channels=512, act='relu'),
                Conv2D(num_channels=512, num_filters=512, filter_size=3, padding=1),
                BatchNorm(num_channels=512, act='relu'),
                Pool2D(pool_size=2, pool_stride=2, pool_type='max'))  # 1/32
        # fc6/fc7 replaced by 1x1 convolutions
        self.conv67 = fluid.dygraph.Sequential(
                Conv2D(num_channels=512, num_filters=512, filter_size=1),
                BatchNorm(num_channels=512, act='relu'),
                Conv2D(num_channels=512, num_filters=512, filter_size=1),
                BatchNorm(num_channels=512, act='relu'))
        # 1x1 conv so pool3 (256 channels) matches the 512-channel score map
        # (moved here from forward() so its weights are registered once)
        self.pool3_conv = Conv2D(num_channels=256, num_filters=512, filter_size=1, act='relu')
        self.conv = Conv2D(num_channels=512, num_filters=self.num_classes, filter_size=1)

    def forward(self, inputs):
        x = self.layer1(inputs)
        x = self.layer2(x)
        x = self.layer3(x)
        pool3 = x  # 1/8, 256 channels
        x = self.layer4(x)
        pool4 = x  # 1/16, 512 channels
        x = self.layer5(x)
        x = self.conv67(x)
        # upsample 1/32 -> 1/16 and fuse with pool4
        x = fluid.layers.interpolate(x, pool4.shape[2:])
        x = fluid.layers.elementwise_add(pool4, x)
        # upsample 1/16 -> 1/8 and fuse with pool3
        x = fluid.layers.interpolate(x, pool3.shape[2:])
        pool3 = self.pool3_conv(pool3)
        x = fluid.layers.elementwise_add(pool3, x)
        # upsample back to the input resolution and predict per-pixel scores
        x = fluid.layers.interpolate(x, inputs.shape[2:])
        x = self.conv(x)
        return x


def main():
    with fluid.dygraph.guard():
        x_data = np.random.rand(2, 3, 512, 512).astype(np.float32)
        x = to_variable(x_data)
        model = FCN8s(num_classes=59)
        model.eval()
        pred = model(x)
        print(pred.shape)


if __name__ == '__main__':
    main()

3. U-Net

Network structure:

  • U-shaped encoder-decoder architecture
  • the output has the same spatial size as the input
  • skip connections combine encoder and decoder features by concatenation (a minimal sketch follows below)
    (figure omitted)

Paper: U-Net: Convolutional Networks for Biomedical Image Segmentation

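My notes did not include a full U-Net implementation, so the following is only a rough sketch in the same paddle.fluid dygraph style, meant to show the concatenation-based skip connection; the MiniUNet name, channel counts, and the single down/up stage are my own simplifications, not the course code.

import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import to_variable, Conv2D, Conv2DTranspose, BatchNorm, Pool2D

class MiniUNet(fluid.dygraph.Layer):
    # hypothetical minimal U-Net: one down stage, one up stage, skip by concat
    def __init__(self, num_classes=59):
        super(MiniUNet, self).__init__()
        self.enc1 = fluid.dygraph.Sequential(
                Conv2D(3, 64, 3, padding=1), BatchNorm(64, act='relu'),
                Conv2D(64, 64, 3, padding=1), BatchNorm(64, act='relu'))
        self.pool = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.enc2 = fluid.dygraph.Sequential(
                Conv2D(64, 128, 3, padding=1), BatchNorm(128, act='relu'),
                Conv2D(128, 128, 3, padding=1), BatchNorm(128, act='relu'))
        # transposed conv doubles the spatial size: 1/2 -> 1/1
        self.up = Conv2DTranspose(128, 64, 2, stride=2)
        self.dec1 = fluid.dygraph.Sequential(
                Conv2D(128, 64, 3, padding=1), BatchNorm(64, act='relu'),  # 128 = 64 (skip) + 64 (up)
                Conv2D(64, 64, 3, padding=1), BatchNorm(64, act='relu'))
        self.head = Conv2D(64, num_classes, 1)

    def forward(self, inputs):
        e1 = self.enc1(inputs)                      # full resolution, 64 channels
        e2 = self.enc2(self.pool(e1))               # 1/2 resolution, 128 channels
        d1 = self.up(e2)                            # back to full resolution, 64 channels
        d1 = fluid.layers.concat([e1, d1], axis=1)  # skip connection by concatenation
        d1 = self.dec1(d1)
        return self.head(d1)                        # same H x W as the input

with fluid.dygraph.guard():
    x = to_variable(np.random.rand(1, 3, 64, 64).astype(np.float32))
    print(MiniUNet()(x).shape)  # expected: [1, 59, 64, 64]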

4. PSPNet

PSP module: pool the backbone feature map at several bin sizes (1, 2, 3, 6), apply a 1x1 conv + BN to each pooled map, interpolate each back to the feature-map size, and concatenate the results with the original features. (figure omitted)

import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import to_variable
from paddle.fluid.dygraph import Layer
from paddle.fluid.dygraph import Conv2D
from paddle.fluid.dygraph import BatchNorm
from paddle.fluid.dygraph import Dropout
from resnet_dilated import ResNet50

# pool with different bin_size
# interpolate back to input size
# concat
class PSPModule(Layer):
    def __init__(self, num_channels, bin_size_list):
        super(PSPModule, self).__init__()
        self.bin_size_list = bin_size_list
        num_filters = num_channels // len(bin_size_list)
        self.features = []
        for i in range(len(bin_size_list)):
            branch = fluid.dygraph.Sequential(
                    Conv2D(num_channels, num_filters, 1),
                    BatchNorm(num_filters, act='relu'))
            # register the branch so its parameters are tracked by the model
            self.add_sublayer('psp_branch_{}'.format(i), branch)
            self.features.append(branch)

    def forward(self, inputs):
        out = [inputs]
        for idx, f in enumerate(self.features):
            # adaptive pooling to the bin size (1x1, 2x2, 3x3, 6x6)
            x = fluid.layers.adaptive_pool2d(inputs, self.bin_size_list[idx])
            x = f(x)
            # interpolate back to the input feature-map size
            x = fluid.layers.interpolate(x, inputs.shape[2:], align_corners=True)
            out.append(x)
        # concat along the channel axis: 2048 -> 2048 * 2
        out = fluid.layers.concat(out, axis=1)
        return out


class PSPNet(Layer):
    def __init__(self, num_classes=59, backbone='resnet50'):
        super(PSPNet, self).__init__()
        res = ResNet50()
        # stem: res.conv, res.pool2d_max
        self.layer0 = fluid.dygraph.Sequential(res.conv, res.pool2d_max)
        self.layer1 = res.layer1
        self.layer2 = res.layer2
        self.layer3 = res.layer3
        self.layer4 = res.layer4
        num_channels = 2048
        # psp: 2048 -> 2048*2
        self.pspmodule = PSPModule(num_channels, [1, 2, 3, 6])
        num_channels *= 2
        # cls: 2048*2 -> 512 -> num_classes
        self.classifier = fluid.dygraph.Sequential(
                Conv2D(num_channels=num_channels, num_filters=512, filter_size=3, padding=1),
                BatchNorm(512, act='relu'),
                Dropout(0.1),
                Conv2D(num_channels=512, num_filters=num_classes, filter_size=1))
        # aux: 1024 -> 256 -> num_classes (auxiliary head not implemented here)

    def forward(self, inputs):
        # aux: tmp_x = layer3
        x = self.layer0(inputs)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        aux = x
        x = self.layer4(x)
        x = self.pspmodule(x)
        x = self.classifier(x)
        x = fluid.layers.interpolate(x, inputs.shape[2:])
        return x, aux


def main():
    with fluid.dygraph.guard(fluid.CPUPlace()):
        x_data = np.random.rand(2, 3, 473, 473).astype(np.float32)
        x = to_variable(x_data)
        model = PSPNet(num_classes=59)
        model.train()
        pred, aux = model(x)
        print(pred.shape, aux.shape)


if __name__ == "__main__":
    main()

5. DeepLab

ASPP module (an upgraded multi-scale context module): a 1x1 conv branch, several 3x3 convs with different dilation rates, and an image-level pooling branch run in parallel; their outputs are concatenated and projected back to 256 channels. (figure omitted)

import paddle.fluid as fluid
from paddle.fluid.dygraph import Layer
from paddle.fluid.dygraph import Conv2D
from paddle.fluid.dygraph import BatchNorm
from resnet_dilated import ResNet50  # dilated ResNet backbone from the course code


class ASPPPooling(Layer):
    # image-level pooling branch: global pool to 1x1, 1x1 conv, upsample back
    def __init__(self, num_channels, num_filters):
        super(ASPPPooling, self).__init__()
        self.features = fluid.dygraph.Sequential(
                Conv2D(num_channels, num_filters, 1),
                BatchNorm(num_filters, act='relu'))

    def forward(self, inputs):
        n, c, h, w = inputs.shape
        x = fluid.layers.adaptive_pool2d(inputs, 1)
        x = self.features(x)
        x = fluid.layers.interpolate(x, [h, w])
        return x


class ASPPConv(fluid.dygraph.Sequential):
    # 3x3 conv branch with a given dilation rate
    def __init__(self, num_channels, num_filters, dilation):
        super(ASPPConv, self).__init__(
                Conv2D(num_channels=num_channels, num_filters=num_filters,
                       filter_size=3, padding=dilation, dilation=dilation),
                BatchNorm(num_filters, act='relu'))


class ASPPModule(Layer):
    def __init__(self, num_channels, num_filters, rates):
        super(ASPPModule, self).__init__()
        self.features = []
        # 1x1 conv branch
        branch = fluid.dygraph.Sequential(
                Conv2D(num_channels, num_filters, 1),
                BatchNorm(num_filters, act='relu'))
        self.add_sublayer('aspp_conv1x1', branch)
        self.features.append(branch)
        # image-level pooling branch
        pool_branch = ASPPPooling(num_channels, num_filters)
        self.add_sublayer('aspp_pool', pool_branch)
        self.features.append(pool_branch)
        # dilated conv branches, one per rate
        for r in rates:
            conv_branch = ASPPConv(num_channels, num_filters, r)
            self.add_sublayer('aspp_rate_{}'.format(r), conv_branch)
            self.features.append(conv_branch)
        # project the concatenated branches: 5 * 256 = 1280 -> 256
        self.project = fluid.dygraph.Sequential(
                Conv2D(1280, 256, 1),
                BatchNorm(256, act='relu'))

    def forward(self, inputs):
        res = []
        for f in self.features:
            res.append(f(inputs))
        x = fluid.layers.concat(res, axis=1)
        x = self.project(x)
        return x


class DeepLabHead(fluid.dygraph.Sequential):
    def __init__(self, num_channels, num_classes):
        super(DeepLabHead, self).__init__(
                ASPPModule(num_channels, 256, [12, 24, 36]),
                Conv2D(256, 256, 3, padding=1),
                BatchNorm(256, act='relu'),
                Conv2D(256, num_classes, 1))


class DeepLab(Layer):
    def __init__(self, num_classes=59):
        super(DeepLab, self).__init__()
        bone = ResNet50(pretrained=False, duplicate_blocks=True)
        # stem
        self.layer0 = fluid.dygraph.Sequential(bone.conv, bone.pool2d_max)
        self.layer1 = bone.layer1
        self.layer2 = bone.layer2
        self.layer3 = bone.layer3
        self.layer4 = bone.layer4
        # multi-grid: duplicated dilated blocks
        self.layer5 = bone.layer5
        self.layer6 = bone.layer6
        self.layer7 = bone.layer7
        # feature_dim = 2048
        self.classifier = DeepLabHead(2048, num_classes)

    def forward(self, inputs):
        n, c, h, w = inputs.shape
        x = self.layer0(inputs)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.layer6(x)
        x = self.layer7(x)
        x = self.classifier(x)
        x = fluid.layers.interpolate(x, [h, w], align_corners=False)
        return x
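
For a quick shape check, mirroring the main() functions in the earlier sections (this assumes the DeepLab class above and the course's resnet_dilated module are importable; the expected output shape is my own note):

import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph import to_variable

def main():
    with fluid.dygraph.guard(fluid.CPUPlace()):
        x_data = np.random.rand(2, 3, 512, 512).astype(np.float32)
        x = to_variable(x_data)
        model = DeepLab(num_classes=59)
        model.eval()
        pred = model(x)
        print(pred.shape)  # expected: [2, 59, 512, 512]

if __name__ == '__main__':
    main()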



