Paper Notes (with Code): Tumor attention networks: Better feature selection, better tumor segmentation


Published in Neural Networks (Elsevier), 2021.

1. The paper proposes an accurate automatic tumor segmentation method (TA-Net) for clinical liver computed tomography (CT) scans, built by fully exploiting convolutional neural networks and visual attention mechanisms.

2. It designs a new liver tumor segmentation pipeline that exploits various types of network modules from different perspectives: Encoder Blocks (pretrained networks), modules and blocks repeated several times (network depth), Inception Blocks and Context Blocks (network width), Decoder Blocks (parameter reduction), Skip Connections (information fusion), and Tumor Attention Blocks (visual attention scheme and network cardinality).

3. It provides an in-depth analysis and comparison of the two popular skip connection schemes (residual connection vs. concatenation connection) as well as no skip connection; a minimal illustration of the two fusion schemes follows below.
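The two schemes differ only in how the upsampled decoder features are fused with the corresponding encoder features. The sketch below is a hypothetical toy illustration (not taken from the paper's code); the TA-Net implementation further down uses the concatenation form.

import torch

def residual_fuse(decoder_feat, encoder_feat):
    # Residual connection: element-wise sum; channel counts must match.
    return decoder_feat + encoder_feat

def concat_fuse(decoder_feat, encoder_feat):
    # Concat connection: channels stack; a following conv or attention
    # block then has to mix the doubled channel count.
    return torch.cat([decoder_feat, encoder_feat], dim=1)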

Evaluation metrics

Dice coefficient (DC), Volume Overlap Error (VOE), Relative Volume Difference (RVD), Average Symmetric Surface Distance (ASD, also written ASSD), Root Mean Square Symmetric Surface Distance (RMSD), and Maximum Surface Distance (MSD).
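As a quick reference, the volume-based metrics can be computed directly from binary masks. The sketch below is my own minimal NumPy illustration, not code from the paper; the surface metrics (ASD, RMSD, MSD) additionally require boundary extraction and distance transforms, so they are omitted here.

import numpy as np

def volume_metrics(pred, gt):
    """DC, VOE and RVD for two binary masks of identical shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())  # Dice coefficient (DC)
    voe = 1.0 - inter / union                     # Volume Overlap Error (VOE)
    rvd = (pred.sum() - gt.sum()) / gt.sum()      # signed Relative Volume Difference (RVD)
    return dice, voe, rvd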

import torch
import torch.nn as nn
from torchvision import models
import torch.nn.functional as F
from functools import partial

import Constants  # project-local config module; provides Constants.BINARY_CLASS

nonlinearity = partial(F.relu, inplace=True)


class DACblock(nn.Module):
    """Context block: parallel branches of stacked atrous (dilated) convolutions
    with dilation rates 1, 3, 5, summed with the input."""

    def __init__(self, channel):
        super(DACblock, self).__init__()
        self.dilate1 = nn.Conv2d(channel, channel, kernel_size=3, dilation=1, padding=1)
        self.dilate2 = nn.Conv2d(channel, channel, kernel_size=3, dilation=3, padding=3)
        self.dilate3 = nn.Conv2d(channel, channel, kernel_size=3, dilation=5, padding=5)
        self.conv1x1 = nn.Conv2d(channel, channel, kernel_size=1, dilation=1, padding=0)
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
                if m.bias is not None:
                    m.bias.data.zero_()

    def forward(self, x):
        dilate1_out = nonlinearity(self.dilate1(x))
        dilate2_out = nonlinearity(self.conv1x1(self.dilate2(x)))
        dilate3_out = nonlinearity(self.conv1x1(self.dilate2(self.dilate1(x))))
        dilate4_out = nonlinearity(self.conv1x1(self.dilate3(self.dilate2(self.dilate1(x)))))
        out = x + dilate1_out + dilate2_out + dilate3_out + dilate4_out
        return out


class DACblock_without_atrous(nn.Module):
    """Ablation variant of DACblock with all dilation rates set to 1."""

    def __init__(self, channel):
        super(DACblock_without_atrous, self).__init__()
        self.dilate1 = nn.Conv2d(channel, channel, kernel_size=3, dilation=1, padding=1)
        self.dilate2 = nn.Conv2d(channel, channel, kernel_size=3, dilation=1, padding=1)
        self.dilate3 = nn.Conv2d(channel, channel, kernel_size=3, dilation=1, padding=1)
        self.conv1x1 = nn.Conv2d(channel, channel, kernel_size=1, dilation=1, padding=0)
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
                if m.bias is not None:
                    m.bias.data.zero_()

    def forward(self, x):
        dilate1_out = nonlinearity(self.dilate1(x))
        dilate2_out = nonlinearity(self.conv1x1(self.dilate2(x)))
        dilate3_out = nonlinearity(self.conv1x1(self.dilate2(self.dilate1(x))))
        dilate4_out = nonlinearity(self.conv1x1(self.dilate3(self.dilate2(self.dilate1(x)))))
        out = x + dilate1_out + dilate2_out + dilate3_out + dilate4_out
        return out


class DACblock_with_inception(nn.Module):
    """Ablation variant: inception-style 1x1/3x3 branches fused by concatenation."""

    def __init__(self, channel):
        super(DACblock_with_inception, self).__init__()
        self.dilate1 = nn.Conv2d(channel, channel, kernel_size=1, dilation=1, padding=0)
        self.dilate3 = nn.Conv2d(channel, channel, kernel_size=3, dilation=1, padding=1)
        self.conv1x1 = nn.Conv2d(2 * channel, channel, kernel_size=1, dilation=1, padding=0)
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
                if m.bias is not None:
                    m.bias.data.zero_()

    def forward(self, x):
        dilate1_out = nonlinearity(self.dilate1(x))
        dilate2_out = nonlinearity(self.dilate3(self.dilate1(x)))
        dilate_concat = nonlinearity(self.conv1x1(torch.cat([dilate1_out, dilate2_out], 1)))
        dilate3_out = nonlinearity(self.dilate1(dilate_concat))
        out = x + dilate3_out
        return out


class DACblock_with_inception_blocks(nn.Module):
    """Ablation variant: parallel 1x1 / 3x3 / 5x5 convs and max pooling, summed."""

    def __init__(self, channel):
        super(DACblock_with_inception_blocks, self).__init__()
        self.conv1x1 = nn.Conv2d(channel, channel, kernel_size=1, dilation=1, padding=0)
        self.conv3x3 = nn.Conv2d(channel, channel, kernel_size=3, dilation=1, padding=1)
        self.conv5x5 = nn.Conv2d(channel, channel, kernel_size=5, dilation=1, padding=2)
        self.pooling = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        for m in self.modules():
            if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
                if m.bias is not None:
                    m.bias.data.zero_()

    def forward(self, x):
        dilate1_out = nonlinearity(self.conv1x1(x))
        dilate2_out = nonlinearity(self.conv3x3(self.conv1x1(x)))
        dilate3_out = nonlinearity(self.conv5x5(self.conv1x1(x)))
        dilate4_out = self.pooling(x)
        out = dilate1_out + dilate2_out + dilate3_out + dilate4_out
        return out


class PSPModule(nn.Module):
    """Pyramid scene parsing module: multi-scale adaptive average pooling branches,
    upsampled back to input size and fused by a 1x1 bottleneck conv."""

    def __init__(self, features, out_features=1024, sizes=(2, 3, 6, 14)):
        super().__init__()
        self.stages = nn.ModuleList([self._make_stage(features, size) for size in sizes])
        self.bottleneck = nn.Conv2d(features * (len(sizes) + 1), out_features, kernel_size=1)
        self.relu = nn.ReLU()

    def _make_stage(self, features, size):
        prior = nn.AdaptiveAvgPool2d(output_size=(size, size))
        conv = nn.Conv2d(features, features, kernel_size=1, bias=False)
        return nn.Sequential(prior, conv)

    def forward(self, feats):
        h, w = feats.size(2), feats.size(3)
        priors = [F.interpolate(stage(feats), size=(h, w), mode='bilinear')
                  for stage in self.stages] + [feats]
        bottle = self.bottleneck(torch.cat(priors, 1))
        return self.relu(bottle)


class SPPblock(nn.Module):
    """Spatial pyramid pooling: four max-pool scales, each reduced to a single
    channel and upsampled, then concatenated with the input (adds 4 channels)."""

    def __init__(self, in_channels):
        super(SPPblock, self).__init__()
        self.pool1 = nn.MaxPool2d(kernel_size=[2, 2], stride=2)
        self.pool2 = nn.MaxPool2d(kernel_size=[3, 3], stride=3)
        self.pool3 = nn.MaxPool2d(kernel_size=[5, 5], stride=5)
        self.pool4 = nn.MaxPool2d(kernel_size=[6, 6], stride=6)
        self.conv = nn.Conv2d(in_channels=in_channels, out_channels=1, kernel_size=1, padding=0)

    def forward(self, x):
        h, w = x.size(2), x.size(3)
        layer1 = F.interpolate(self.conv(self.pool1(x)), size=(h, w), mode='bilinear')
        layer2 = F.interpolate(self.conv(self.pool2(x)), size=(h, w), mode='bilinear')
        layer3 = F.interpolate(self.conv(self.pool3(x)), size=(h, w), mode='bilinear')
        layer4 = F.interpolate(self.conv(self.pool4(x)), size=(h, w), mode='bilinear')
        out = torch.cat([layer1, layer2, layer3, layer4, x], 1)
        return out


class DecoderBlock(nn.Module):
    """Parameter-reducing decoder: 1x1 conv to C/4 channels, 3x3 transposed conv
    for 2x upsampling, then 1x1 conv to the target channel count."""

    def __init__(self, in_channels, n_filters):
        super(DecoderBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, in_channels // 4, 1)
        self.norm1 = nn.BatchNorm2d(in_channels // 4)
        self.relu1 = nonlinearity
        self.deconv2 = nn.ConvTranspose2d(in_channels // 4, in_channels // 4, 3,
                                          stride=2, padding=1, output_padding=1)
        self.norm2 = nn.BatchNorm2d(in_channels // 4)
        self.relu2 = nonlinearity
        self.conv3 = nn.Conv2d(in_channels // 4, n_filters, 1)
        self.norm3 = nn.BatchNorm2d(n_filters)
        self.relu3 = nonlinearity

    def forward(self, x):
        x = self.relu1(self.norm1(self.conv1(x)))
        x = self.relu2(self.norm2(self.deconv2(x)))
        x = self.relu3(self.norm3(self.conv3(x)))
        return x


class ChannelMeanAttention(nn.Module):
    """Squeeze-and-excitation style channel attention using mean pooling only."""

    def __init__(self, num_channels):
        super(ChannelMeanAttention, self).__init__()
        num_channels_reduced = num_channels // 2
        self.fc1 = nn.Linear(num_channels, num_channels_reduced, bias=True)
        self.fc2 = nn.Linear(num_channels_reduced, num_channels, bias=True)
        self.relu = nonlinearity

    def forward(self, input_tensor):
        batch_size, num_channels, H, W = input_tensor.size()
        squeeze_tensor = input_tensor.view(batch_size, num_channels, -1).mean(dim=2)
        fc_out_1 = self.relu(self.fc1(squeeze_tensor))
        fc_out_2 = torch.sigmoid(self.fc2(fc_out_1))
        a, b = squeeze_tensor.size()
        output_tensor = torch.mul(input_tensor, fc_out_2.view(a, b, 1, 1))
        return output_tensor


class ChannelMeanMaxAttention(nn.Module):
    """Channel attention of the Tumor Attention block: a shared MLP over mean- and
    max-pooled channel descriptors, summed and passed through a sigmoid gate."""

    def __init__(self, num_channels):
        super(ChannelMeanMaxAttention, self).__init__()
        num_channels_reduced = num_channels // 2
        self.fc1 = nn.Linear(num_channels, num_channels_reduced, bias=True)
        self.fc2 = nn.Linear(num_channels_reduced, num_channels, bias=True)
        self.relu = nonlinearity

    def forward(self, input_tensor):
        batch_size, num_channels, H, W = input_tensor.size()
        # Mean-pooled branch
        squeeze_tensor_mean = input_tensor.view(batch_size, num_channels, -1).mean(dim=2)
        fc_out_1_mean = self.relu(self.fc1(squeeze_tensor_mean))
        fc_out_2_mean = self.fc2(fc_out_1_mean)
        # Max-pooled branch (shares fc1/fc2 weights with the mean branch)
        squeeze_tensor_max = input_tensor.view(batch_size, num_channels, -1).max(dim=2)[0]
        fc_out_1_max = self.relu(self.fc1(squeeze_tensor_max))
        fc_out_2_max = self.fc2(fc_out_1_max)
        a, b = squeeze_tensor_mean.size()
        fc_out_2 = torch.sigmoid(torch.add(fc_out_2_mean, fc_out_2_max))
        output_tensor = torch.mul(input_tensor, fc_out_2.view(a, b, 1, 1))
        return output_tensor


class SpatialAttention(nn.Module):
    """Spatial attention of the Tumor Attention block: channel-wise mean and max
    maps, fused by a 7x7 conv into a sigmoid-gated spatial mask."""

    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()
        padding = 3  # keeps the spatial size for kernel_size=7
        self.conv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        input_tensor = x
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        x = torch.cat([avg_out, max_out], dim=1)
        x = self.conv1(x)
        return self.sigmoid(x) * input_tensor


class TA_Net_(nn.Module):
    """TA-Net: ResNet-34 encoder, DAC + SPP center, decoder with concatenation
    skip connections, each fusion refined by channel and spatial attention."""

    def __init__(self, num_classes=Constants.BINARY_CLASS, num_channels=3):
        super(TA_Net_, self).__init__()
        filters = [64, 128, 256, 512]
        resnet = models.resnet34(pretrained=True)  # pretrained encoder
        self.firstconv = resnet.conv1
        self.firstbn = resnet.bn1
        self.firstrelu = resnet.relu
        self.firstmaxpool = resnet.maxpool
        self.encoder1 = resnet.layer1
        self.encoder2 = resnet.layer2
        self.encoder3 = resnet.layer3
        self.encoder4 = resnet.layer4

        self.dblock = DACblock(512)
        self.spp = SPPblock(512)

        self.decoder4 = DecoderBlock(516, filters[2])  # 512 + 4 channels from SPP
        self.channelmeanmaxattention1 = ChannelMeanMaxAttention(filters[2] * 2)
        self.spatialattention1 = SpatialAttention()
        self.decoder3 = DecoderBlock(filters[2] * 2, filters[1])
        self.channelmeanmaxattention2 = ChannelMeanMaxAttention(filters[1] * 2)
        self.spatialattention2 = SpatialAttention()
        self.decoder2 = DecoderBlock(filters[1] * 2, filters[0])
        self.channelmeanmaxattention3 = ChannelMeanMaxAttention(filters[0] * 2)
        self.spatialattention3 = SpatialAttention()
        self.decoder1 = DecoderBlock(filters[0] * 2, filters[0])

        self.finaldeconv1 = nn.ConvTranspose2d(filters[0], 32, 4, 2, 1)
        self.finalrelu1 = nonlinearity
        self.finalconv2 = nn.Conv2d(32, 32, 3, padding=1)
        self.finalrelu2 = nonlinearity
        self.finalconv3 = nn.Conv2d(32, num_classes, 3, padding=1)

    def forward(self, x):
        # Encoder
        x = self.firstconv(x)
        x = self.firstbn(x)
        x = self.firstrelu(x)
        x = self.firstmaxpool(x)
        e1 = self.encoder1(x)
        e2 = self.encoder2(e1)
        e3 = self.encoder3(e2)
        e4 = self.encoder4(e3)

        # Center
        e4 = self.dblock(e4)
        e4 = self.spp(e4)

        # Decoder: concat skip connection, then channel + spatial attention
        d4_before = torch.cat([self.decoder4(e4), e3], 1)
        d4 = self.channelmeanmaxattention1(d4_before)
        d4 = self.spatialattention1(d4)
        d3_before = torch.cat([self.decoder3(d4), e2], 1)
        d3 = self.channelmeanmaxattention2(d3_before)
        d3 = self.spatialattention2(d3)
        d2_before = torch.cat([self.decoder2(d3), e1], 1)
        d2 = self.channelmeanmaxattention3(d2_before)
        d2 = self.spatialattention3(d2)
        d1 = self.decoder1(d2)

        out = self.finaldeconv1(d1)
        out = self.finalrelu1(out)
        out = self.finalconv2(out)
        out = self.finalrelu2(out)
        out = self.finalconv3(out)

        # Side outputs are the pre-attention fusion features.
        return torch.sigmoid(out), [d4_before, d3_before, d2_before]
        # return torch.sigmoid(out), [d4, d3, d2]

    def output_features(self, x):  # note from the original code: this path does not work
        # Encoder
        x = self.firstconv(x)
        x = self.firstbn(x)
        x = self.firstrelu(x)
        x = self.firstmaxpool(x)
        e1 = self.encoder1(x)
        e2 = self.encoder2(e1)
        e3 = self.encoder3(e2)
        e4 = self.encoder4(e3)

        # Center
        e4 = self.dblock(e4)
        e4 = self.spp(e4)

        # Decoder
        d4 = torch.cat([self.decoder4(e4), e3], 1)
        d4 = self.channelmeanmaxattention1(d4)
        d4 = self.spatialattention1(d4)
        d3 = torch.cat([self.decoder3(d4), e2], 1)
        d3 = self.channelmeanmaxattention2(d3)
        d3 = self.spatialattention2(d3)
        d2 = torch.cat([self.decoder2(d3), e1], 1)
        d2 = self.channelmeanmaxattention3(d2)
        d2 = self.spatialattention3(d2)
        d1 = self.decoder1(d2)

        out = self.finaldeconv1(d1)
        out = self.finalrelu1(out)
        out = self.finalconv2(out)
        out = self.finalrelu2(out)
        out = self.finalconv3(out)
        return d4, d3, d2
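A minimal smoke test of the network, as a sketch under stated assumptions: it presumes Constants.BINARY_CLASS == 1, an available download of the pretrained ResNet-34 weights, and an input side length that is a multiple of 32 so the skip-connection shapes line up.

if __name__ == '__main__':
    net = TA_Net_(num_classes=1, num_channels=3).eval()
    x = torch.randn(2, 3, 256, 256)   # two 256x256 slices, 3 channels
    with torch.no_grad():
        probs, side_feats = net(x)
    print(probs.shape)                # torch.Size([2, 1, 256, 256])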
