A Brief Walkthrough of the SEAN Code

2024-03-06 20:38


"SEAN: Image Synthesis with Semantic Region-Adaptive Normalization" was a CVPR 2020 oral paper; this post walks through its code.

Since SPADE has already been analyzed in an earlier post, this one focuses on what SEAN changes on top of SPADE.

Difference 1: models/networks/generator.py

    def __init__(self, opt):
        super().__init__()
        self.opt = opt
        nf = opt.ngf

        self.sw, self.sh = self.compute_latent_vector_size(opt)

        self.Zencoder = Zencoder(3, 512)
        ### In SEAN, a style-encoding (VAE-like) step is present by default, so Zencoder is analyzed below

        self.fc = nn.Conv2d(self.opt.semantic_nc, 16 * nf, 3, padding=1)

        self.head_0 = SPADEResnetBlock(16 * nf, 16 * nf, opt, Block_Name='head_0')

        self.G_middle_0 = SPADEResnetBlock(16 * nf, 16 * nf, opt, Block_Name='G_middle_0')
        self.G_middle_1 = SPADEResnetBlock(16 * nf, 16 * nf, opt, Block_Name='G_middle_1')

        self.up_0 = SPADEResnetBlock(16 * nf, 8 * nf, opt, Block_Name='up_0')
        self.up_1 = SPADEResnetBlock(8 * nf, 4 * nf, opt, Block_Name='up_1')
        self.up_2 = SPADEResnetBlock(4 * nf, 2 * nf, opt, Block_Name='up_2')
        self.up_3 = SPADEResnetBlock(2 * nf, 1 * nf, opt, Block_Name='up_3', use_rgb=False)

        final_nc = nf

        if opt.num_upsampling_layers == 'most':
            self.up_4 = SPADEResnetBlock(1 * nf, nf // 2, opt, Block_Name='up_4')
            final_nc = nf // 2

        self.conv_img = nn.Conv2d(final_nc, 3, 3, padding=1)

        self.up = nn.Upsample(scale_factor=2)
        # self.up = nn.Upsample(scale_factor=2, mode='bilinear')

    def forward(self, input, rgb_img, obj_dic=None):

        seg = input

        x = F.interpolate(seg, size=(self.sh, self.sw))
        x = self.fc(x)

        style_codes = self.Zencoder(input=rgb_img, segmap=seg)

        x = self.head_0(x, seg, style_codes, obj_dic=obj_dic)

        x = self.up(x)
        x = self.G_middle_0(x, seg, style_codes, obj_dic=obj_dic)

        if self.opt.num_upsampling_layers == 'more' or \
           self.opt.num_upsampling_layers == 'most':
            x = self.up(x)

        x = self.G_middle_1(x, seg, style_codes, obj_dic=obj_dic)

        x = self.up(x)
        x = self.up_0(x, seg, style_codes, obj_dic=obj_dic)
        x = self.up(x)
        x = self.up_1(x, seg, style_codes, obj_dic=obj_dic)
        x = self.up(x)
        x = self.up_2(x, seg, style_codes, obj_dic=obj_dic)
        x = self.up(x)
        x = self.up_3(x, seg, style_codes, obj_dic=obj_dic)

        # if self.opt.num_upsampling_layers == 'most':
        #     x = self.up(x)
        #     x = self.up_4(x, seg, style_codes, obj_dic=obj_dic)

        x = self.conv_img(F.leaky_relu(x, 2e-1))
        x = F.tanh(x)

        return x
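
To make the coarse-to-fine flow in forward concrete, here is a minimal shape trace (illustrative only, not repo code), assuming a 256x256 label map and the setting with five x2 upsampling stages: the segmentation map is first shrunk to the initial latent grid by F.interpolate, then restored step by step by self.up.

import torch
import torch.nn.functional as F

# Hypothetical sizes for illustration: batch of 1, 19 semantic classes, 256x256 labels
seg = torch.randn(1, 19, 256, 256)

# With five x2 upsampling stages, the initial latent grid is 256 / 2**5 = 8
sh, sw = 8, 8
x = F.interpolate(seg, size=(sh, sw))
print(x.shape)                      # torch.Size([1, 19, 8, 8])

# each self.up(x) doubles the spatial size; five doublings restore 8 -> 256
for _ in range(5):
    x = F.interpolate(x, scale_factor=2)
print(x.shape)                      # torch.Size([1, 19, 256, 256])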

Difference 2: models/networks/architecture.py

class Zencoder(torch.nn.Module):
    def __init__(self, input_nc, output_nc, ngf=32, n_downsampling=2, norm_layer=nn.InstanceNorm2d):
        super(Zencoder, self).__init__()
        self.output_nc = output_nc

        model = [nn.ReflectionPad2d(1), nn.Conv2d(input_nc, ngf, kernel_size=3, padding=0),
                 norm_layer(ngf), nn.LeakyReLU(0.2, False)]

        ### downsample
        for i in range(n_downsampling):
            mult = 2**i
            model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1),
                      norm_layer(ngf * mult * 2), nn.LeakyReLU(0.2, False)]

        ### upsample
        for i in range(1):
            mult = 2**(n_downsampling - i)
            model += [nn.ConvTranspose2d(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1, output_padding=1),
                      norm_layer(int(ngf * mult / 2)), nn.LeakyReLU(0.2, False)]
            ### when output_padding = stride - 1, the output feature map is exactly stride times the input feature map

        model += [nn.ReflectionPad2d(1), nn.Conv2d(256, output_nc, kernel_size=3, padding=0), nn.Tanh()]
        self.model = nn.Sequential(*model)

    def forward(self, input, segmap):

        codes = self.model(input)  # input is the style image; codes is a feature map with output_nc (512) channels,
                                   # at half the input resolution (two stride-2 downsamples, one stride-2 upsample)

        segmap = F.interpolate(segmap, size=codes.size()[2:], mode='nearest')

        # print(segmap.shape)
        # print(codes.shape)

        b_size = codes.shape[0]
        # h_size = codes.shape[2]
        # w_size = codes.shape[3]
        f_size = codes.shape[1]

        s_size = segmap.shape[1]

        codes_vector = torch.zeros((b_size, s_size, f_size), dtype=codes.dtype, device=codes.device)

        ### the loop below performs region-wise average pooling
        for i in range(b_size):
            for j in range(s_size):
                component_mask_area = torch.sum(segmap.bool()[i, j])
                ### segmap.bool()[i, j] is the boolean mask of label j in batch element i
                ### torch.sum counts the True entries of this mask, giving a value in [0, H x W]

                if component_mask_area > 0:
                    ### make sure the segmap entries under this label are not all zero (zero means the pixel
                    ### belongs to no label), i.e. this class actually appears instead of being empty
                    codes_component_feature = codes[i].masked_select(segmap.bool()[i, j]).reshape(f_size, component_mask_area).mean(1)
                    ### A.masked_select(mask): returns the entries of A at positions where mask is True,
                    ### flattened into a 1-D vector whose length equals the number of True entries;
                    ### when mask and A have different shapes, broadcasting is applied
                    ### then average over the valid region for each of the f_size channels
                    codes_vector[i][j] = codes_component_feature

                    # codes_avg[i].masked_scatter_(segmap.bool()[i, j], codes_component_mu)

        return codes_vector  # the output has shape [B, s_size, f_size]
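
The nested loop above is the heart of Zencoder: region-wise average pooling. A self-contained toy version (illustrative only; the shapes and class count are made up) shows how one style code per (image, region) comes out:

import torch
import torch.nn.functional as F

# Toy setup: B images, C-dim features, S semantic classes (all values are made up)
B, C, S, H, W = 2, 4, 3, 8, 8
codes = torch.randn(B, C, H, W)
labels = torch.randint(0, S, (B, H, W))
segmap = F.one_hot(labels, S).permute(0, 3, 1, 2).float()   # [B, S, H, W] one-hot label map

codes_vector = torch.zeros(B, S, C)
for i in range(B):
    for j in range(S):
        mask = segmap.bool()[i, j]            # [H, W] mask of region j in image i
        area = int(mask.sum())
        if area > 0:
            # masked_select broadcasts the mask over the C channels, so reshaping to
            # (C, area) and averaging over dim 1 gives the mean feature of region j
            codes_vector[i, j] = codes[i].masked_select(mask).reshape(C, area).mean(1)

print(codes_vector.shape)                     # torch.Size([2, 3, 4])  ->  [B, s_size, f_size]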
class SPADEResnetBlock(nn.Module):
    def __init__(self, fin, fout, opt, Block_Name=None, use_rgb=True):
        super().__init__()

        self.use_rgb = use_rgb
        self.Block_Name = Block_Name
        self.status = opt.status

        # Attributes
        self.learned_shortcut = (fin != fout)
        fmiddle = min(fin, fout)

        # create conv layers
        self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=1)
        self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=1)
        if self.learned_shortcut:
            self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False)

        # apply spectral norm if specified
        if 'spectral' in opt.norm_G:
            self.conv_0 = spectral_norm(self.conv_0)
            self.conv_1 = spectral_norm(self.conv_1)
            if self.learned_shortcut:
                self.conv_s = spectral_norm(self.conv_s)

        # define normalization layers
        spade_config_str = opt.norm_G.replace('spectral', '')

        ###########  Modifications 1
        normtype_list = ['spadeinstance3x3', 'spadesyncbatch3x3', 'spadebatch3x3']
        our_norm_type = 'spadesyncbatch3x3'

        self.ace_0 = ACE(our_norm_type, fin, 3, ACE_Name=Block_Name + '_ACE_0', status=self.status, spade_params=[spade_config_str, fin, opt.semantic_nc], use_rgb=use_rgb)
        ###########  Modifications 1

        ###########  Modifications 1
        self.ace_1 = ACE(our_norm_type, fmiddle, 3, ACE_Name=Block_Name + '_ACE_1', status=self.status, spade_params=[spade_config_str, fmiddle, opt.semantic_nc], use_rgb=use_rgb)
        ###########  Modifications 1

        if self.learned_shortcut:
            self.ace_s = ACE(our_norm_type, fin, 3, ACE_Name=Block_Name + '_ACE_s', status=self.status, spade_params=[spade_config_str, fin, opt.semantic_nc], use_rgb=use_rgb)

    # note the resnet block with SPADE also takes in |seg|,
    # the semantic segmentation map as input
    def forward(self, x, seg, style_codes, obj_dic=None):

        x_s = self.shortcut(x, seg, style_codes, obj_dic)

        ###########  Modifications 1
        dx = self.ace_0(x, seg, style_codes, obj_dic)
        dx = self.conv_0(self.actvn(dx))

        dx = self.ace_1(dx, seg, style_codes, obj_dic)
        dx = self.conv_1(self.actvn(dx))
        ###########  Modifications 1

        out = x_s + dx
        return out

    def shortcut(self, x, seg, style_codes, obj_dic):
        if self.learned_shortcut:
            x_s = self.ace_s(x, seg, style_codes, obj_dic)
            x_s = self.conv_s(x_s)
        else:
            x_s = x
        return x_s

    def actvn(self, x):
        return F.leaky_relu(x, 2e-1)
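
A detail that is easy to miss: learned_shortcut is only True when fin != fout, in which case a 1x1 conv (preceded by ace_s) projects the skip branch so that x_s + dx is shape-compatible. A minimal standalone illustration (not repo code; the channel counts are made up):

import torch
import torch.nn as nn

fin, fout = 64, 32                              # made-up channel counts with fin != fout
x = torch.randn(1, fin, 16, 16)
dx = torch.randn(1, fout, 16, 16)               # stand-in for the ACE -> conv residual path

conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False)
out = conv_s(x) + dx                            # without the 1x1 projection this addition would fail
print(out.shape)                                # torch.Size([1, 32, 16, 16])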


Difference 3: models/networks/normalization.py

class ACE(nn.Module):
    def __init__(self, config_text, norm_nc, label_nc, ACE_Name=None, status='train', spade_params=None, use_rgb=True):
        super().__init__()

        self.ACE_Name = ACE_Name
        self.status = status
        self.save_npy = True
        self.Spade = SPADE(*spade_params)
        self.use_rgb = use_rgb
        self.style_length = 512
        self.blending_gamma = nn.Parameter(torch.zeros(1), requires_grad=True)
        self.blending_beta = nn.Parameter(torch.zeros(1), requires_grad=True)
        self.noise_var = nn.Parameter(torch.zeros(norm_nc), requires_grad=True)

        assert config_text.startswith('spade')
        parsed = re.search('spade(\D+)(\d)x\d', config_text)
        param_free_norm_type = str(parsed.group(1))
        ks = int(parsed.group(2))
        pw = ks // 2

        if param_free_norm_type == 'instance':
            self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)
        elif param_free_norm_type == 'syncbatch':
            self.param_free_norm = SynchronizedBatchNorm2d(norm_nc, affine=False)
        elif param_free_norm_type == 'batch':
            self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False)
        else:
            raise ValueError('%s is not a recognized param-free norm type in SPADE'
                             % param_free_norm_type)

        # The dimension of the intermediate embedding space. Yes, hardcoded.
        if self.use_rgb:
            self.create_gamma_beta_fc_layers()

            self.conv_gamma = nn.Conv2d(self.style_length, norm_nc, kernel_size=ks, padding=pw)
            self.conv_beta = nn.Conv2d(self.style_length, norm_nc, kernel_size=ks, padding=pw)

    def forward(self, x, segmap, style_codes=None, obj_dic=None):

        # Part 1. generate parameter-free normalized activations
        added_noise = (torch.randn(x.shape[0], x.shape[3], x.shape[2], 1).cuda() * self.noise_var).transpose(1, 3)
        normalized = self.param_free_norm(x + added_noise)

        # Part 2. produce scaling and bias conditioned on semantic map
        segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest')

        if self.use_rgb:
            [b_size, f_size, h_size, w_size] = normalized.shape
            middle_avg = torch.zeros((b_size, self.style_length, h_size, w_size), device=normalized.device)

            if self.status == 'UI_mode':
                ############## hard coding
                for i in range(1):
                    for j in range(segmap.shape[1]):

                        component_mask_area = torch.sum(segmap.bool()[i, j])

                        if component_mask_area > 0:
                            if obj_dic is None:
                                print('wrong even it is the first input')
                            else:
                                style_code_tmp = obj_dic[str(j)]['ACE']

                                middle_mu = F.relu(self.__getattr__('fc_mu' + str(j))(style_code_tmp))
                                component_mu = middle_mu.reshape(self.style_length, 1).expand(self.style_length, component_mask_area)

                                middle_avg[i].masked_scatter_(segmap.bool()[i, j], component_mu)

            else:
                for i in range(b_size):
                    for j in range(segmap.shape[1]):

                        component_mask_area = torch.sum(segmap.bool()[i, j])

                        if component_mask_area > 0:
                            middle_mu = F.relu(self.__getattr__('fc_mu' + str(j))(style_codes[i][j]))
                            component_mu = middle_mu.reshape(self.style_length, 1).expand(self.style_length, component_mask_area)

                            middle_avg[i].masked_scatter_(segmap.bool()[i, j], component_mu)

                            if self.status == 'test' and self.save_npy and self.ACE_Name == 'up_2_ACE_0':
                                tmp = style_codes[i][j].cpu().numpy()
                                dir_path = 'styles_test'

                                ############### some problem with obj_dic[i]
                                im_name = os.path.basename(obj_dic[i])
                                folder_path = os.path.join(dir_path, 'style_codes', im_name, str(j))
                                if not os.path.exists(folder_path):
                                    os.makedirs(folder_path)

                                style_code_path = os.path.join(folder_path, 'ACE.npy')
                                np.save(style_code_path, tmp)

            gamma_avg = self.conv_gamma(middle_avg)
            beta_avg = self.conv_beta(middle_avg)

            gamma_spade, beta_spade = self.Spade(segmap)

            gamma_alpha = F.sigmoid(self.blending_gamma)
            beta_alpha = F.sigmoid(self.blending_beta)

            gamma_final = gamma_alpha * gamma_avg + (1 - gamma_alpha) * gamma_spade
            beta_final = beta_alpha * beta_avg + (1 - beta_alpha) * beta_spade
            out = normalized * (1 + gamma_final) + beta_final
        else:
            gamma_spade, beta_spade = self.Spade(segmap)
            gamma_final = gamma_spade
            beta_final = beta_spade
            out = normalized * (1 + gamma_final) + beta_final

        return out

    def create_gamma_beta_fc_layers(self):

        ###################  These codes should be replaced with torch.nn.ModuleList

        style_length = self.style_length

        self.fc_mu0 = nn.Linear(style_length, style_length)
        self.fc_mu1 = nn.Linear(style_length, style_length)
        self.fc_mu2 = nn.Linear(style_length, style_length)
        self.fc_mu3 = nn.Linear(style_length, style_length)
        self.fc_mu4 = nn.Linear(style_length, style_length)
        self.fc_mu5 = nn.Linear(style_length, style_length)
        self.fc_mu6 = nn.Linear(style_length, style_length)
        self.fc_mu7 = nn.Linear(style_length, style_length)
        self.fc_mu8 = nn.Linear(style_length, style_length)
        self.fc_mu9 = nn.Linear(style_length, style_length)
        self.fc_mu10 = nn.Linear(style_length, style_length)
        self.fc_mu11 = nn.Linear(style_length, style_length)
        self.fc_mu12 = nn.Linear(style_length, style_length)
        self.fc_mu13 = nn.Linear(style_length, style_length)
        self.fc_mu14 = nn.Linear(style_length, style_length)
        self.fc_mu15 = nn.Linear(style_length, style_length)
        self.fc_mu16 = nn.Linear(style_length, style_length)
        self.fc_mu17 = nn.Linear(style_length, style_length)
        self.fc_mu18 = nn.Linear(style_length, style_length)
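
Note how the forward pass blends the per-region style parameters (gamma_avg, beta_avg) with the plain SPADE parameters through the learned weights sigmoid(blending_gamma) and sigmoid(blending_beta). Also, the repo's own comment says create_gamma_beta_fc_layers should be replaced with torch.nn.ModuleList: the 19 hand-written fc_mu layers are just one per-label linear mapping of the 512-d style code. A possible refactor could look like this (a sketch under that assumption, not the repo's code):

import torch
import torch.nn as nn

class GammaBetaFC(nn.Module):
    """Sketch of the ModuleList refactor suggested above: one Linear per semantic label."""

    def __init__(self, style_length=512, num_labels=19):
        super().__init__()
        self.fc_mu = nn.ModuleList(
            [nn.Linear(style_length, style_length) for _ in range(num_labels)]
        )

    def forward(self, style_code, label_idx):
        # replaces self.__getattr__('fc_mu' + str(j))(style_code)
        return self.fc_mu[label_idx](style_code)

# usage: fc = GammaBetaFC(); out = fc(torch.randn(512), label_idx=3)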

To be continued.
