This article is a summary of hands-on tips from training ("alchemy") computer-vision models; hopefully it offers a useful reference for developers tackling similar problems.
1. The ResNet idea, y = F(x) + x: giving each block a residual (skip) connection helps the model converge faster.
import torch.nn as nn

class Block(nn.Module):  # Encoder Block
    def __init__(self,
                 dim,               # dimension of each token
                 drop_rate=0.1,
                 switch_flag=False,
                 num_heads=8):
        super(Block, self).__init__()
        self.switch_flag = switch_flag
        self.norm1 = nn.GroupNorm(1, dim)
        # self.norm1 = nn.BatchNorm2d(dim)
        if self.switch_flag:
            self.attn = MHSA(n_dims=dim, num_heads=num_heads)
        else:
            # self.attn = nn.AdaptiveAvgPool2d((16, 16))
            self.attn = Pooling()
        self.drop_path = DropPath(drop_rate) if drop_rate > 0. else nn.Identity()
        self.norm2 = nn.GroupNorm(1, dim)
        self.mlp = MLP(in_features=dim, drop=drop_rate)

    def forward(self, x):
        # y = F(x) + x: both the token mixer and the MLP are residual branches
        x = x + self.drop_path(self.attn(self.norm1(x)))
        x = x + self.mlp(self.norm2(x))
        return x
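The Block above relies on four helper modules (MHSA, Pooling, DropPath, MLP) that the post doesn't show. Below is a minimal self-contained sketch of plausible stand-ins: a PoolFormer-style Pooling token mixer, a stochastic-depth DropPath, a 1×1-conv MLP, and an MHSA wrapper around nn.MultiheadAttention. These are my assumptions for completeness, not the author's actual implementations; in a real script they would be defined before Block.

import torch
import torch.nn as nn

class Pooling(nn.Module):
    # Assumed PoolFormer-style token mixer: average pooling minus identity
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2, count_include_pad=False)
    def forward(self, x):
        return self.pool(x) - x

class DropPath(nn.Module):
    # Stochastic depth: randomly drop the whole residual branch per sample
    def __init__(self, drop_prob=0.):
        super().__init__()
        self.drop_prob = drop_prob
    def forward(self, x):
        if self.drop_prob == 0. or not self.training:
            return x
        keep_prob = 1 - self.drop_prob
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)   # broadcast over C, H, W
        mask = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
        mask.floor_()                                  # binarize to 0 or 1
        return x.div(keep_prob) * mask

class MLP(nn.Module):
    # Channel MLP built from 1x1 convs so it operates on [N, C, H, W] directly
    def __init__(self, in_features, hidden_ratio=4, drop=0.):
        super().__init__()
        hidden = in_features * hidden_ratio
        self.fc1 = nn.Conv2d(in_features, hidden, kernel_size=1)
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden, in_features, kernel_size=1)
        self.drop = nn.Dropout(drop)
    def forward(self, x):
        return self.drop(self.fc2(self.drop(self.act(self.fc1(x)))))

class MHSA(nn.Module):
    # Self-attention over the flattened H*W positions (simplified wrapper)
    def __init__(self, n_dims, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(n_dims, num_heads, batch_first=True)
    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)             # [B, H*W, C]
        out, _ = self.attn(seq, seq, seq, need_weights=False)
        return out.transpose(1, 2).reshape(b, c, h, w)

With these in place, Block(dim=64)(torch.randn(2, 64, 16, 16)) runs end to end for both switch_flag settings.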
2. At the model's final classification output, it's best to put a normalization layer before the linear classifier. For example, a pretrained model's head might print like this:
(head): Sequential(
(global_pool): SelectAdaptivePool2d (pool_type=avg, flatten=Identity())
(norm): LayerNorm2d((512,), eps=1e-06, elementwise_affine=True)
(flatten): Flatten(start_dim=1, end_dim=-1)
(drop): Identity()
(fc): Linear(in_features=512, out_features=1000, bias=True)
)
self.num_features = dims[-1]  # channel count of the last stage
self.head = nn.Sequential(
    nn.AdaptiveAvgPool2d((1, 1)),                   # [15,64,16,16] --> [15,64,1,1]
    nn.GroupNorm(1, self.num_features, eps=1e-06),
    # nn.BatchNorm2d(self.num_features),
    nn.Flatten(1),                                  # [15,64,1,1] --> [15,64]
    nn.Linear(self.num_features, num_classes)       # [15,64] --> [15,10]
)
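As a quick sanity check of the shapes annotated above, here is a standalone version of this head run on a dummy batch (num_features = 64 and num_classes = 10 are assumed from the comments):

import torch
import torch.nn as nn

num_features, num_classes = 64, 10   # values taken from the shape comments
head = nn.Sequential(
    nn.AdaptiveAvgPool2d((1, 1)),
    nn.GroupNorm(1, num_features, eps=1e-06),
    nn.Flatten(1),
    nn.Linear(num_features, num_classes),
)
x = torch.randn(15, 64, 16, 16)
print(head(x).shape)                 # torch.Size([15, 10])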
3. For the feature maps processed inside the model's Blocks, the smaller the spatial size, the faster the model runs.
(*) For example, of the two embedding layers below, option 2, which produces 8×8 feature maps, runs faster than option 1, which produces 16×16 ones (a rough timing sketch follows the snippet).
1) self.embedding = nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, bias=False)      # [N, C, 16, 16]
2) self.embedding = nn.Conv2d(3, 64, kernel_size=(7, 7), stride=(4, 4), padding=(2, 2))   # [N, C, 8, 8]
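A rough timing sketch of this effect, reusing the Block (and helper modules) sketched in tip 1. The sizes match the shape comments above; this is single-process CPU timing with no proper benchmarking, so treat the numbers as indicative only:

import time
import torch

block = Block(dim=64).eval()         # Block from tip 1; eval() disables DropPath/Dropout

with torch.no_grad():
    for hw in (16, 8):
        x = torch.randn(15, 64, hw, hw)
        for _ in range(5):           # warm-up iterations
            block(x)
        t0 = time.perf_counter()
        for _ in range(100):
            block(x)
        dt = time.perf_counter() - t0
        print(f"{hw}x{hw}: {dt:.3f}s for 100 forward passes")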
That wraps up this summary of CV training tips; hopefully it is of some help!