[Deep Learning Paper Notes][Weight Initialization] Delving Deep into Rectifiers: Surpassing Human-Level Performance
He, Kaiming, et al. “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification.” Proceedings of the IEEE International Conference on Computer Vision. 2015. [Citations: 477].
1 PReLU
[PReLU] The Parametric ReLU is defined as f(y_i) = y_i if y_i > 0, and α_i y_i otherwise; equivalently, f(y_i) = max(0, y_i) + α_i min(0, y_i).
• α is a learnable parameter.
• If α is a fixed small number, PReLU becomes Leaky ReLU (LReLU), but LReLU has negligible impact on accuracy compared with ReLU.
• α is allowed to vary across channels (channel-wise PReLU), adding only one extra parameter per channel.
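The channel-wise activation above can be sketched in numpy as follows (shapes and names are illustrative, not from the paper's code):

```python
import numpy as np

def prelu(x, alpha):
    """Channel-wise PReLU: f(y) = y if y > 0 else alpha * y.

    x:     activations of shape (N, C, H, W)
    alpha: one learnable coefficient per channel, shape (C,)
    """
    a = alpha.reshape(1, -1, 1, 1)            # broadcast over batch and spatial dims
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)

x = np.array([[[[1.0, -2.0]]]])               # one sample, one channel, two positions
alpha = np.full(1, 0.25)                      # the paper's initialization, alpha = 0.25
y = prelu(x, alpha)                           # positive input passes through, negative is scaled
```

With α fixed at a small constant instead of learned, this reduces to Leaky ReLU; with α = 0 it is exactly ReLU.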
[Backprop] α is trained jointly with the weights by backpropagation: ∂ε/∂α_i = Σ_{y_i} (∂ε/∂f(y_i)) · (∂f(y_i)/∂α_i), where ∂f(y_i)/∂α_i = 0 if y_i > 0, and y_i otherwise.
[Optimization] Do not use weight decay (l_2 regularization) for α_d .
• Weight decay tends to push α_d to zero, thus biasing PReLU towards ReLU.
• We use α_d = 0.25 as the initialization.
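The optimization rule above can be sketched as a plain SGD step in which ℓ_2 decay is applied to the weights but not to α (a minimal sketch with hypothetical names; wd is the decay coefficient):

```python
import numpy as np

def sgd_step(w, grad_w, alpha, grad_alpha, lr=0.01, wd=1e-4):
    """One SGD step: l2 weight decay on the weights only, none on alpha."""
    w = w - lr * (grad_w + wd * w)        # decay term pulls w toward zero
    alpha = alpha - lr * grad_alpha       # no decay: alpha is free to stay away from 0
    return w, alpha

w, alpha = np.ones(3), np.full(1, 0.25)   # alpha initialized at 0.25, as in the paper
w, alpha = sgd_step(w, np.zeros(3), alpha, np.zeros(1))
```

With zero gradients, the weights still shrink slightly while α stays at 0.25, which is exactly the bias toward ReLU the authors avoid.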
[Experiment] The first conv layer (conv1) learns coefficients (0.681 and 0.596) significantly greater than 0.
• Filters of conv1 are mostly Gabor-like filters such as edge or texture detectors.
• The learned results show that both positive and negative responses of the filters are respected.
The deeper conv layers in general have smaller coefficients.
• Activations gradually become “more nonlinear” at increasing depths.
• I.e., the learned model tends to keep more information in earlier stages and becomes more discriminative in deeper stages.
2 Weight Initialization
[Forward Case] Consider the ReLU activation function. For layer l with response y_l = W_l x_l + b_l, input x_l = max(0, y_{l−1}), and n_l = k_l^2 c_l connections per response, we have Var[y_l] = n_l Var[w_l x_l].
Note if w_l has zero mean, then Var[y_l] = n_l Var[w_l] E[x_l^2]. And we assume y_{l−1} has zero mean and a symmetric distribution, so for ReLU, E[x_l^2] = (1/2) Var[y_{l−1}], giving Var[y_l] = (1/2) n_l Var[w_l] Var[y_{l−1}].
We want the variance to be preserved across all layers:
(1/2) n_l Var[w_l] = 1, ∀l,
then we initialize the weights from a zero-mean Gaussian with std sqrt(2/n_l) and set b_l = 0.
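The forward condition can be checked numerically (a sketch; the layer size and depth are arbitrary): with std = sqrt(2/n_l), the response variance stays near its initial value through many ReLU layers instead of shrinking exponentially.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 2000                          # fan-in per layer and number of samples (arbitrary)
x = rng.standard_normal((n, m))           # unit-variance input signal

for _ in range(20):                       # 20 fully connected + ReLU layers
    W = rng.standard_normal((n, n)) * np.sqrt(2.0 / n)   # He init: std = sqrt(2/n_l)
    y = W @ x                             # pre-activation response
    x = np.maximum(0.0, y)                # ReLU

# Var[y_l] stays on the order of its first-layer value; with Xavier's 1/n_l
# scaling the factor 1/2 from ReLU would shrink it by roughly 2^-20.
```

This is the practical payoff: the deeper the network, the more the extra factor of 2 (versus Xavier initialization) matters.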
[Backward Case] For the backpropagated gradient Δx_l = Ŵ_l Δy_l, with n̂_l = k_l^2 d_l connections in the backward pass, the same argument gives Var[Δx_l] = (1/2) n̂_l Var[w_l] Var[Δx_{l+1}].
We want the gradient variance preserved:
(1/2) n̂_l Var[w_l] = 1, ∀l,
then the std is sqrt(2/n̂_l). Either the forward or the backward condition alone is sufficient: the two differ per layer only by a factor d_l/c_l, so the signal does not vanish or explode exponentially in either direction.
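For a conv layer with a k×k kernel, c input channels, and d output channels, the two conditions give two candidate stds (a sketch; the layer sizes below are illustrative):

```python
import numpy as np

def he_std(k, c, d, mode="fan_in"):
    """Std of the zero-mean Gaussian initialization.

    fan_in  uses n_l     = k*k*c  (forward condition)
    fan_out uses n_hat_l = k*k*d  (backward condition)
    Either choice alone keeps signals from exploding or vanishing.
    """
    n = k * k * c if mode == "fan_in" else k * k * d
    return np.sqrt(2.0 / n)

# e.g. a 3x3 conv going from 64 to 128 channels (illustrative sizes)
std_fwd = he_std(3, 64, 128, "fan_in")    # sqrt(2 / 576)
std_bwd = he_std(3, 64, 128, "fan_out")   # sqrt(2 / 1152)
```

When c ≠ d the two stds differ, but only by the bounded per-layer ratio d/c noted above.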
[Issue] When the input signal is not normalized (e.g., in [−128, 128]):
• The variance of the input signal is roughly preserved from the first layer to the last.
• Its magnitude can then be so large that the softmax operator overflows.
[Solution] Normalize the input signal, though this may impact other hyper-parameters. Another solution is to put a small factor on the weights of all or some layers, e.g., a std of 0.01 for the first two fc layers and 0.001 for the last.
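The hand-tuned workaround can be sketched as a per-layer std table (the layer names and shapes below are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Smaller std on the last fc layer damps the signal before the softmax
# (hypothetical layer names; the shape is illustrative)
init_std = {"fc6": 0.01, "fc7": 0.01, "fc8": 0.001}
weights = {name: rng.standard_normal((512, 512)) * s
           for name, s in init_std.items()}
```

This is exactly the kind of per-layer hand tuning that the sqrt(2/n_l) initialization makes unnecessary.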