Understanding learning rate and weight decay in Caffe

2024-05-04 00:32

This article explains the learning rate and weight decay parameters in Caffe, and is intended as a practical reference for developers working with these settings.


caffe.proto contains detailed explanations of the parameters that appear in a Caffe network.

1. About the learning rate

  optional float base_lr = 5; // The base learning rate
  // The learning rate decay policy. The currently implemented learning rate
  // policies are as follows:
  //    - fixed: always return base_lr.
  //    - step: return base_lr * gamma ^ (floor(iter / step))
  //    - exp: return base_lr * gamma ^ iter
  //    - inv: return base_lr * (1 + gamma * iter) ^ (- power)
  //    - multistep: similar to step but it allows non uniform steps defined by
  //      stepvalue
  //    - poly: the effective learning rate follows a polynomial decay, to be
  //      zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)
  //    - sigmoid: the effective learning rate follows a sigmod decay
  //      return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))
  //
  // where base_lr, max_iter, gamma, step, stepvalue and power are defined
  // in the solver parameter protocol buffer, and iter is the current iteration.
  optional string lr_policy = 8;
  optional float gamma = 9; // The parameter to compute the learning rate.
  optional float power = 10; // The parameter to compute the learning rate.
  optional float momentum = 11; // The momentum value.
  optional float weight_decay = 12; // The weight decay.
  // regularization types supported: L1 and L2
  // controlled by weight_decay
  optional string regularization_type = 29 [default = "L2"];
  // the stepsize for learning rate policy "step"
  optional int32 stepsize = 13;
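These decay policies are easy to sanity-check numerically. Below is a minimal Python sketch (not Caffe code; the helper name effective_lr and its default arguments are my own) that reproduces the documented formulas for a given iteration:

    import math

    def effective_lr(policy, iteration, base_lr, gamma=0.1, power=1.0,
                     stepsize=1000, max_iter=10000):
        """Reproduce Caffe's documented learning-rate policies for one iteration."""
        if policy == "fixed":
            return base_lr
        if policy == "step":
            return base_lr * gamma ** (iteration // stepsize)
        if policy == "exp":
            return base_lr * gamma ** iteration
        if policy == "inv":
            return base_lr * (1 + gamma * iteration) ** (-power)
        if policy == "poly":
            return base_lr * (1 - iteration / max_iter) ** power
        if policy == "sigmoid":
            return base_lr / (1 + math.exp(-gamma * (iteration - stepsize)))
        raise ValueError("unknown lr_policy: " + policy)

    # With lr_policy: "step", base_lr: 0.01, gamma: 0.1, stepsize: 1000,
    # the rate is 0.01 for iterations 0-999, 0.001 for 1000-1999, and so on.
    print(effective_lr("step", 1500, base_lr=0.01))  # prints 0.001 (up to rounding)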

2. About weight decay

    In machine learning and pattern recognition, overfitting can occur, and as a network overfits its weights tend to grow. To avoid overfitting, a penalty term is therefore added to the error function; a common choice is the sum of all squared weights multiplied by a decay constant, which penalizes large weights.

Regularization is controlled by weight_decay.

    The weight-decay penalty pushes the weights toward smaller absolute values and penalizes large ones, because large weights tend to make the model overfit and degrade its generalization.
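As a concrete sketch (the standard formulation, not quoted from caffe.proto; here λ plays the role of weight_decay), the training objective with L2 regularization becomes the data loss plus a quadratic penalty on the weights:

$$E_{\text{total}}(W) = E_{\text{data}}(W) + \frac{\lambda}{2}\sum_i w_i^2$$

The factor 1/2 is a common convention so that the penalty's gradient with respect to each weight is simply $\lambda w_i$.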

The weight_decay parameter governs the regularization term of the neural net.

During training, a regularization term is added to the network's loss to compute the backprop gradient. The weight_decay value determines how dominant this regularization term will be in the gradient computation.

As a rule of thumb, the more training examples you have, the weaker this term should be. The more parameters you have (i.e., deeper net, larger filters, large InnerProduct layers etc.) the higher this term should be.

Caffe also allows you to choose between L2 regularization (the default) and L1 regularization, by setting

regularization_type: "L1"

While the learning rate may (and usually does) change during training, the regularization weight is fixed throughout.
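As a rough sketch of what this means for the gradient (illustrative NumPy code, not Caffe's actual C++ solver; the names are mine), L2 regularization adds weight_decay * w to each weight's gradient, whereas L1 adds weight_decay * sign(w):

    import numpy as np

    def regularization_gradient(weights, weight_decay, regularization_type="L2"):
        """Gradient contribution of the weight-decay (regularization) term."""
        if regularization_type == "L2":
            # derivative of (weight_decay / 2) * w^2 is weight_decay * w
            return weight_decay * weights
        if regularization_type == "L1":
            # derivative of weight_decay * |w| is weight_decay * sign(w)
            return weight_decay * np.sign(weights)
        raise ValueError("unsupported regularization_type: " + regularization_type)

    w = np.array([0.5, -2.0, 0.0])
    data_grad = np.array([0.1, 0.2, -0.3])              # gradient of the data loss
    total_grad = data_grad + regularization_gradient(w, weight_decay=0.0005)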


4.1.1 SGD
Stochastic gradient descent (solver_type: SGD) updates the weights $W$ with a linear combination of the negative gradient $\nabla L(W)$ and the previous weight update $V_t$. The learning rate $\alpha$ is the weight of the negative gradient, and the momentum $\mu$ is the weight of the previous update.

Formally, we have the following formulas to compute the update value $V_{t+1}$ and the updated weights $W_{t+1}$ at iteration $t+1$, given the previous weight update $V_t$ and current weights $W_t$:

$$V_{t+1} = \mu V_t - \alpha \nabla L(W_t)$$

$$W_{t+1} = W_t + V_{t+1}$$
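A minimal NumPy sketch of this update rule on a toy quadratic loss (illustrative only; in Caffe the weight-decay contribution discussed above is folded into $\nabla L(W_t)$ before this step):

    import numpy as np

    def sgd_momentum_step(W, V, grad, lr=0.01, momentum=0.9):
        """One SGD-with-momentum step: V <- mu*V - alpha*grad, then W <- W + V."""
        V = momentum * V - lr * grad
        return W + V, V

    # Toy example: minimize ||W - target||^2 with SGD plus momentum.
    target = np.array([1.0, -2.0, 0.5])
    W, V = np.zeros(3), np.zeros(3)
    for _ in range(200):
        grad = 2.0 * (W - target)          # gradient of the toy loss
        W, V = sgd_momentum_step(W, V, grad)
    # W is now close to target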

The learning “hyperparameters” ($\alpha$ and $\mu$) might require a bit of tuning for best results. If you’re not sure where to start, take a look at the “Rules of thumb” below, and for further information you might refer to Leon Bottou’s Stochastic Gradient Descent Tricks [1].

[1] L. Bottou. Stochastic Gradient Descent Tricks. Neural Networks: Tricks of the Trade. Springer, 2012.

4.1.1.1 Rules of thumb for setting the learning rate α and momentum μ

A good strategy for deep learning with SGD is to initialize the learning rate $\alpha$ to a value around $\alpha \approx 0.01 = 10^{-2}$, and dropping it by a constant factor (e.g., 10) throughout training when the loss begins to reach an apparent “plateau”, repeating this several times. Generally, you probably want to use a momentum $\mu = 0.9$ or a similar value. By smoothing the weight updates across iterations, momentum tends to make deep learning with SGD both stabler and faster.

Here $\mu$ corresponds to the solver's momentum parameter and $\alpha$ to base_lr.

Difference between neural net weight decay and learning rate

The learning rate is a parameter that determines how much an update step influences the current value of the weights, while weight decay is an additional term in the weight update rule that causes the weights to decay exponentially toward zero if no other update is scheduled.

So let's say that we have a cost or error function $E(\mathbf{w})$ that we want to minimize.
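A sketch of the standard update rules this comparison leads to (standard notation rather than a quote from the original answer; $\eta$ is the learning rate and $\lambda$ the weight-decay coefficient). Plain gradient descent on $E(\mathbf{w})$ updates each weight as

$$w_i \leftarrow w_i - \eta\,\frac{\partial E}{\partial w_i}$$

whereas adding an L2 weight-decay penalty contributes an extra term that pulls each weight toward zero:

$$w_i \leftarrow w_i - \eta\,\frac{\partial E}{\partial w_i} - \eta\,\lambda\,w_i$$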

That concludes this overview of learning rate and weight decay in Caffe; hopefully it serves as a useful reference.


