Machine Learning ---- Gradient Descent

2024-03-16 08:04


Contents

1. The concept of gradient:

       ① In a univariate function:

       ② In multivariate functions:

2. Introduction of a gradient descent case:

3. The gradient descent formula and a simple understanding of it:

4. Precautions when applying the formula:


1. The concept of gradient:

       ① In a univariate function

        The gradient is simply the derivative of the function: it represents the slope of the tangent line to the function at a given point.

       ② In multivariate functions

        The gradient is a vector with a direction: the direction of the gradient indicates the direction in which the function increases fastest at a given point.
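
        To make this concrete, here is a minimal sketch (illustrative only, not from the original article) that checks the gradient of a made-up two-variable function f(w, b) = w^2 + 3b^2 using finite differences; the function and names are chosen purely for demonstration:

def f(w, b):
    return w ** 2 + 3 * b ** 2

def numerical_gradient(func, w, b, eps=1e-6):
    # central finite differences for each partial derivative
    dw = (func(w + eps, b) - func(w - eps, b)) / (2 * eps)
    db = (func(w, b + eps) - func(w, b - eps)) / (2 * eps)
    return dw, db

w, b = 1.5, -2.0
print(numerical_gradient(f, w, b))   # approximately (3.0, -12.0)
print((2 * w, 6 * b))                # analytical gradient (2w, 6b) = (3.0, -12.0)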

2. Introduction of a gradient descent case:

       Do you remember the golf course in Tom and Jerry? In the animation it looks like this:

        Looking at these two pictures, you can easily spot the distant hill. We can take it as the most typical example, and the golf course can also be abstracted into a coordinate map:

        In this coordinate system we map (x, y) to (w, b). Then, starting where J(w, b) is at its maximum, the peak in the red area of the graph, we begin the gradient descent process.

        First, standing at the highest point, we look around a full circle and pick the direction with the steepest slope, then take a small step down in that direction. We choose this direction precisely because it is the steepest: for the same step length it gives the largest drop in height, so we reach the lowest point (a local minimum) fastest. After every step we look around again and choose the next direction. In the end we can trace out a path that reaches the local minimum point A. Is this the only minimum point? Of course not:

        It is also possible to reach point B, which is another local minimum. With that, we have introduced the process of gradient descent; next we will make its meaning precise through the mathematical formula.
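
        This dependence on the starting point can be reproduced in a few lines. Below is a minimal sketch (illustrative only, not from the article) that runs gradient descent on a made-up double-well function with two local minima, playing the roles of points A and B:

def grad(w):
    # derivative of f(w) = (w**2 - 1)**2 + 0.3*w, a function with two local minima
    return 4 * w * (w ** 2 - 1) + 0.3

def descend(w, alpha=0.01, steps=1000):
    for _ in range(steps):
        w = w - alpha * grad(w)   # one gradient descent step
    return w

print(descend(1.5))    # settles near w ≈ +1, one local minimum ("point A")
print(descend(-1.5))   # settles near w ≈ -1, a different local minimum ("point B")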

3. The gradient descent formula and a simple understanding of it:

        We first give the formula for gradient descent:

w = w - \alpha \frac{ \partial J(w,b) }{ \partial w }

b = b - \alpha \frac{ \partial J(w,b) }{ \partial b }

        In the formula, \alpha is what we call the learning rate, and the equal sign here works like the assignment operator in program code. J(w, b) is the cost function from the previous post on the regression equation. How to choose the learning rate will be shared next time; here we first focus on the meaning of the formula:

        First, let's simplify the formula by taking b = 0, so that we can understand its meaning in a two-dimensional Cartesian coordinate system:

        In this graph of J(w, b), which is now a quadratic function of w alone, taking b = 0 means \frac{ \partial J(w,b) }{ \partial w } = \frac{ \partial J(w) }{ \partial w }, so the partial derivative can be treated as an ordinary one-variable derivative. With \alpha > 0, when w lies in the right half of the curve the derivative, i.e. the slope, is positive; subtracting a positive quantity from w moves the point to the left, toward the minimum, which is the optimal solution. Similarly, when w is in the left half, the derivative is negative and w moves to the right, again toward the minimum. The size of each step is \alpha times the magnitude of the derivative.
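
        A minimal sketch of this one-dimensional case (illustrative only; it assumes a made-up cost J(w) = (w - 3)^2 whose minimum is at w = 3):

def dJ_dw(w):
    return 2 * (w - 3)            # derivative of J(w) = (w - 3)**2

def run(w, alpha=0.1, steps=50):
    for _ in range(steps):
        w = w - alpha * dJ_dw(w)  # w = w - alpha * dJ/dw
    return w

print(run(10.0))   # starts right of the minimum: derivative > 0, w moves left toward 3
print(run(-4.0))   # starts left of the minimum: derivative < 0, w moves right toward 3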

        This is a simple understanding of the gradient descent formula.


4. Precautions when applying the formula:

        When implementing the update, w and b must be updated simultaneously: both partial derivatives are evaluated with the current (old) values of w and b, and only then are the two parameters overwritten, just like this:

temp_w = w - \alpha \frac{ \partial J(w,b) }{ \partial w }

temp_b = b - \alpha \frac{ \partial J(w,b) }{ \partial b }

w = temp_w

b = temp_b
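
        As a concrete illustration, here is a minimal sketch of the simultaneous update (it assumes the squared-error cost from the previous post and a tiny made-up dataset; the names and data are illustrative only):

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]              # made-up data following y = 2x, so w -> 2, b -> 0
m = len(x)

def gradients(w, b):
    # partial derivatives of the squared-error cost with respect to w and b
    dJ_dw = sum((w * x[i] + b - y[i]) * x[i] for i in range(m)) / m
    dJ_db = sum((w * x[i] + b - y[i]) for i in range(m)) / m
    return dJ_dw, dJ_db

w, b, alpha = 0.0, 0.0, 0.05
for _ in range(2000):
    dJ_dw, dJ_db = gradients(w, b)    # both gradients use the old w and b
    temp_w = w - alpha * dJ_dw
    temp_b = b - alpha * dJ_db
    w, b = temp_w, temp_b             # overwrite only after both are computed

print(w, b)                            # approaches w ≈ 2.0, b ≈ 0.0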

        The following is an incorrect order of operations that should be avoided:

temp_w = w - \alpha \frac{ \partial J(w,b) }{ \partial w }

w = temp_w

temp_b = b - \alpha \frac{ \partial J(w,b) }{ \partial b }

b = temp_b

        In this order, the update of b uses the already updated w rather than the old value, so the two parameters are no longer updated from the same point. This is the understanding of the formula and of the implementation details of gradient descent; the code implementation will be explained in future articles.

        Previous post: Machine Learning ---- Cost function (CSDN blog)
