Lecture 2: Learning to Answer Yes/No
Perceptron
A Simple Hypothesis Set: the ‘Perceptron’
The perceptron is analogous to a neuron in a neural network; the threshold is like the 60-point passing score on an exam.
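Concretely, the lecture's hypothesis computes a weighted score over the features $x = (x_1, \ldots, x_d)$ and compares it with the threshold:

$$h(x) = \operatorname{sign}\left(\left(\sum_{i=1}^{d} w_i x_i\right) - \mathrm{threshold}\right)$$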
Vector Form of Perceptron Hypothesis
each ‘tall’ w represents a hypothesis h & is multiplied with ‘tall’ x
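Absorbing the threshold as an extra weight $w_0 = -\mathrm{threshold}$ paired with a constant feature $x_0 = 1$ turns the hypothesis into a single inner product:

$$h(x) = \operatorname{sign}\left(\sum_{i=0}^{d} w_i x_i\right) = \operatorname{sign}(w^T x)$$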
Perceptrons in R²
In R², each perceptron corresponds to a line: points on one side are classified as +1 and points on the other side as −1, so perceptrons are linear (binary) classifiers.
Fun Time
Select g from H
Enumerating all hypotheses in H is infeasible (H is infinite), so we take an iterative approach instead.
Perceptron Learning Algorithm
A fault confessed is half redressed.
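The ‘fault’ is a misclassified example and the ‘redress’ is the PLA update: for $t = 0, 1, \ldots$, find a mistake of $w_t$, i.e. some $(x_{n(t)}, y_{n(t)})$ with $\operatorname{sign}(w_t^T x_{n(t)}) \ne y_{n(t)}$, and correct it by

$$w_{t+1} \leftarrow w_t + y_{n(t)}\, x_{n(t)}$$

until a full pass finds no more mistakes; the final $w$ (call it $w_{\mathrm{PLA}}$) is returned as $g$.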
Since $w_t^T x_{n(t)} = \|w_t\|\,\|x_{n(t)}\|\cos\angle(w_t, x_{n(t)})$, the inner product is negative when the angle between the two vectors exceeds 90° and positive otherwise; the update turns $w_t$ toward $x_{n(t)}$ when $y_{n(t)} = +1$ and away from it when $y_{n(t)} = -1$.
Fun Time
What does ① mean? Why is ② incorrect?
Implementation
Start from some w0 (say, w0 = 0; the initialization is not random), and ‘correct’ its mistakes on D. The next mistake can be found by following a naïve cycle (1, …, N) or a precomputed random cycle.
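A minimal NumPy sketch of this naïve-cycle PLA (the function and variable names are my own, not from the course):

```python
import numpy as np

def pla(X, y, w0=None):
    """Naive-cycle PLA.

    X: N x (d+1) array whose first column is the constant feature x0 = 1.
    y: labels in {-1, +1}. Assumes D is linearly separable; otherwise
    the loop never halts.
    """
    N, dim = X.shape
    w = np.zeros(dim) if w0 is None else w0.copy()  # start from w0 = 0 (not random)
    while True:
        mistaken = False
        for n in range(N):                  # naive cycle over 1, ..., N
            if np.sign(w @ X[n]) != y[n]:   # a mistake: sign(w^T x_n) != y_n
                w = w + y[n] * X[n]         # correct it: w <- w + y_n x_n
                mistaken = True
        if not mistaken:                    # full pass with no mistakes: halt
            return w                        # w_PLA, used as g
```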
(note: made $x_i \gg x_0 = 1$ for visual purposes). Why?
Issues of PLA
Linear Separability
Assuming D is linearly separable, does PLA always halt? It halts!
Since $\frac{w_f^T w_T}{\|w_f\|\,\|w_T\|} \le 1$ (it is the cosine of the angle between $w_f$ and $w_T$), $T$ must have an upper bound.
PLA Fact: wt Gets More Aligned with wf
wt appears more aligned with wf after the update. Really?
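One update does push the inner product up: since $w_f$ classifies every example in $D$ correctly, $\min_n y_n w_f^T x_n > 0$, and therefore

$$w_f^T w_{t+1} = w_f^T\left(w_t + y_{n(t)}\, x_{n(t)}\right) \ge w_f^T w_t + \min_n y_n w_f^T x_n > w_f^T w_t$$

A growing inner product alone does not prove alignment, however: it could come from a growing $\|w_t\|$ rather than a shrinking angle, which is exactly why the next fact is needed.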
PLA Fact: wt Does Not Grow Too Fast
$$w_f^T w_T \ge w_f^T w_{T-1} + \min_n y_n w_f^T x_n \ge \cdots \ge w_f^T w_0 + T \min_n y_n w_f^T x_n \ge T \min_n y_n w_f^T x_n = \rho T \|w_f\| \quad (A)$$
where $\rho = \min_n y_n \frac{w_f^T x_n}{\|w_f\|} > 0$.
$$\|w_T\|^2 \le \|w_{T-1}\|^2 + \max_n \|y_n x_n\|^2 \le \cdots \le \|w_0\|^2 + T \max_n \|y_n x_n\|^2 = T \max_n \|x_n\|^2 = T R^2 \quad (B)$$
where $R^2 = \max_n \|x_n\|^2$ (note $\|y_n x_n\| = \|x_n\|$ since $y_n = \pm 1$).
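The step behind (B): $w_t$ changes only on a mistake, and a mistake means $y_{n(t)} w_t^T x_{n(t)} \le 0$, so the cross term can only hold the norm back:

$$\|w_{t+1}\|^2 = \|w_t + y_{n(t)}\, x_{n(t)}\|^2 = \|w_t\|^2 + 2\, y_{n(t)} w_t^T x_{n(t)} + \|y_{n(t)}\, x_{n(t)}\|^2 \le \|w_t\|^2 + \max_n \|x_n\|^2$$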
Note that the derivation uses $w_0 = 0$. Substituting (A) and (B) gives answer ②. This is only an upper bound, and it cannot be computed exactly in practice because $w_f$ is unknown.
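Spelling out that substitution (with $\rho$ and $R$ as defined above, and $w_0 = 0$): the left-hand side below is the cosine of the angle between $w_f$ and $w_T$, so it is at most 1, which gives the upper bound referred to as answer ②:

$$\frac{w_f^T w_T}{\|w_f\|\,\|w_T\|} \ge \frac{\rho T \|w_f\|}{\|w_f\| \sqrt{T R^2}} = \sqrt{T} \cdot \frac{\rho}{R} \le 1 \quad\Longrightarrow\quad T \le \frac{R^2}{\rho^2}$$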
Even if $w_0 \ne 0$, the same upper-bound property can still be proved.
Learning with Noisy Data
Finding the weights that make the fewest mistakes on noisy data is an NP-hard problem.
Pocket Algorithm
Modify the PLA algorithm (the black lines on the slide) by keeping the best weights seen so far in the pocket.
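A sketch of that modification in the same NumPy style as the PLA code above (again my own naming; the lecture specifies only the idea of keeping the best weights in the pocket):

```python
def pocket(X, y, max_iter=1000, rng=None):
    """Pocket algorithm: PLA-style updates, but remember and finally
    return the best weights seen so far rather than the latest ones."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros(X.shape[1])
    w_pocket = w.copy()
    best = np.sum(np.sign(X @ w) != y)   # mistake count of the pocket weights
    for _ in range(max_iter):
        wrong = np.flatnonzero(np.sign(X @ w) != y)
        if wrong.size == 0:              # no mistakes left (separable case)
            return w
        n = rng.choice(wrong)            # pick a random mistake
        w = w + y[n] * X[n]              # PLA correction step
        errors = np.sum(np.sign(X @ w) != y)
        if errors < best:                # better than the pocket? swap it in
            best, w_pocket = errors, w.copy()
    return w_pocket
```

Unlike PLA, pocket has no natural stopping condition on noisy data, so it runs for a fixed number of iterations; the extra error-counting pass after every update also makes it slower than plain PLA on the same data.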