PyTorch Deep Learning Practice Notes 5 (b站刘二大人)

2024-05-27 06:04


🎬 About me: a full-stack engineer's road to leveling up!
📋 My column: PyTorch deep learning
🎀 CSDN homepage: 发狂的小花
🌄 Life motto: the essence of learning is relentless repetition!

The video is from [b站刘二大人] (Liu Er on Bilibili).

Contents

1 Linear Regression

2 DataLoader: the data loading mechanism

3 Code


1 Linear Regression


The implementation in PyTorch follows a standard four-step recipe ("PyTorch fashion"); a minimal skeleton is sketched after this list:

  1. Prepare the dataset.
  2. Design the model using a class; the forward pass computes y_pred.
  3. Construct the loss and optimizer; the loss is computed, and the optimizer updates w.
  4. Run the training cycle (forward, backward, update).
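
A minimal sketch of the four steps (the class name MyModel and the toy tensors here are placeholders for illustration; the full runnable version is in section 3):

import torch

# 1. prepare dataset (toy placeholder tensors)
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

# 2. design model using a class
class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = MyModel()

# 3. construct loss and optimizer
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 4. training cycle: forward, backward, update
for epoch in range(100):
    y_pred = model(x)                  # forward
    loss = criterion(y_pred, y)
    optimizer.zero_grad()              # clear accumulated gradients
    loss.backward()                    # backward
    optimizer.step()                   # update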




2 DataLoader: the data loading mechanism

  • PyTorch's data loading mechanism:

一文搞懂Pytorch数据读取机制!_pytorch的batch读取数据-CSDN博客
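
In short, a Dataset defines how a single sample is fetched (__getitem__) and how many samples exist (__len__), while a DataLoader wraps it to handle batching, shuffling, and optional multi-process loading. A minimal sketch with a custom Dataset (the class name PairDataset is a made-up example, not from the referenced post):

import torch
from torch.utils.data import Dataset, DataLoader

class PairDataset(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __getitem__(self, index):
        # fetch one (sample, label) pair by index
        return self.x[index], self.y[index]

    def __len__(self):
        # total number of samples
        return self.x.shape[0]

ds = PairDataset(torch.randn(9, 1), torch.randn(9, 1))
dl = DataLoader(ds, batch_size=3, shuffle=True)
for batch_x, batch_y in dl:
    print(batch_x.shape, batch_y.shape)   # torch.Size([3, 1]) torch.Size([3, 1])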

  • Mini-batch loading:

import torch
import torch.utils.data as Data

BATCH_SIZE = 3

x_data = torch.tensor([[1.0],[2.0],[3.0],[4.0],[5.0],[6.0],[7.0],[8.0],[9.0]])
y_data = torch.tensor([[2.0],[4.0],[6.0],[8.0],[10.0],[12.0],[14.0],[16.0],[18.0]])

dataset = Data.TensorDataset(x_data, y_data)
loader = Data.DataLoader(
    dataset=dataset,
    batch_size=BATCH_SIZE,
    shuffle=True,
    num_workers=0
)

for epoch in range(3):
    for step, (batch_x, batch_y) in enumerate(loader):
        print('epoch', epoch,
              '| step:', step,
              '| batch_x:', batch_x,
              '| batch_y:', batch_y)
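
With 9 samples and BATCH_SIZE = 3, each epoch yields exactly 3 steps; because shuffle=True, the composition of each batch changes from epoch to epoch.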




3 Code

import torch
import torch.utils.data as Data
import matplotlib.pyplot as plt

# prepare dataset
BATCH_SIZE = 3
epoch_list = []
loss_list = []

x_data = torch.tensor([[1.0],[2.0],[3.0],[4.0],[5.0],[6.0],[7.0],[8.0],[9.0]])
y_data = torch.tensor([[2.0],[4.0],[6.0],[8.0],[10.0],[12.0],[14.0],[16.0],[18.0]])

dataset = Data.TensorDataset(x_data, y_data)
loader = Data.DataLoader(
    dataset=dataset,
    batch_size=BATCH_SIZE,
    shuffle=True,
    num_workers=0
)

# design model using class
"""
Our model class should inherit from nn.Module, the base class for all
neural network modules. The member methods __init__() and forward() must
be implemented. nn.Linear contains two member tensors: weight and bias.
nn.Linear also implements the magic method __call__(), so an instance of
the class can be called like a function; forward() is then invoked.
"""
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # (1, 1) are the feature dimensions of the input x and the output y;
        # both are 1-dimensional in this dataset.
        # The layer's learnable parameters are w and b, available as
        # linear.weight and linear.bias.
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

# construct loss and optimizer
# criterion = torch.nn.MSELoss(size_average=False)  # deprecated spelling
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# training cycle: forward, backward, update
for epoch in range(1000):
    for iteration, (batch_x, batch_y) in enumerate(loader):
        y_pred = model(batch_x)                # forward
        loss = criterion(y_pred, batch_y)
        # print("epoch: ", epoch, " iteration: ", iteration, " loss: ", loss.item())
        optimizer.zero_grad()  # gradients computed by .backward() accumulate, so zero them first
        loss.backward()        # backward: autograd computes the gradients
        optimizer.step()       # update the parameters w and b
    print("epoch: ", epoch, " loss: ", loss.item())
    epoch_list.append(epoch)
    loss_list.append(loss.data.item())
    if loss.data.item() < 1e-7:
        print("Epoch: ", epoch + 1, "loss is: ", loss.data.item(),
              "(w,b): ", "(", model.linear.weight.item(), ",", model.linear.bias.item(), ")")
        break

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

x_test = torch.tensor([[10.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

plt.plot(epoch_list, loss_list)
plt.title("SGD")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.savefig("./data/pytorch4.png")
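
Since the data satisfy y = 2x exactly, training should drive w toward 2 and b toward 0, so the prediction for x_test = 10 approaches 20; the loss-versus-epoch curve is saved to ./data/pytorch4.png.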

  • Results with several different optimizers:

Pytorch优化器全总结(三)牛顿法、BFGS、L-BFGS 含代码

pytorch LBFGS_lbfgs优化器-CSDN博客

scg.step() missing 1 required positiona-CSDN博客
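
The linked posts cover Newton's method, BFGS, and L-BFGS in depth. For the first-order optimizers, comparing them on the code above only requires swapping the optimizer line; a sketch of the standard torch.optim alternatives (any one of these lines can replace the SGD line):

# same model and training loop as above; only this line changes
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)       # plain stochastic gradient descent
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)      # adaptive moment estimation
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)   # per-parameter adaptive learning rate
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01)   # moving average of squared gradients
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)     # sign-based updates, full-batch only
# LBFGS is the exception: its step() requires a closure (see the LBFGS code below)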


  • LBFGS code:

import torch
import torch.utils.data as Data
import matplotlib.pyplot as plt

# prepare dataset
BATCH_SIZE = 3
epoch_list = []
loss_list = []

x_data = torch.tensor([[1.0],[2.0],[3.0],[4.0],[5.0],[6.0],[7.0],[8.0],[9.0]])
y_data = torch.tensor([[2.0],[4.0],[6.0],[8.0],[10.0],[12.0],[14.0],[16.0],[18.0]])

dataset = Data.TensorDataset(x_data, y_data)
loader = Data.DataLoader(
    dataset=dataset,
    batch_size=BATCH_SIZE,
    shuffle=True,
    num_workers=0
)

# design model using class (same LinearModel as in the SGD version)
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearModel()

# construct loss and optimizer
criterion = torch.nn.MSELoss(reduction='sum')
# model.parameters() automatically supplies the parameters to the optimizer
# (I may have misunderstood this point)
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)
loss = torch.Tensor([1000.])  # placeholder so loss exists before the first step

# training cycle: forward, backward, update
for epoch in range(1000):
    for iteration, (batch_x, batch_y) in enumerate(loader):
        # LBFGS may re-evaluate the function several times per step,
        # so forward + backward must be wrapped in a closure.
        def closure():
            y_pred = model(batch_x)               # forward
            loss = criterion(y_pred, batch_y)
            optimizer.zero_grad()  # gradients accumulate; zero them before backward
            loss.backward()        # backward: autograd computes the gradients
            return loss
        loss = optimizer.step(closure)  # step() calls closure itself and returns the loss
    print("epoch: ", epoch, " loss: ", loss.item())
    epoch_list.append(epoch)
    loss_list.append(loss.data.item())
    if loss.data.item() < 1e-7:
        print("Epoch: ", epoch + 1, "loss is: ", loss.data.item(),
              "(w,b): ", "(", model.linear.weight.item(), ",", model.linear.bias.item(), ")")
        break

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

x_test = torch.tensor([[10.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

plt.plot(epoch_list, loss_list)
plt.title("LBFGS(lr = 0.1)")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.savefig("./data/pytorch4.png")
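
Because sum-of-squares linear regression is a quadratic problem, LBFGS typically reaches the 1e-7 threshold in far fewer epochs than SGD here, at the cost of several closure evaluations per step.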

  • Rprop:

Rprop (resilient backpropagation) is designed for full-batch training and is not suited to mini-batches, so it is rarely seen now that mini-batch training dominates.
Pros: it adjusts the learning rate automatically, with no manual tuning required.
Cons: it still depends on a manually set global learning rate, and as the iterations accumulate the learning rate shrinks further and further, eventually approaching 0.
Result: no choice of learning rate or epoch count made it perform well here; it could not converge under the 1e-7 precision criterion. A full-batch sketch follows below.
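
A minimal full-batch sketch with torch.optim.Rprop (same toy y = 2x setup; this is an illustration under those assumptions, not the exact script behind the result above):

import torch

# full-batch data: y = 2x
x_data = torch.tensor([[1.0], [2.0], [3.0], [4.0], [5.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0], [8.0], [10.0]])

model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)  # defaults: etas=(0.5, 1.2)

for epoch in range(1000):
    y_pred = model(x_data)             # forward on the FULL batch (Rprop's intended setting)
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                   # Rprop adapts each weight's step size from gradient signs

print('w =', model.weight.item(), 'b =', model.bias.item(), 'loss =', loss.item())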



 

🌈 That's all for this share! 🌈
If it helped you, that would be wonderful!
If anything is lacking, please point it out so we can learn and improve together!
📢 Future tycoons: a like 👍, a star ⭐, and a follow 🔍 would be great, and a comment would be a real surprise!
Thank you all for reading and for your support! Finally, ☺ may you all make money every day!!! Welcome to follow!
