[python pytorch] Implementing Logistic Regression with PyTorch

2024-09-07 06:18

This post walks through implementing logistic regression with PyTorch. Hopefully it offers a useful reference for developers working on the same problem; follow along to learn the details.

A PyTorch logistic regression learning demo:

import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms

# Hyper-parameters
input_size = 784        # 28x28 MNIST images flattened into a vector
num_classes = 10
num_epochs = 10
batch_size = 50
learning_rate = 0.001

# MNIST dataset (images and labels)
train_dataset = dsets.MNIST(root='./data', train=True,
                            transform=transforms.ToTensor(), download=True)
print(train_dataset)
test_dataset = dsets.MNIST(root='./data', train=False,
                           transform=transforms.ToTensor())

# Dataset loaders (input pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size, shuffle=False)

# Model: a single linear layer. Softmax is computed internally by
# CrossEntropyLoss, so the model returns raw logits.
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        return self.linear(x)

model = LogisticRegression(input_size, num_classes)

# Loss and optimizer
# Softmax is internally computed by CrossEntropyLoss.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.view(-1, 28 * 28)  # flatten each image to a 784-vector

        # Forward + backward + optimize
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch: [%d/%d], Step: [%d/%d], Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1,
                     len(train_dataset) // batch_size, loss.item()))

# Test the model
correct = 0
total = 0
with torch.no_grad():
    for images, labels in test_loader:
        images = images.view(-1, 28 * 28)
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the model on the 10000 test images: %d %%' % (100 * correct / total))

# Save the model
torch.save(model.state_dict(), 'model.pkl')
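The saved weights can later be loaded back for inference. The following is a minimal self-contained sketch: it writes a fresh (untrained) state dict to 'model.pkl' purely so the load step can run on its own; in practice that file would be the one produced by the training script above, and the random input tensor stands in for a real flattened MNIST image.

```python
import torch
import torch.nn as nn

# Same architecture as in the training script.
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        return self.linear(x)

# Stand-in save so this snippet runs by itself; normally 'model.pkl'
# is the file written at the end of training.
torch.save(LogisticRegression(784, 10).state_dict(), 'model.pkl')

model = LogisticRegression(784, 10)
model.load_state_dict(torch.load('model.pkl'))
model.eval()  # inference mode

with torch.no_grad():
    image = torch.randn(1, 784)            # stand-in for one flattened 28x28 image
    logits = model(image)                  # raw scores, shape (1, 10)
    predicted = logits.argmax(dim=1).item()  # class label in 0..9
```

Saving only the `state_dict` (rather than the whole module) keeps the checkpoint portable: any code that can rebuild the same architecture can reload it.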

Output:

Epoch: [1/10], Step: [100/1200], Loss: 2.2397
Epoch: [1/10], Step: [200/1200], Loss: 2.1378
Epoch: [1/10], Step: [300/1200], Loss: 2.0500
Epoch: [1/10], Step: [400/1200], Loss: 1.9401
Epoch: [1/10], Step: [500/1200], Loss: 1.9175
Epoch: [1/10], Step: [600/1200], Loss: 1.8203
Epoch: [1/10], Step: [700/1200], Loss: 1.7322
Epoch: [1/10], Step: [800/1200], Loss: 1.6910
Epoch: [1/10], Step: [900/1200], Loss: 1.6678
Epoch: [1/10], Step: [1000/1200], Loss: 1.5577
Epoch: [1/10], Step: [1100/1200], Loss: 1.5113
Epoch: [1/10], Step: [1200/1200], Loss: 1.5671
Epoch: [2/10], Step: [100/1200], Loss: 1.4560
Epoch: [2/10], Step: [200/1200], Loss: 1.3170
Epoch: [2/10], Step: [300/1200], Loss: 1.3822
Epoch: [2/10], Step: [400/1200], Loss: 1.2793
Epoch: [2/10], Step: [500/1200], Loss: 1.4281
Epoch: [2/10], Step: [600/1200], Loss: 1.2763
Epoch: [2/10], Step: [700/1200], Loss: 1.1570
Epoch: [2/10], Step: [800/1200], Loss: 1.1050
Epoch: [2/10], Step: [900/1200], Loss: 1.1151
Epoch: [2/10], Step: [1000/1200], Loss: 1.0385
Epoch: [2/10], Step: [1100/1200], Loss: 1.0978
Epoch: [2/10], Step: [1200/1200], Loss: 1.0007
Epoch: [3/10], Step: [100/1200], Loss: 1.1849
Epoch: [3/10], Step: [200/1200], Loss: 1.0002
Epoch: [3/10], Step: [300/1200], Loss: 1.0198
Epoch: [3/10], Step: [400/1200], Loss: 0.9248
Epoch: [3/10], Step: [500/1200], Loss: 0.8974
Epoch: [3/10], Step: [600/1200], Loss: 1.1095
Epoch: [3/10], Step: [700/1200], Loss: 1.0900
Epoch: [3/10], Step: [800/1200], Loss: 1.0178
Epoch: [3/10], Step: [900/1200], Loss: 0.9809
Epoch: [3/10], Step: [1000/1200], Loss: 0.9831
Epoch: [3/10], Step: [1100/1200], Loss: 0.8701
Epoch: [3/10], Step: [1200/1200], Loss: 0.9855
Epoch: [4/10], Step: [100/1200], Loss: 0.9081
Epoch: [4/10], Step: [200/1200], Loss: 0.8791
Epoch: [4/10], Step: [300/1200], Loss: 0.7540
Epoch: [4/10], Step: [400/1200], Loss: 0.9443
Epoch: [4/10], Step: [500/1200], Loss: 0.9346
Epoch: [4/10], Step: [600/1200], Loss: 0.8974
Epoch: [4/10], Step: [700/1200], Loss: 0.8897
Epoch: [4/10], Step: [800/1200], Loss: 0.7797
Epoch: [4/10], Step: [900/1200], Loss: 0.8608
Epoch: [4/10], Step: [1000/1200], Loss: 0.9216
Epoch: [4/10], Step: [1100/1200], Loss: 0.8676
Epoch: [4/10], Step: [1200/1200], Loss: 0.9251
Epoch: [5/10], Step: [100/1200], Loss: 0.7640
Epoch: [5/10], Step: [200/1200], Loss: 0.6955
Epoch: [5/10], Step: [300/1200], Loss: 0.8431
Epoch: [5/10], Step: [400/1200], Loss: 0.8489
Epoch: [5/10], Step: [500/1200], Loss: 0.7191
Epoch: [5/10], Step: [600/1200], Loss: 0.6671
Epoch: [5/10], Step: [700/1200], Loss: 0.6980
Epoch: [5/10], Step: [800/1200], Loss: 0.6837
Epoch: [5/10], Step: [900/1200], Loss: 0.9087
Epoch: [5/10], Step: [1000/1200], Loss: 0.7784
Epoch: [5/10], Step: [1100/1200], Loss: 0.7890
Epoch: [5/10], Step: [1200/1200], Loss: 1.0480
Epoch: [6/10], Step: [100/1200], Loss: 0.5834
Epoch: [6/10], Step: [200/1200], Loss: 0.8300
Epoch: [6/10], Step: [300/1200], Loss: 0.8316
Epoch: [6/10], Step: [400/1200], Loss: 0.7249
Epoch: [6/10], Step: [500/1200], Loss: 0.6184
Epoch: [6/10], Step: [600/1200], Loss: 0.7505
Epoch: [6/10], Step: [700/1200], Loss: 0.6599
Epoch: [6/10], Step: [800/1200], Loss: 0.7170
Epoch: [6/10], Step: [900/1200], Loss: 0.6857
Epoch: [6/10], Step: [1000/1200], Loss: 0.6543
Epoch: [6/10], Step: [1100/1200], Loss: 0.5679
Epoch: [6/10], Step: [1200/1200], Loss: 0.8261
Epoch: [7/10], Step: [100/1200], Loss: 0.7144
Epoch: [7/10], Step: [200/1200], Loss: 0.7573
Epoch: [7/10], Step: [300/1200], Loss: 0.7254
Epoch: [7/10], Step: [400/1200], Loss: 0.5918
Epoch: [7/10], Step: [500/1200], Loss: 0.6959
Epoch: [7/10], Step: [600/1200], Loss: 0.7058
Epoch: [7/10], Step: [700/1200], Loss: 0.7382
Epoch: [7/10], Step: [800/1200], Loss: 0.7282
Epoch: [7/10], Step: [900/1200], Loss: 0.6750
Epoch: [7/10], Step: [1000/1200], Loss: 0.6019
Epoch: [7/10], Step: [1100/1200], Loss: 0.6615
Epoch: [7/10], Step: [1200/1200], Loss: 0.5851
Epoch: [8/10], Step: [100/1200], Loss: 0.6492
Epoch: [8/10], Step: [200/1200], Loss: 0.5439
Epoch: [8/10], Step: [300/1200], Loss: 0.6613
Epoch: [8/10], Step: [400/1200], Loss: 0.6486
Epoch: [8/10], Step: [500/1200], Loss: 0.8281
Epoch: [8/10], Step: [600/1200], Loss: 0.6263
Epoch: [8/10], Step: [700/1200], Loss: 0.6541
Epoch: [8/10], Step: [800/1200], Loss: 0.5080
Epoch: [8/10], Step: [900/1200], Loss: 0.7020
Epoch: [8/10], Step: [1000/1200], Loss: 0.6421
Epoch: [8/10], Step: [1100/1200], Loss: 0.6207
Epoch: [8/10], Step: [1200/1200], Loss: 0.9254
Epoch: [9/10], Step: [100/1200], Loss: 0.7428
Epoch: [9/10], Step: [200/1200], Loss: 0.6815
Epoch: [9/10], Step: [300/1200], Loss: 0.6418
Epoch: [9/10], Step: [400/1200], Loss: 0.7096
Epoch: [9/10], Step: [500/1200], Loss: 0.6846
Epoch: [9/10], Step: [600/1200], Loss: 0.5124
Epoch: [9/10], Step: [700/1200], Loss: 0.6300
Epoch: [9/10], Step: [800/1200], Loss: 0.6340
Epoch: [9/10], Step: [900/1200], Loss: 0.5593
Epoch: [9/10], Step: [1000/1200], Loss: 0.5706
Epoch: [9/10], Step: [1100/1200], Loss: 0.6258
Epoch: [9/10], Step: [1200/1200], Loss: 0.7627
Epoch: [10/10], Step: [100/1200], Loss: 0.5254
Epoch: [10/10], Step: [200/1200], Loss: 0.5318
Epoch: [10/10], Step: [300/1200], Loss: 0.5448
Epoch: [10/10], Step: [400/1200], Loss: 0.5634
Epoch: [10/10], Step: [500/1200], Loss: 0.6398
Epoch: [10/10], Step: [600/1200], Loss: 0.7158
Epoch: [10/10], Step: [700/1200], Loss: 0.6169
Epoch: [10/10], Step: [800/1200], Loss: 0.5641
Epoch: [10/10], Step: [900/1200], Loss: 0.5698
Epoch: [10/10], Step: [1000/1200], Loss: 0.5612
Epoch: [10/10], Step: [1100/1200], Loss: 0.5126
Epoch: [10/10], Step: [1200/1200], Loss: 0.6746
Accuracy of the model on the 10000 test images: 87 %

Process finished with exit code 0
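A note on why the model above has no softmax layer: `nn.CrossEntropyLoss` applies log-softmax to the logits itself and then takes the negative log-likelihood. A small check of that equivalence, using made-up logits for three classes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # made-up raw scores for 3 classes
target = torch.tensor([0])                 # true class index

# CrossEntropyLoss == NLLLoss applied to log-softmax of the logits
ce = nn.CrossEntropyLoss()(logits, target)
manual = F.nll_loss(F.log_softmax(logits, dim=1), target)

print(torch.allclose(ce, manual))  # True
```

This is also why the test loop can take `argmax` directly on the logits: softmax is monotonic, so it never changes which class scores highest.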

That wraps up this introduction to implementing logistic regression with PyTorch. I hope the article is helpful!



http://www.chinasem.cn/article/1144288
