[python pytorch] Implementing Logistic Regression with PyTorch

2024-09-07 06:18

This article walks through implementing logistic regression with PyTorch, and is intended as a practical reference for developers working through similar programming problems.

A PyTorch logistic regression learning demo (trained and evaluated on MNIST):

import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
from torch.autograd import Variable  # kept from the original code; a no-op wrapper on PyTorch >= 0.4

# Hyper Parameters
input_size = 784        # 28x28 MNIST images flattened to 784-dim vectors
num_classes = 10
num_epochs = 10
batch_size = 50
learning_rate = 0.001

# MNIST Dataset (Images and Labels)
train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
print(train_dataset)
test_dataset = dsets.MNIST(root='./data', train=False, transform=transforms.ToTensor())

# Dataset Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Model: a single linear layer that outputs raw logits
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        out = self.linear(x)
        return out

model = LogisticRegression(input_size, num_classes)

# Loss and Optimizer
# Softmax is internally computed.
# Set parameters to be updated.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Training the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 28*28))  # flatten each image to a 784-dim vector
        labels = Variable(labels)

        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            # loss.item() replaces the original loss.data[0], which errors on PyTorch >= 0.4
            print('Epoch: [%d/%d], Step: [%d/%d], Loss: %.4f'
                  % (epoch+1, num_epochs, i+1, len(train_dataset)//batch_size, loss.item()))

# Test the Model
correct = 0
total = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28*28))
    outputs = model(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum().item()  # .item() converts the 0-dim tensor to a Python int

print('Accuracy of the model on the 10000 test images: %d %%' % (100 * correct / total))

# Save the Model
torch.save(model.state_dict(), 'model.pkl')
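
Because nn.CrossEntropyLoss applies softmax (log-softmax plus negative log-likelihood) internally, the model's forward pass returns raw logits rather than probabilities. If class probabilities are needed at inference time, they can be recovered explicitly; a minimal sketch, assuming outputs are the logits produced by the model above:

import torch.nn.functional as F

probs = F.softmax(outputs, dim=1)         # convert logits to per-class probabilities
confidence, predicted = probs.max(dim=1)  # highest probability and its class index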

Output:

Epoch: [1/10], Step: [100/1200], Loss: 2.2397
Epoch: [1/10], Step: [200/1200], Loss: 2.1378
Epoch: [1/10], Step: [300/1200], Loss: 2.0500
Epoch: [1/10], Step: [400/1200], Loss: 1.9401
Epoch: [1/10], Step: [500/1200], Loss: 1.9175
Epoch: [1/10], Step: [600/1200], Loss: 1.8203
Epoch: [1/10], Step: [700/1200], Loss: 1.7322
Epoch: [1/10], Step: [800/1200], Loss: 1.6910
Epoch: [1/10], Step: [900/1200], Loss: 1.6678
Epoch: [1/10], Step: [1000/1200], Loss: 1.5577
Epoch: [1/10], Step: [1100/1200], Loss: 1.5113
Epoch: [1/10], Step: [1200/1200], Loss: 1.5671
Epoch: [2/10], Step: [100/1200], Loss: 1.4560
Epoch: [2/10], Step: [200/1200], Loss: 1.3170
Epoch: [2/10], Step: [300/1200], Loss: 1.3822
Epoch: [2/10], Step: [400/1200], Loss: 1.2793
Epoch: [2/10], Step: [500/1200], Loss: 1.4281
Epoch: [2/10], Step: [600/1200], Loss: 1.2763
Epoch: [2/10], Step: [700/1200], Loss: 1.1570
Epoch: [2/10], Step: [800/1200], Loss: 1.1050
Epoch: [2/10], Step: [900/1200], Loss: 1.1151
Epoch: [2/10], Step: [1000/1200], Loss: 1.0385
Epoch: [2/10], Step: [1100/1200], Loss: 1.0978
Epoch: [2/10], Step: [1200/1200], Loss: 1.0007
Epoch: [3/10], Step: [100/1200], Loss: 1.1849
Epoch: [3/10], Step: [200/1200], Loss: 1.0002
Epoch: [3/10], Step: [300/1200], Loss: 1.0198
Epoch: [3/10], Step: [400/1200], Loss: 0.9248
Epoch: [3/10], Step: [500/1200], Loss: 0.8974
Epoch: [3/10], Step: [600/1200], Loss: 1.1095
Epoch: [3/10], Step: [700/1200], Loss: 1.0900
Epoch: [3/10], Step: [800/1200], Loss: 1.0178
Epoch: [3/10], Step: [900/1200], Loss: 0.9809
Epoch: [3/10], Step: [1000/1200], Loss: 0.9831
Epoch: [3/10], Step: [1100/1200], Loss: 0.8701
Epoch: [3/10], Step: [1200/1200], Loss: 0.9855
Epoch: [4/10], Step: [100/1200], Loss: 0.9081
Epoch: [4/10], Step: [200/1200], Loss: 0.8791
Epoch: [4/10], Step: [300/1200], Loss: 0.7540
Epoch: [4/10], Step: [400/1200], Loss: 0.9443
Epoch: [4/10], Step: [500/1200], Loss: 0.9346
Epoch: [4/10], Step: [600/1200], Loss: 0.8974
Epoch: [4/10], Step: [700/1200], Loss: 0.8897
Epoch: [4/10], Step: [800/1200], Loss: 0.7797
Epoch: [4/10], Step: [900/1200], Loss: 0.8608
Epoch: [4/10], Step: [1000/1200], Loss: 0.9216
Epoch: [4/10], Step: [1100/1200], Loss: 0.8676
Epoch: [4/10], Step: [1200/1200], Loss: 0.9251
Epoch: [5/10], Step: [100/1200], Loss: 0.7640
Epoch: [5/10], Step: [200/1200], Loss: 0.6955
Epoch: [5/10], Step: [300/1200], Loss: 0.8431
Epoch: [5/10], Step: [400/1200], Loss: 0.8489
Epoch: [5/10], Step: [500/1200], Loss: 0.7191
Epoch: [5/10], Step: [600/1200], Loss: 0.6671
Epoch: [5/10], Step: [700/1200], Loss: 0.6980
Epoch: [5/10], Step: [800/1200], Loss: 0.6837
Epoch: [5/10], Step: [900/1200], Loss: 0.9087
Epoch: [5/10], Step: [1000/1200], Loss: 0.7784
Epoch: [5/10], Step: [1100/1200], Loss: 0.7890
Epoch: [5/10], Step: [1200/1200], Loss: 1.0480
Epoch: [6/10], Step: [100/1200], Loss: 0.5834
Epoch: [6/10], Step: [200/1200], Loss: 0.8300
Epoch: [6/10], Step: [300/1200], Loss: 0.8316
Epoch: [6/10], Step: [400/1200], Loss: 0.7249
Epoch: [6/10], Step: [500/1200], Loss: 0.6184
Epoch: [6/10], Step: [600/1200], Loss: 0.7505
Epoch: [6/10], Step: [700/1200], Loss: 0.6599
Epoch: [6/10], Step: [800/1200], Loss: 0.7170
Epoch: [6/10], Step: [900/1200], Loss: 0.6857
Epoch: [6/10], Step: [1000/1200], Loss: 0.6543
Epoch: [6/10], Step: [1100/1200], Loss: 0.5679
Epoch: [6/10], Step: [1200/1200], Loss: 0.8261
Epoch: [7/10], Step: [100/1200], Loss: 0.7144
Epoch: [7/10], Step: [200/1200], Loss: 0.7573
Epoch: [7/10], Step: [300/1200], Loss: 0.7254
Epoch: [7/10], Step: [400/1200], Loss: 0.5918
Epoch: [7/10], Step: [500/1200], Loss: 0.6959
Epoch: [7/10], Step: [600/1200], Loss: 0.7058
Epoch: [7/10], Step: [700/1200], Loss: 0.7382
Epoch: [7/10], Step: [800/1200], Loss: 0.7282
Epoch: [7/10], Step: [900/1200], Loss: 0.6750
Epoch: [7/10], Step: [1000/1200], Loss: 0.6019
Epoch: [7/10], Step: [1100/1200], Loss: 0.6615
Epoch: [7/10], Step: [1200/1200], Loss: 0.5851
Epoch: [8/10], Step: [100/1200], Loss: 0.6492
Epoch: [8/10], Step: [200/1200], Loss: 0.5439
Epoch: [8/10], Step: [300/1200], Loss: 0.6613
Epoch: [8/10], Step: [400/1200], Loss: 0.6486
Epoch: [8/10], Step: [500/1200], Loss: 0.8281
Epoch: [8/10], Step: [600/1200], Loss: 0.6263
Epoch: [8/10], Step: [700/1200], Loss: 0.6541
Epoch: [8/10], Step: [800/1200], Loss: 0.5080
Epoch: [8/10], Step: [900/1200], Loss: 0.7020
Epoch: [8/10], Step: [1000/1200], Loss: 0.6421
Epoch: [8/10], Step: [1100/1200], Loss: 0.6207
Epoch: [8/10], Step: [1200/1200], Loss: 0.9254
Epoch: [9/10], Step: [100/1200], Loss: 0.7428
Epoch: [9/10], Step: [200/1200], Loss: 0.6815
Epoch: [9/10], Step: [300/1200], Loss: 0.6418
Epoch: [9/10], Step: [400/1200], Loss: 0.7096
Epoch: [9/10], Step: [500/1200], Loss: 0.6846
Epoch: [9/10], Step: [600/1200], Loss: 0.5124
Epoch: [9/10], Step: [700/1200], Loss: 0.6300
Epoch: [9/10], Step: [800/1200], Loss: 0.6340
Epoch: [9/10], Step: [900/1200], Loss: 0.5593
Epoch: [9/10], Step: [1000/1200], Loss: 0.5706
Epoch: [9/10], Step: [1100/1200], Loss: 0.6258
Epoch: [9/10], Step: [1200/1200], Loss: 0.7627
Epoch: [10/10], Step: [100/1200], Loss: 0.5254
Epoch: [10/10], Step: [200/1200], Loss: 0.5318
Epoch: [10/10], Step: [300/1200], Loss: 0.5448
Epoch: [10/10], Step: [400/1200], Loss: 0.5634
Epoch: [10/10], Step: [500/1200], Loss: 0.6398
Epoch: [10/10], Step: [600/1200], Loss: 0.7158
Epoch: [10/10], Step: [700/1200], Loss: 0.6169
Epoch: [10/10], Step: [800/1200], Loss: 0.5641
Epoch: [10/10], Step: [900/1200], Loss: 0.5698
Epoch: [10/10], Step: [1000/1200], Loss: 0.5612
Epoch: [10/10], Step: [1100/1200], Loss: 0.5126
Epoch: [10/10], Step: [1200/1200], Loss: 0.6746
Accuracy of the model on the 10000 test images: 87 %

Process finished with exit code 0
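
Once training finishes, the weights saved with torch.save(model.state_dict(), 'model.pkl') can be reloaded for inference without retraining. A minimal sketch, assuming the LogisticRegression class and test_dataset from the script above are in scope:

# Rebuild the model and load the saved parameters
model = LogisticRegression(input_size=784, num_classes=10)
model.load_state_dict(torch.load('model.pkl'))
model.eval()  # evaluation mode (no effect for a pure linear model, but good practice)

# Classify a single test image
image, label = test_dataset[0]
with torch.no_grad():
    logits = model(image.view(-1, 28*28))    # flatten to a 1x784 batch
    predicted = logits.argmax(dim=1).item()
print('predicted: %d, actual: %d' % (predicted, label))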

That concludes this article on implementing logistic regression with PyTorch; hopefully it is useful to other programmers.



http://www.chinasem.cn/article/1144288
