Deep Learning Undergraduate Course, Lab 2: Feedforward Neural Networks

2024-02-05 23:20


Task 3.3 Course lab requirements
(1) Implement feedforward neural networks by hand to solve the regression, binary classification, and multi-class classification tasks described above.
  - Analyze the results in terms of training time, prediction accuracy, loss curves, etc. (preferably with charts).
(2) Implement feedforward neural networks with torch.nn to solve the same regression, binary classification, and multi-class classification tasks.
  - Analyze the results in terms of training time, prediction accuracy, loss curves, etc. (preferably with charts).
(3) Use at least three different activation functions in the multi-class task.
  - Run comparison experiments with the different activation functions and analyze the results.
(4) Evaluate how the number of hidden layers and hidden units affects the results of the multi-class task.
  - Run comparison experiments with different numbers of hidden layers and hidden units and analyze the results.

I. Implementing feedforward neural networks by hand for the regression, binary classification, and multi-class tasks

1.1 Task description

Analyze the experimental results and plot the loss curves on the training and test sets.

1.2 Task approach and code

# Import modules
import torch
import torch.nn as nn
import numpy as np
import torchvision
from torchvision import transforms
import time
# Define a plotting helper
import matplotlib.pyplot as plt
def draw_loss(train_loss, test_loss):
    x = np.linspace(0, len(train_loss), len(train_loss))
    plt.plot(x, train_loss, label="Train Loss", linewidth=1.5)
    plt.plot(x, test_loss, label="Test Loss", linewidth=1.5)
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend()
    plt.show()
# Define an evaluation helper
def evaluate_accuracy(data_iter, model, loss_func):
    acc_sum, test_l_sum, n, c = 0.0, 0.0, 0, 0
    for X, y in data_iter:
        result = model.forward(X)
        acc_sum += (result.argmax(dim=1) == y).float().sum().item()
        test_l_sum += loss_func(result, y).item()
        n += y.shape[0]
        c += 1
    return acc_sum / n, test_l_sum / c
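evaluate_accuracy is defined here but never invoked in the listings below. For reference, a sketch of how it could be applied once the multi-class model (model3) and its loaders, both defined later in this section, exist:

# Hypothetical usage after training the multi-class model defined further down
test_acc, test_avg_loss = evaluate_accuracy(testdataloader3, model3, my_cross_entropy_loss)
print('test acc: %.4f | avg test loss: %.4f' % (test_acc, test_avg_loss))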
# Regression task
n_train = 7000
n_test = 3000
num_inputs = 500
true_w, true_b = torch.ones(num_inputs, 1) * 0.0056, 0.028
# Generate the dataset
features = torch.randn((n_train + n_test, num_inputs))
labels = torch.matmul(features, true_w) + true_b
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)
# Split the data
train_features, test_features = features[:n_train, :], features[n_train:, :]
train_labels, test_labels = labels[:n_train], labels[n_train:]
batch_size1 = 128
traindataset1 = torch.utils.data.TensorDataset(train_features, train_labels)
testdataset1 = torch.utils.data.TensorDataset(test_features, test_labels)
traindataloader1 = torch.utils.data.DataLoader(dataset=traindataset1, batch_size=batch_size1, shuffle=True)
testdataloader1 = torch.utils.data.DataLoader(dataset=testdataset1, batch_size=batch_size1, shuffle=False)
# Define the loss functions
def my_cross_entropy_loss(y_hat, labels):
    def log_softmax(y_hat):
        max_v = torch.max(y_hat, dim=1).values.unsqueeze(dim=1)
        return y_hat - max_v - torch.log(torch.exp(y_hat - max_v).sum(dim=1).unsqueeze(dim=1))
    return (-log_softmax(y_hat))[range(len(y_hat)), labels].mean()

# Define the optimizer
def SGD(params, lr):
    for param in params:
        param.data -= lr * param.grad

def mse(pred, true):
    return torch.sum((true - pred) ** 2) / len(pred)

class Net1():
    def __init__(self):
        # Number of input, hidden, and output units
        num_inputs, num_hiddens, num_outputs = 500, 256, 1
        w_1 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_inputs)), dtype=torch.float32, requires_grad=True)
        b_1 = torch.zeros(num_hiddens, dtype=torch.float32, requires_grad=True)
        w_2 = torch.tensor(np.random.normal(0, 0.01, (num_outputs, num_hiddens)), dtype=torch.float32, requires_grad=True)
        b_2 = torch.zeros(num_outputs, dtype=torch.float32, requires_grad=True)
        self.params = [w_1, b_1, w_2, b_2]
        # Define the model structure
        self.input_layer = lambda x: x.view(x.shape[0], -1)
        self.hidden_layer = lambda x: self.my_relu(torch.matmul(x, w_1.t()) + b_1)
        self.output_layer = lambda x: torch.matmul(x, w_2.t()) + b_2

    def my_relu(self, x):
        return torch.max(input=x, other=torch.tensor(0.0))

    def forward(self, x):
        flatten_input = self.input_layer(x)
        hidden_output = self.hidden_layer(flatten_input)
        final_output = self.output_layer(hidden_output)
        return final_output
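The max_v subtraction inside log_softmax is the standard log-sum-exp stabilization trick. As a quick sanity check (my addition, not part of the original lab), the hand-written loss can be compared against PyTorch's built-in cross entropy on random logits:

import torch.nn.functional as F

logits = torch.randn(4, 10)            # 4 samples, 10 classes
targets = torch.randint(0, 10, (4,))   # random class indices
# The two values should agree to floating-point precision
print(my_cross_entropy_loss(logits, targets).item())
print(F.cross_entropy(logits, targets).item())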
# Training
model1 = Net1()  # manually implemented feedforward network
lr = 0.01  # learning rate
batchsize = 128
epochs = 40  # number of training epochs
train_all_loss1 = []  # record the loss on the training set
test_all_loss1 = []  # record the loss on the test set
begintime1 = time.time()
for epoch in range(epochs):
    train_l = 0
    for data, labels in traindataloader1:
        pred = model1.forward(data)
        train_each_loss = mse(pred.view(-1, 1), labels.view(-1, 1))  # loss for this batch
        train_each_loss.backward()  # backpropagation
        SGD(model1.params, lr)  # mini-batch stochastic gradient descent step
        train_l += train_each_loss.item()
        # Zero the gradients
        for param in model1.params:
            param.grad.data.zero_()
    train_all_loss1.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_loss = 0
        for data, labels in testdataloader1:  # evaluate on the held-out test set
            pred = model1.forward(data)
            test_each_loss = mse(pred, labels)
            test_loss += test_each_loss.item()
        test_all_loss1.append(test_loss)
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f' % (epoch + 1, train_all_loss1[-1], test_all_loss1[-1]))
endtime1 = time.time()
print("手动实现前馈网络-回归实验 %d轮 总用时: %.3fs"%(epochs,endtime1-begintime1))
# Binary classification task
data_num2, train_num2, test_num2 = 10000, 7000, 3000
# First cluster: mean 0.5, standard deviation 1
featuresA = torch.normal(mean=0.5, std=1, size=(data_num2, 200), dtype=torch.float32)
labelsA = torch.ones(data_num2)
# Second cluster: mean -0.5, standard deviation 1
featuresB = torch.normal(mean=-0.5, std=1, size=(data_num2, 200), dtype=torch.float32)
labelsB = torch.zeros(data_num2)
# Build the training set
train_features2 = torch.cat((featuresA[:train_num2], featuresB[:train_num2]), dim=0)
train_labels2 = torch.cat((labelsA[:train_num2], labelsB[:train_num2]), dim=-1)
# Build the test set
test_features2 = torch.cat((featuresA[train_num2:], featuresB[train_num2:]), dim=0)
test_labels2 = torch.cat((labelsA[train_num2:], labelsB[train_num2:]), dim=-1)
batch_size = 128
# Build the training and testing dataset
traindataset2 = torch.utils.data.TensorDataset(train_features2, train_labels2)
testdataset2 = torch.utils.data.TensorDataset(test_features2, test_labels2)
traindataloader2 = torch.utils.data.DataLoader(dataset=traindataset2,batch_size=batch_size,shuffle=True)
testdataloader2 = torch.utils.data.DataLoader(dataset=testdataset2,batch_size=batch_size,shuffle=True)
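A quick sanity check (my addition, not in the original) that the concatenated splits are balanced and have the expected shapes:

# Each split concatenates one positive and one negative cluster, so the
# fraction of positive labels should be exactly 0.5
print(train_labels2.mean().item(), test_labels2.mean().item())
print(train_features2.shape, test_features2.shape)  # (14000, 200) and (6000, 200)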
from torch.nn.functional import binary_cross_entropy
from torch.nn import CrossEntropyLoss
class Net2():
    def __init__(self):
        # Number of input, hidden, and output units
        num_inputs, num_hiddens, num_outputs = 200, 256, 1
        w_1 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_inputs)), dtype=torch.float32, requires_grad=True)
        b_1 = torch.zeros(num_hiddens, dtype=torch.float32, requires_grad=True)
        w_2 = torch.tensor(np.random.normal(0, 0.01, (num_outputs, num_hiddens)), dtype=torch.float32, requires_grad=True)
        b_2 = torch.zeros(num_outputs, dtype=torch.float32, requires_grad=True)
        self.params = [w_1, b_1, w_2, b_2]
        # Define the model structure
        self.input_layer = lambda x: x.view(x.shape[0], -1)
        self.hidden_layer = lambda x: self.my_relu(torch.matmul(x, w_1.t()) + b_1)
        self.output_layer = lambda x: torch.matmul(x, w_2.t()) + b_2

    def my_relu(self, x):
        return torch.max(input=x, other=torch.tensor(0.0))

    def logistic(self, x):  # logistic (sigmoid) function
        return 1.0 / (1.0 + torch.exp(-x))

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.hidden_layer(x)  # ReLU is applied inside hidden_layer
        x = self.logistic(self.output_layer(x))
        return x
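An aside on this design (my addition): applying a hand-rolled sigmoid and then binary_cross_entropy works, but can be numerically unstable for large-magnitude logits; PyTorch's fused alternative gives the same value more stably:

from torch.nn.functional import binary_cross_entropy_with_logits

logits = torch.randn(5)
targets = torch.randint(0, 2, (5,)).float()
# Fused sigmoid + BCE, numerically stabler than the two-step version
print(binary_cross_entropy_with_logits(logits, targets).item())
print(binary_cross_entropy(torch.sigmoid(logits), targets).item())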
# Training
model2 = Net2()
lr = 0.005  # learning rate
epochs = 40  # number of training epochs
train_all_loss2 = []  # record the loss on the training set
test_all_loss2 = []  # record the loss on the test set
train_Acc12, test_Acc12 = [], []
begintime2 = time.time()
for epoch in range(epochs):
    train_l, train_epoch_count = 0, 0
    for data, labels in traindataloader2:
        pred = model2.forward(data)
        train_each_loss = binary_cross_entropy(pred.view(-1), labels.view(-1))  # loss for this batch
        train_l += train_each_loss.item()
        train_each_loss.backward()  # backpropagation
        SGD(model2.params, lr)  # stochastic gradient descent step
        # Zero the gradients
        for param in model2.params:
            param.grad.data.zero_()
        # Predict label 1 when the sigmoid output exceeds 0.5
        train_epoch_count += ((pred.view(-1) > 0.5).float() == labels).sum()
    train_Acc12.append((train_epoch_count / len(traindataset2)).item())
    train_all_loss2.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_l, test_epoch_count = 0, 0
        for data, labels in testdataloader2:
            pred = model2.forward(data)
            test_each_loss = binary_cross_entropy(pred.view(-1), labels.view(-1))
            test_l += test_each_loss.item()
            test_epoch_count += ((pred.view(-1) > 0.5).float() == labels).sum()
        test_Acc12.append((test_epoch_count / len(testdataset2)).item())
        test_all_loss2.append(test_l)
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f | train acc:%.5f | test acc:%.5f'
              % (epoch + 1, train_all_loss2[-1], test_all_loss2[-1], train_Acc12[-1], test_Acc12[-1]))
endtime2 = time.time()
print("手动实现前馈网络-二分类实验 %d轮 总用时: %.3f" % (epochs, endtime2 - begintime2))
# Multi-class classification task (Fashion-MNIST)
batch_size = 256
traindataset3 = torchvision.datasets.FashionMNIST(root='./FashionMNIST', train=True, download=True, transform=transforms.ToTensor())
testdataset3 = torchvision.datasets.FashionMNIST(root='./FashionMNIST', train=False, download=True, transform=transforms.ToTensor())
traindataloader3 = torch.utils.data.DataLoader(traindataset3, batch_size=batch_size, shuffle=True)
testdataloader3 = torch.utils.data.DataLoader(testdataset3, batch_size=batch_size, shuffle=False)
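A small check (my addition) of what one batch from these loaders looks like:

# Images come out as (batch, 1, 28, 28) tensors in [0, 1]; labels are class indices 0-9
X, y = next(iter(traindataloader3))
print(X.shape, y.shape)  # torch.Size([256, 1, 28, 28]) torch.Size([256])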
# Define the feedforward network by hand
class MyNet3():
    def __init__(self):
        # Number of input, hidden, and output units
        num_inputs, num_hiddens, num_outputs = 28 * 28, 256, 10  # ten-class problem
        w_1 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_inputs)), dtype=torch.float32, requires_grad=True)
        b_1 = torch.zeros(num_hiddens, dtype=torch.float32, requires_grad=True)
        w_2 = torch.tensor(np.random.normal(0, 0.01, (num_outputs, num_hiddens)), dtype=torch.float32, requires_grad=True)
        b_2 = torch.zeros(num_outputs, dtype=torch.float32, requires_grad=True)
        self.params = [w_1, b_1, w_2, b_2]
        # Define the model structure
        self.input_layer = lambda x: x.view(x.shape[0], -1)
        self.hidden_layer = lambda x: self.my_relu(torch.matmul(x, w_1.t()) + b_1)
        self.output_layer = lambda x: torch.matmul(x, w_2.t()) + b_2

    def my_relu(self, x):
        return torch.max(input=x, other=torch.tensor(0.0))

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.hidden_layer(x)
        x = self.output_layer(x)
        return x

def mySGD(params, lr, batchsize):
    # Note: the loss used below is already a batch mean, so dividing by
    # batchsize again makes the effective step size lr / batchsize
    for param in params:
        param.data -= lr * param.grad / batchsize
# Training
model3 = MyNet3()  # manually implemented feedforward network
criterion = my_cross_entropy_loss  # loss function
lr = 0.15  # learning rate
epochs = 40  # number of training epochs
train_all_loss3 = []  # record the loss on the training set
test_all_loss3 = []  # record the loss on the test set
train_ACC13, test_ACC13 = [], []  # record the accuracy
begintime3 = time.time()
for epoch in range(epochs):
    train_l, train_acc_num = 0, 0
    for data, labels in traindataloader3:
        pred = model3.forward(data)
        train_each_loss = criterion(pred, labels)  # loss for this batch
        train_l += train_each_loss.item()
        train_each_loss.backward()  # backpropagation
        mySGD(model3.params, lr, 128)  # mini-batch stochastic gradient descent step
        train_acc_num += (pred.argmax(dim=1) == labels).sum().item()
        # Zero the gradients
        for param in model3.params:
            param.grad.data.zero_()
    train_all_loss3.append(train_l)  # record the epoch loss
    train_ACC13.append(train_acc_num / len(traindataset3))  # record the epoch accuracy
    with torch.no_grad():
        test_l, test_acc_num = 0, 0
        for data, labels in testdataloader3:
            pred = model3.forward(data)
            test_each_loss = criterion(pred, labels)
            test_l += test_each_loss.item()
            test_acc_num += (pred.argmax(dim=1) == labels).sum().item()
        test_all_loss3.append(test_l)
        test_ACC13.append(test_acc_num / len(testdataset3))  # record the epoch accuracy
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f | train acc: %.2f | test acc: %.2f'
              % (epoch + 1, train_l, test_l, train_ACC13[-1], test_ACC13[-1]))
endtime3 = time.time()
print("手动实现前馈网络-多分类实验 %d轮 总用时: %.3f" % (epochs, endtime3 - begintime3))
# Result analysis
def picture(name, trainl, testl, type='Loss'):
    plt.rcParams["font.sans-serif"] = ["SimHei"]  # font able to render CJK characters
    plt.rcParams["axes.unicode_minus"] = False  # render the minus sign correctly
    plt.title(name)
    plt.plot(trainl, c='g', label='Train ' + type)
    plt.plot(testl, c='r', label='Test ' + type)
    plt.xlabel('Epoch')
    plt.ylabel(type)
    plt.legend()
    plt.grid(True)

plt.figure(figsize=(12, 3))
plt.subplot(131)
picture('FFN - regression - loss curves', train_all_loss1, test_all_loss1)
plt.subplot(132)
picture('FFN - binary classification - loss curves', train_all_loss2, test_all_loss2)
plt.subplot(133)
picture('FFN - multi-class - loss curves', train_all_loss3, test_all_loss3)
plt.show()
# Plot the accuracy curves
plt.figure(figsize=(8, 3))
plt.subplot(121)
picture('FFN - binary classification - accuracy', train_Acc12, test_Acc12, type='ACC')
plt.subplot(122)
picture('FFN - multi-class - accuracy', train_ACC13, test_ACC13, type='ACC')
plt.show()

II. Implementing feedforward neural networks with torch.nn for the regression, binary classification, and multi-class tasks

2.1 Task description

Analyze the experimental results in terms of training time, prediction accuracy, loss curves, etc. (preferably with charts).

2.2 Task approach and code

from torch.nn import MSELoss
from torch.optim import SGD
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Regression task
class MyNet21(nn.Module):
    def __init__(self):
        super(MyNet21, self).__init__()
        # Number of input, hidden, and output units
        num_inputs, num_hiddens, num_outputs = 500, 256, 1
        # Define the model structure
        self.input_layer = nn.Flatten()
        self.hidden_layer = nn.Linear(num_inputs, num_hiddens)
        self.output_layer = nn.Linear(num_hiddens, num_outputs)
        self.relu = nn.ReLU()

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.relu(self.hidden_layer(x))
        x = self.output_layer(x)
        return x

# Training
model21 = MyNet21()  # feedforward network built with torch.nn
model21 = model21.to(device)
print(model21)
criterion = MSELoss()  # loss function
criterion = criterion.to(device)
optimizer = SGD(model21.parameters(), lr=0.1)  # optimizer
epochs = 40  # number of training epochs
train_all_loss21 = []  # record the loss on the training set
test_all_loss21 = []  # record the loss on the test set
begintime21 = time.time()
for epoch in range(epochs):
    train_l = 0
    for data, labels in traindataloader1:
        data, labels = data.to(device=device), labels.to(device)
        pred = model21(data)
        train_each_loss = criterion(pred.view(-1, 1), labels.view(-1, 1))  # loss for this batch
        optimizer.zero_grad()  # zero the gradients
        train_each_loss.backward()  # backpropagation
        optimizer.step()  # parameter update
        train_l += train_each_loss.item()
    train_all_loss21.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_loss = 0
        for data, labels in testdataloader1:
            data, labels = data.to(device), labels.to(device)
            pred = model21(data)
            test_each_loss = criterion(pred, labels)
            test_loss += test_each_loss.item()
        test_all_loss21.append(test_loss)
    if epoch == 0 or (epoch + 1) % 10 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f' % (epoch + 1, train_all_loss21[-1], test_all_loss21[-1]))
endtime21 = time.time()
print("torch.nn实现前馈网络-回归实验 %d轮 总用时: %.3fs" % (epochs, endtime21 - begintime21))
# Binary classification task
class MyNet22(nn.Module):
    def __init__(self):
        super(MyNet22, self).__init__()
        # Number of input, hidden, and output units
        num_inputs, num_hiddens, num_outputs = 200, 256, 1
        # Define the model structure
        self.input_layer = nn.Flatten()
        self.hidden_layer = nn.Linear(num_inputs, num_hiddens)
        self.output_layer = nn.Linear(num_hiddens, num_outputs)
        self.relu = nn.ReLU()

    def logistic(self, x):  # logistic (sigmoid) function
        return 1.0 / (1.0 + torch.exp(-x))

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.relu(self.hidden_layer(x))
        x = self.logistic(self.output_layer(x))
        return x

# Training
model22 = MyNet22()  # network with a logistic output
model22 = model22.to(device)
print(model22)
optimizer = SGD(model22.parameters(), lr=0.001)  # optimizer
epochs = 40  # number of training epochs
train_all_loss22 = []  # record the loss on the training set
test_all_loss22 = []  # record the loss on the test set
train_ACC22, test_ACC22 = [], []
begintime22 = time.time()
for epoch in range(epochs):
    # Per-epoch training loss, training-set correct count, test-set correct count
    train_l, train_epoch_count, test_epoch_count = 0, 0, 0
    for data, labels in traindataloader2:
        data, labels = data.to(device), labels.to(device)
        pred = model22(data)
        train_each_loss = binary_cross_entropy(pred.view(-1), labels.view(-1))  # loss for this batch
        optimizer.zero_grad()  # zero the gradients
        train_each_loss.backward()  # backpropagation
        optimizer.step()  # parameter update
        train_l += train_each_loss.item()
        # Predict label 1 when the output exceeds 0.5, otherwise 0
        pred = torch.tensor(np.where(pred.cpu() > 0.5, 1, 0))
        each_count = (pred.view(-1) == labels.cpu()).sum()  # correct predictions in this batch
        train_epoch_count += each_count  # accumulate the per-epoch correct count
    train_ACC22.append(train_epoch_count / len(traindataset2))
    train_all_loss22.append(train_l)  # record the epoch loss
    with torch.no_grad():
        test_loss, each_count = 0, 0
        for data, labels in testdataloader2:
            data, labels = data.to(device), labels.to(device)
            pred = model22(data)
            test_each_loss = binary_cross_entropy(pred.view(-1), labels)
            test_loss += test_each_loss.item()
            # .cpu() moves the tensor back to the CPU for the comparison
            pred = torch.tensor(np.where(pred.cpu() > 0.5, 1, 0))
            each_count = (pred.view(-1) == labels.cpu().view(-1)).sum()
            test_epoch_count += each_count
        test_all_loss22.append(test_loss)
        test_ACC22.append(test_epoch_count / len(testdataset2))
    if epoch == 0 or (epoch + 1) % 4 == 0:
        print('epoch: %d | train loss:%.5f | test loss:%.5f | train acc:%.5f | test acc:%.5f'
              % (epoch + 1, train_all_loss22[-1], test_all_loss22[-1], train_ACC22[-1], test_ACC22[-1]))
endtime22 = time.time()
print("torch.nn实现前馈网络-二分类实验 %d轮 总用时: %.3fs" % (epochs, endtime22 - begintime22))
from torch.nn import CrossEntropyLoss
# Define a configurable feedforward network
class MyNet23(nn.Module):
    def __init__(self, num_hiddenlayer=1, num_inputs=28*28, num_hiddens=[256], num_outs=10, act='relu'):
        super(MyNet23, self).__init__()
        # Number of input, hidden, and output units
        self.num_inputs, self.num_hiddens, self.num_outputs = num_inputs, num_hiddens, num_outs  # ten-class problem
        # Choose the activation function
        if act == 'relu':
            self.act = nn.ReLU()
        elif act == 'sigmoid':
            self.act = nn.Sigmoid()
        elif act == 'tanh':
            self.act = nn.Tanh()
        elif act == 'elu':
            self.act = nn.ELU()
        print(f'Activation function used in this run: {act}')
        # Define the model structure
        self.input_layer = nn.Flatten()
        # A single hidden layer
        if num_hiddenlayer == 1:
            self.hidden_layers = nn.Linear(self.num_inputs, self.num_hiddens[-1])
        else:  # several hidden layers, with the activation applied between consecutive layers
            self.hidden_layers = nn.Sequential()
            self.hidden_layers.add_module("hidden_layer1", nn.Linear(self.num_inputs, self.num_hiddens[0]))
            for i in range(0, num_hiddenlayer - 1):
                self.hidden_layers.add_module("act" + str(i + 1), self.act)
                self.hidden_layers.add_module("hidden_layer" + str(i + 2), nn.Linear(self.num_hiddens[i], self.num_hiddens[i + 1]))
        self.output_layer = nn.Linear(self.num_hiddens[-1], self.num_outputs)

    # Forward pass
    def forward(self, x):
        x = self.input_layer(x)
        x = self.act(self.hidden_layers(x))
        x = self.output_layer(x)
        return x

# Training
# Use the default configuration: num_inputs=28*28, num_hiddens=[256], num_outs=10, act='relu'
model23 = MyNet23()
model23 = model23.to(device)

# Wrap the training procedure in a function so experiments three and four can reuse it
def train_and_test(model=model23):
    MyModel = model
    print(MyModel)
    optimizer = SGD(MyModel.parameters(), lr=0.01)  # optimizer
    epochs = 40  # number of training epochs
    criterion = CrossEntropyLoss()  # loss function
    train_all_loss23 = []  # record the loss on the training set
    test_all_loss23 = []  # record the loss on the test set
    train_ACC23, test_ACC23 = [], []
    begintime23 = time.time()
    for epoch in range(epochs):
        train_l, train_epoch_count, test_epoch_count = 0, 0, 0
        for data, labels in traindataloader3:
            data, labels = data.to(device), labels.to(device)
            pred = MyModel(data)
            train_each_loss = criterion(pred, labels.view(-1))  # loss for this batch
            optimizer.zero_grad()  # zero the gradients
            train_each_loss.backward()  # backpropagation
            optimizer.step()  # parameter update
            train_l += train_each_loss.item()
            train_epoch_count += (pred.argmax(dim=1) == labels).sum()
        train_ACC23.append(train_epoch_count.cpu() / len(traindataset3))
        train_all_loss23.append(train_l)  # record the epoch loss
        with torch.no_grad():
            test_loss, test_epoch_count = 0, 0
            for data, labels in testdataloader3:
                data, labels = data.to(device), labels.to(device)
                pred = MyModel(data)
                test_each_loss = criterion(pred, labels)
                test_loss += test_each_loss.item()
                test_epoch_count += (pred.argmax(dim=1) == labels).sum()
            test_all_loss23.append(test_loss)
            test_ACC23.append(test_epoch_count.cpu() / len(testdataset3))
        if epoch == 0 or (epoch + 1) % 4 == 0:
            print('epoch: %d | train loss:%.5f | test loss:%.5f | train acc:%.5f | test acc:%.5f' % (
                epoch + 1, train_all_loss23[-1], test_all_loss23[-1], train_ACC23[-1], test_ACC23[-1]))
    endtime23 = time.time()
    print("torch.nn FFN - multi-class: %d epochs, total time: %.3fs" % (epochs, endtime23 - begintime23))
    # Return the loss and accuracy histories on the training and test sets
    return train_all_loss23, test_all_loss23, train_ACC23, test_ACC23

train_all_loss23, test_all_loss23, train_ACC23, test_ACC23 = train_and_test(model=model23)
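Since part four compares training cost across architectures, it can help to record model size as well; a one-liner (my addition) that counts trainable parameters of any nn.Module:

# For the default MyNet23: 28*28*256 + 256 + 256*10 + 10 = 203,530 parameters
print(sum(p.numel() for p in model23.parameters() if p.requires_grad))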

III. Using at least three different activation functions in the multi-class task

3.1 Task description

Run comparison experiments with different activation functions and analyze the results.

3.2 Task approach and code

# Reuse the multi-class model from part II with Tanh as the activation function
model31 = MyNet23(1, 28*28, [256], 10, act='tanh')
model31 = model31.to(device)
train_all_loss31, test_all_loss31, train_ACC31, test_ACC31 = train_and_test(model=model31)
# Reuse the multi-class model from part II with Sigmoid as the activation function
model32 = MyNet23(1, 28*28, [256], 10, act='sigmoid')
model32 = model32.to(device)
train_all_loss32, test_all_loss32, train_ACC32, test_ACC32 = train_and_test(model=model32)
# Reuse the multi-class model from part II with ELU as the activation function
model33 = MyNet23(1, 28*28, [256], 10, act='elu')
model33 = model33.to(device)
train_all_loss33, test_all_loss33, train_ACC33, test_ACC33 = train_and_test(model=model33)
def Plot3(datalist, title='1', ylabel='Loss', flag='act'):
    plt.rcParams["font.sans-serif"] = ["SimHei"]  # font able to render CJK characters
    plt.rcParams["axes.unicode_minus"] = False  # render the minus sign correctly
    plt.title(title)
    plt.xlabel('Epoch')
    plt.ylabel(ylabel)
    plt.plot(datalist[0], label='Tanh' if flag == 'act' else '[128]')
    plt.plot(datalist[1], label='Sigmoid' if flag == 'act' else '[512 256]')
    plt.plot(datalist[2], label='ELU' if flag == 'act' else '[512 256 128]')
    plt.plot(datalist[3], label='ReLU' if flag == 'act' else '[256]')
    plt.legend()
    plt.grid(True)
plt.figure(figsize=(16,3))
plt.subplot(141)
Plot3([train_all_loss31,train_all_loss32,train_all_loss33,train_all_loss23],title='Train_Loss')
plt.subplot(142)
Plot3([test_all_loss31,test_all_loss32,test_all_loss33,test_all_loss23],title='Test_Loss')
plt.subplot(143)
Plot3([train_ACC31,train_ACC32,train_ACC33,train_ACC23],title='Train_ACC')
plt.subplot(144)
Plot3([test_ACC31,test_ACC32,test_ACC33,test_ACC23],title='Test_ACC')
plt.show()

IV. Evaluating the effect of the number of hidden layers and hidden units in the multi-class task

4.1 Task description

Run comparison experiments with different numbers of hidden layers and hidden units and analyze the results.

4.2 Task approach and code

# Reuse the multi-class model from part II: one hidden layer with [128] units
model41 = MyNet23(1, 28*28, [128], 10, act='relu')
model41 = model41.to(device)
train_all_loss41, test_all_loss41, train_ACC41, test_ACC41 = train_and_test(model=model41)
# Reuse the multi-class model from part II: two hidden layers with [512, 256] units
model42 = MyNet23(2, 28*28, [512, 256], 10, act='relu')
model42 = model42.to(device)
train_all_loss42, test_all_loss42, train_ACC42, test_ACC42 = train_and_test(model=model42)
# Reuse the multi-class model from part II: three hidden layers with [512, 256, 128] units
model43 = MyNet23(3, 28*28, [512, 256, 128], 10, act='relu')
model43 = model43.to(device)
train_all_loss43, test_all_loss43, train_ACC43, test_ACC43 = train_and_test(model=model43)
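To relate the analysis below to model size, a sketch (my addition) comparing trainable-parameter counts across the four configurations:

# Larger parameter counts roughly track longer per-epoch training times
for name, m in [('[256]', model23), ('[128]', model41),
                ('[512 256]', model42), ('[512 256 128]', model43)]:
    print(name, sum(p.numel() for p in m.parameters() if p.requires_grad))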
plt.figure(figsize=(16,3))
plt.subplot(141)
Plot3([train_all_loss41,train_all_loss42,train_all_loss43,train_all_loss23],title='Train_Loss',flag='hidden')
plt.subplot(142)
Plot3([test_all_loss41,test_all_loss42,test_all_loss43,test_all_loss23],title='Test_Loss',flag='hidden')
plt.subplot(143)
Plot3([train_ACC41,train_ACC42,train_ACC43,train_ACC23],title='Train_ACC',flag='hidden')
plt.subplot(144)
Plot3([test_ACC41,test_ACC42,test_ACC43,test_ACC23],title='Test_ACC', flag='hidden')
plt.show()

Analysis of the experimental results

  1. Training time shows that, roughly, the more hidden layers and hidden units a model has, the higher the training cost and the longer training takes.
  2. Accuracy shows that more hidden layers and more hidden units do not necessarily improve results and can even have the opposite effect: an oversized model may overfit, reaching high accuracy on the training set but noticeably lower accuracy on the test set; a quick check of this gap follows below.
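A simple way (my addition) to quantify the overfitting mentioned in point 2 is the gap between final training and test accuracy for each configuration:

# A larger train/test accuracy gap suggests stronger overfitting
for name, tr, te in [('[256]', train_ACC23, test_ACC23), ('[128]', train_ACC41, test_ACC41),
                     ('[512 256]', train_ACC42, test_ACC42), ('[512 256 128]', train_ACC43, test_ACC43)]:
    print('%s: train %.4f | test %.4f | gap %.4f' % (name, tr[-1], te[-1], tr[-1] - te[-1]))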
