IMDB Movie Review Text Classification with Genetic-Algorithm Feature Selection and a Single-Layer Perceptron

This article presents a case study of IMDB movie review text classification that combines genetic-algorithm feature selection with a single-layer perceptron model. Hopefully it offers a useful reference for developers working on similar problems.

IMDB Movie Review Text Classification with Genetic-Algorithm Feature Selection and a Single-Layer Perceptron

  • 1. Data loading and preprocessing
  • 2. Building the perceptron model
  • 3. Model training
  • 4. Feature selection with a genetic algorithm
    • Notes
  • 5. Contact us

1. Data Loading and Preprocessing

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
from keras.datasets import imdb
from keras.preprocessing import sequence
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib.pyplot as plt

max_features = 10000
maxlen = 200
batch_size = 32

# Load the IMDB dataset
print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')

# Limit review length and pad the sequences
print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)[:2000]
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)[:2000]
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)

# Convert the integer sequences back to text
word_index = imdb.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in input_train[0]])

# Represent the text with a bag-of-words model
vectorizer = CountVectorizer(max_features=max_features)
# note: the loop variable is named seq to avoid shadowing the imported keras `sequence` module
X_train = vectorizer.fit_transform([' '.join([reverse_word_index.get(i - 3, '?') for i in seq]) for seq in input_train])
X_test = vectorizer.transform([' '.join([reverse_word_index.get(i - 3, '?') for i in seq]) for seq in input_test])

# Convert the data to PyTorch tensors
X_train_tensor = torch.tensor(X_train.toarray(), dtype=torch.float32)
y_train_tensor = torch.tensor(y_train[:2000], dtype=torch.float32)  # keep labels aligned with the truncated inputs
X_test_tensor = torch.tensor(X_test.toarray(), dtype=torch.float32)
y_test_tensor = torch.tensor(y_test[:2000], dtype=torch.float32)

batch_size = 2000
train_iter = DataLoader(TensorDataset(X_train_tensor, y_train_tensor), batch_size)
test_iter = DataLoader(TensorDataset(X_test_tensor, y_test_tensor), batch_size)
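
Before building the model, a quick shape check can confirm that features and labels are aligned; this is a small optional sketch, not part of the original code.

# Optional sanity check: features and labels must have the same number of rows,
# and the second dimension is the bag-of-words vocabulary size (at most max_features)
print('X_train_tensor:', X_train_tensor.shape, ' y_train_tensor:', y_train_tensor.shape)
print('X_test_tensor:', X_test_tensor.shape, ' y_test_tensor:', y_test_tensor.shape)
assert X_train_tensor.shape[0] == y_train_tensor.shape[0]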

2. Building the Perceptron Model

# Define the perceptron network
class Perceptron(nn.Module):
    def __init__(self, input_size):
        super(Perceptron, self).__init__()
        self.fc = nn.Linear(input_size, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.fc(x)
        x = self.sigmoid(x)
        return x

# Train the perceptron model
def train(model, iterator, optimizer, criterion):
    model.train()
    for batch in iterator:
        optimizer.zero_grad()
        text, label = batch
        predictions = model(text).squeeze(1)
        loss = criterion(predictions, label)
        loss.backward()
        optimizer.step()

# Evaluate the perceptron model
def evaluate(model, iterator, criterion):
    model.eval()
    total_loss = 0
    total_correct = 0
    with torch.no_grad():
        for batch in iterator:
            text, label = batch
            predictions = model(text).squeeze(1)
            loss = criterion(predictions, label)
            total_loss += loss.item()
            rounded_preds = torch.round(predictions)
            total_correct += (rounded_preds == label).sum().item()
    return total_loss / len(iterator), total_correct / len(iterator.dataset)

# Initialize the perceptron model
input_size = X_train_tensor.shape[1]
model = Perceptron(input_size)
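
As a quick smoke test, not part of the original code, one batch can be pushed through the untrained model to confirm the output shape:

# Optional smoke test: one full batch through the untrained perceptron
with torch.no_grad():
    sample_x, sample_y = next(iter(train_iter))
    sample_out = model(sample_x).squeeze(1)
print(sample_out.shape, sample_y.shape)  # both have length equal to the batch size (2000 here)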

3. Model Training

# Define the loss function and optimizer
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

N_EPOCHS = 10
eval_acc_list = []
for epoch in range(N_EPOCHS):
    train(model, train_iter, optimizer, criterion)
    eval_loss, eval_acc = evaluate(model, test_iter, criterion)
    eval_acc_list.append(eval_acc)
    print(f'Epoch: {epoch+1}, Test Loss: {eval_loss:.3f}, Test Acc: {eval_acc*100:.2f}%')

plt.plot(range(N_EPOCHS), eval_acc_list)
plt.title('Test Accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
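
It can also be handy to keep the final baseline accuracy around for later comparison with the GA run; baseline_acc is a name introduced here, not part of the original script.

# Record the baseline accuracy obtained with all features, for later comparison
baseline_acc = eval_acc_list[-1]
print(f'Baseline test accuracy with all {input_size} features: {baseline_acc*100:.2f}%')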

[Figure: test accuracy per epoch for the baseline perceptron]

4. Feature Selection with a Genetic Algorithm

# Randomly initialize the chromosomes
def initialize_population(population_size, num_genes):
    # # Option 1: bias the initialization towards keeping most features
    # p = np.array([0.05, 0.95])
    # return np.random.choice([0, 1], size=(population_size, num_genes), p=p.ravel())
    # Option 2: keep each feature with probability 0.5
    return np.random.choice([0, 1], size=(population_size, num_genes))

# Compute fitness as the classifier's accuracy
def calculate_fitness(population, model, criterion):
    fitness = []
    for chromosome in population:  # each chromosome is a 0-1 sequence over the features
        selected_features = np.where(chromosome == 1)[0]
        # Update the model's input dimension
        input_dim = len(selected_features)
        model.fc = nn.Linear(input_dim, 1)
        optimizer = optim.Adam(model.parameters(), lr=0.001)
        idx = torch.tensor(selected_features)
        train_iter = DataLoader(TensorDataset(X_train_tensor[:, idx], y_train_tensor), batch_size)
        test_iter = DataLoader(TensorDataset(X_test_tensor[:, idx], y_test_tensor), batch_size)
        # Train and record the accuracy
        N_EPOCHS = 10
        for epoch in range(N_EPOCHS):
            train(model, train_iter, optimizer, criterion)
        test_loss, test_acc = evaluate(model, test_iter, criterion)
        model.train()
        fitness.append(test_acc)
    return np.array(fitness)

# Selection
def selection(population, fitness):  # input: the population and its accuracies
    probabilities = fitness / sum(fitness)  # accuracy-based selection probabilities
    # # Option 1: no randomness in selection, always take the top 2 as parents
    # probabilities_copy = probabilities.copy()
    # probabilities_copy.sort()
    # max_1 = probabilities_copy[-1]
    # max_2 = probabilities_copy[-2]
    # max_1_index = np.where(probabilities == max_1)
    # max_2_index = np.where(probabilities == max_2)
    # selected_indices = [max_1_index[0].tolist()[0], max_2_index[0].tolist()[0]] * 25
    # Option 2: fitness-proportionate random selection
    selected_indices = np.random.choice(range(len(population)), size=len(population), p=probabilities)
    return population[selected_indices]

# Crossover
def crossover(parents, crossover_rate):
    children = []
    for i in range(0, len(parents), 2):
        parent1, parent2 = parents[i], parents[i + 1]
        if np.random.rand() < crossover_rate:
            crossover_point = np.random.randint(1, len(parent1))
            child1 = np.concatenate((parent1[:crossover_point], parent2[crossover_point:]))
            child2 = np.concatenate((parent2[:crossover_point], parent1[crossover_point:]))
        else:
            child1, child2 = parent1, parent2
        children.extend([child1, child2])
    return np.array(children)

# Mutation
def mutation(children, mutation_rate):
    for i in range(len(children)):
        mutation_points = np.where(np.random.rand(len(children[i])) < mutation_rate)[0]
        children[i][mutation_points] = 1 - children[i][mutation_points]  # flip the selected genes
    return children

# Main routine of the genetic algorithm
def genetic_algorithm(population_size, num_genes, generations, crossover_rate, mutation_rate, model, criterion):
    # Initialize the chromosomes
    population = initialize_population(population_size, num_genes)
    fitness_list = []
    for generation in range(generations):
        print('Generation', generation + 1, ":")
        fitness = calculate_fitness(population, model, criterion)  # array of shape (population_size,) with test accuracies
        # Selection
        selected_population = selection(population, fitness)  # shape (population_size, num_genes); adjacent rows act as parents
        # Crossover
        children = crossover(selected_population, crossover_rate)
        # Mutation
        mutated_children = mutation(children, mutation_rate)
        # Report the best individual of the generation that was just evaluated
        # (computed before the population is replaced, so the index matches the fitness values)
        best_individual = population[np.argmax(fitness)]
        fitness_list.append(fitness.max())
        print(f"Generation {generation + 1}, Best Individual: {best_individual}, Fitness: {fitness.max()}")
        # Form the new population
        population = mutated_children
    plt.plot(range(generations), fitness_list)
    plt.title('Test Accuracy with feature selection via genetic algorithm')
    plt.xlabel('generation')
    plt.ylabel('accuracy')
    plt.show()
    # Return the best individual found in the last evaluated generation
    return best_individual

# Run the genetic algorithm
model = Perceptron(input_size)
best_solution = genetic_algorithm(population_size=50, num_genes=input_size, generations=10, crossover_rate=0.8, mutation_rate=0.1, model=model, criterion=criterion)
print(f"Final Best Solution: {best_solution}")# 解释最优解
selected_features = np.where(best_solution == 1)[0]
print(f"Selected Features: {selected_features}")
print("Shape of Selected Features = ",selected_features.shape)

[Figure: best fitness (test accuracy) per generation with feature selection via the genetic algorithm]
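
A natural follow-up, not shown in the original article, is to retrain a fresh perceptron on the selected feature subset only and compare it against the baseline; the following is a minimal sketch under the same training settings (final_model, final_optimizer and the related iterator names are introduced here).

# Hypothetical follow-up: train a new perceptron that only sees the GA-selected features
idx = torch.tensor(selected_features)
final_model = Perceptron(len(selected_features))
final_optimizer = optim.Adam(final_model.parameters(), lr=0.001)
final_train_iter = DataLoader(TensorDataset(X_train_tensor[:, idx], y_train_tensor), batch_size)
final_test_iter = DataLoader(TensorDataset(X_test_tensor[:, idx], y_test_tensor), batch_size)
for epoch in range(10):
    train(final_model, final_train_iter, final_optimizer, criterion)
final_loss, final_acc = evaluate(final_model, final_test_iter, criterion)
print(f'Test accuracy with only the selected features: {final_acc*100:.2f}%')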

Notes

  1. For this task, Option 1 in the selection function (always taking the two best chromosomes as parents) was more efficient than Option 2 (fitness-proportionate random selection over the population): after 10 generations the validation accuracy was 74% versus 71%.
  2. For this task, biasing initialize_population towards keeping more features (95%, Option 1) was more efficient than selecting features uniformly at random (50%, Option 2).
  3. Every time the model structure is changed to match the dimension of the selected input features, remember to re-declare the optimizer variable, because its construction captures model.parameters(); see the sketch after this list.
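
A minimal illustration of the third point (the chromosome variable is hypothetical, standing in for one individual produced by the GA):

# After replacing the input layer, the old optimizer still references the discarded parameters,
# so it has to be re-created from the new model.parameters()
selected = np.where(chromosome == 1)[0]               # hypothetical chromosome from the GA
model.fc = nn.Linear(len(selected), 1)                # the model structure changes here
optimizer = optim.Adam(model.parameters(), lr=0.001)  # re-declare AFTER swapping the layer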

5. Contact Us

Email: oceannedlg@outlook.com

That concludes this article on IMDB movie review text classification with genetic-algorithm feature selection and a single-layer perceptron model. We hope it proves helpful.


