This article walks through a Keras implementation of multi-class topic classification on the Reuters newswire dataset; hopefully it serves as a useful reference for anyone tackling the same problem.
Reuters topic classification
The dataset contains 11,228 newswires from Reuters, labeled over 46 topics. As with the IMDB dataset, each newswire is encoded as a sequence of word indices.
If the reuters dataset fails to download automatically, see this post for download and usage instructions:
https://blog.csdn.net/sinat_41144773/article/details/89843688
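To get a feel for the index encoding, here is a sketch of decoding a newswire back to text. The four-word `word_index` below is hypothetical (the real mapping comes from `reuters.get_word_index()`), and it assumes the usual Keras convention that indices 0-2 are reserved (padding, start-of-sequence, unknown), so word indices are offset by 3:

```python
# Hypothetical mini word index; the real one comes from reuters.get_word_index().
word_index = {"oil": 1, "prices": 2, "rose": 3, "sharply": 4}
reverse_index = {v: k for k, v in word_index.items()}

def decode_newswire(sequence):
    # subtract 3 to undo the three reserved indices; unknown indices map to "?"
    return " ".join(reverse_index.get(i - 3, "?") for i in sequence)

print(decode_newswire([4, 5, 6, 7]))  # -> oil prices rose sharply
```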
Code implementation
from keras.datasets import reuters
from keras.utils.np_utils import to_categorical
from keras import models
from keras.layers import Dense
import numpy as np
import matplotlib.pyplot as plt
from keras.optimizers import Adam
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
# load the data
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=8000)

# vectorize sequences into multi-hot vectors
def vectorize_sequences(sequences, dimension=8000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(train_data)
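A quick sanity check of this helper on a toy batch (dimension shrunk to 10 for readability; note that repeated indices collapse to a single 1):

```python
import numpy as np

def vectorize_sequences(sequences, dimension=10):
    # same multi-hot scheme as above, just with a tiny dimension
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

vec = vectorize_sequences([[1, 3, 3], [0, 9]])
print(vec[0])  # -> [0. 1. 0. 1. 0. 0. 0. 0. 0. 0.]
```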
x_test = vectorize_sequences(test_data)

# use the Keras built-in helper to convert integer labels to one-hot vectors
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
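`to_categorical` maps each integer class id to a one-hot row; a numpy-only equivalent (a sketch, not the Keras implementation) is an identity-matrix lookup:

```python
import numpy as np

labels = np.array([0, 2, 1])
num_classes = 3
one_hot = np.eye(num_classes)[labels]  # row i is the one-hot vector for labels[i]
print(one_hot)
```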

# model setup
model = models.Sequential()
model.add(Dense(64, activation='relu', input_shape=(8000,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(46, activation='softmax'))

# model summary and compile
model.summary()
model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
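With one-hot targets, `categorical_crossentropy` reduces to the negative log-probability the softmax assigns to the true class. A numpy-only sketch (illustrative, not the Keras implementation):

```python
import numpy as np

def categorical_crossentropy(y_true_onehot, y_pred_probs, eps=1e-7):
    p = np.clip(y_pred_probs, eps, 1.0)  # avoid log(0)
    return -np.mean(np.sum(y_true_onehot * np.log(p), axis=1))

y_true = np.array([[0., 1., 0.]])
y_pred = np.array([[0.1, 0.8, 0.1]])
loss = categorical_crossentropy(y_true, y_pred)
print(round(loss, 3))  # -> 0.223  (i.e. -log(0.8))
```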

# validate our approach on a held-out slice of the training data
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]

# train the model
history = model.fit(partial_x_train, partial_y_train, epochs=10, batch_size=256, validation_data=(x_val, y_val))

# plot the training and validation loss
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'g', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
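One practical use of the validation curve is choosing how many epochs to train for before refitting on the full training set: take the epoch where validation loss bottoms out. The history values below are made up for illustration:

```python
import numpy as np

val_loss = [1.8, 1.3, 1.1, 1.0, 0.98, 1.02, 1.10, 1.21]  # hypothetical curve
best_epoch = int(np.argmin(val_loss)) + 1  # +1 because epochs are 1-indexed
print(best_epoch)  # -> 5
```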

# plot the training and validation accuracy (uncomment to use)
# plt.clf()
# acc = history.history['acc']
# val_acc = history.history['val_acc']
# plt.plot(epochs, acc, 'ro', label='Training acc')
# plt.plot(epochs, val_acc, 'r', label='Validating acc')
# plt.title('Training and Validating accuracy')
# plt.xlabel('Epochs')
# plt.ylabel('accuracy')
# plt.legend()
# plt.show()

# evaluate loss and accuracy on the test set
final_result = model.evaluate(x_test, one_hot_test_labels)
print(final_result)

y_predict = model.predict(x_test, batch_size=512, verbose=1)
# y_predict = (y_predict > 0.007).astype(int)
y_predict = (y_predict > 0.01).astype(int)
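Binarising the softmax outputs with a fixed 0.01 threshold, as above, can flag several classes per newswire; the standard single-label decision is `argmax`. A toy comparison (probabilities made up):

```python
import numpy as np

probs = np.array([[0.980, 0.015, 0.005],
                  [0.004, 0.006, 0.990]])

thresholded = (probs > 0.01).astype(int)  # may flag more than one class per row
argmaxed = np.argmax(probs, axis=1)       # exactly one class per row
print(thresholded.tolist())  # -> [[1, 1, 0], [0, 0, 1]]
print(argmaxed.tolist())     # -> [0, 2]
```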
y_true = np.reshape(one_hot_test_labels, [-1])
y_pred = np.reshape(y_predict, [-1])

# evaluation metrics
accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred, average='binary')
f1score = f1_score(y_true, y_pred, average='binary')
micro_f1 = f1_score(y_true, y_pred, average='micro')
macro_f1 = f1_score(y_true, y_pred, average='macro')

print('accuracy:', accuracy)
print('precision:',precision)
print('recall:',recall)
print('f1score:',f1score)
print('Macro-F1: {}'.format(macro_f1))
print('Micro-F1: {}'.format(micro_f1))
Evaluation metrics, including the multi-class F-measures Macro-F1 and Micro-F1:
accuracy: 0.9427097448604282
precision: 0.26482264054296323
recall: 0.9207479964381122
f1score: 0.4113376429636996
Macro-F1: 0.6906136522606285
Micro-F1: 0.9427097448604282
Because the one-hot matrices are flattened into a single binary vector before scoring, Micro-F1 pools every 0/1 decision and therefore coincides with accuracy, while Macro-F1 is the unweighted mean of the F1 scores for the 0 class and the 1 class.
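To see the two averages diverge on a genuinely multi-class labelling (rather than the flattened binary vectors above), here is a numpy-only sketch with made-up toy labels: Macro-F1 averages per-class F1 scores, while Micro-F1 pools the counts.

```python
import numpy as np

y_true = np.array([0, 0, 1, 2])  # toy ground truth
y_pred = np.array([0, 1, 1, 2])  # toy predictions

f1_per_class = []
tp_all = fp_all = fn_all = 0
for c in (0, 1, 2):
    tp = int(np.sum((y_pred == c) & (y_true == c)))
    fp = int(np.sum((y_pred == c) & (y_true != c)))
    fn = int(np.sum((y_pred != c) & (y_true == c)))
    f1_per_class.append(2 * tp / (2 * tp + fp + fn))
    tp_all += tp; fp_all += fp; fn_all += fn

macro_f1 = float(np.mean(f1_per_class))                  # unweighted mean over classes
micro_f1 = 2 * tp_all / (2 * tp_all + fp_all + fn_all)   # pooled counts
print(round(macro_f1, 4), round(micro_f1, 4))  # -> 0.7778 0.75
```

Note that the micro average (0.75) equals plain accuracy (3 of 4 correct), which is always the case for single-label classification.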
Loss curves (training vs. validation, figure):
That wraps up this walkthrough of multi-class Reuters newswire classification with Keras; hopefully it is a useful reference.