This article shows how to compute precision@k, recall@k and f1_score@k in Python; we hope it is a useful reference for developers facing the same problem.
The evaluation functions in sklearn.metrics can only score a single prediction per sample, as shown below:
from sklearn.metrics import classification_report

y_true = [0, 5, 0, 3, 4, 2, 1, 1, 5, 4]
y_pred = [0, 2, 4, 5, 2, 3, 1, 1, 4, 2]
print(classification_report(y_true, y_pred))
However, we often need to evaluate the top-k predictions for each sample, where the model's output for a single sample is a list of candidates ranked by likelihood, as shown below:
y_true = [0, 5, 0, 3, 4, 2, 1, 1, 5, 4]
y_pred = [[0, 0, 2, 1, 5],
          [2, 2, 4, 1, 4],
          [4, 5, 1, 3, 5],
          [5, 4, 2, 4, 3],
          [2, 0, 0, 2, 3],
          [3, 3, 4, 1, 4],
          [1, 1, 0, 1, 2],
          [1, 4, 4, 2, 4],
          [4, 1, 3, 3, 5],
          [2, 4, 2, 2, 3]]
How do we judge how good the model is in this situation? We need metrics such as precision@k, recall@k and f1_score@k. A function that computes these metrics, together with an example, is given below.
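The counting rule behind these metrics: for a class c, a sample counts as a true positive if its true label is c and c appears in its top-k list, as a false positive for c if c appears in the top-k list but the true label is different, and as a false negative if the true label is c but c is missing from the top-k list. The small helper below (topk_counts, an illustration of this rule only, not used by the full function) makes the idea concrete for a single class:

# Illustration: TP / FP / FN counts for one class c under top-k evaluation
def topk_counts(y_true, y_pred, c, k):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and c in p[:k])
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and c in p[:k])
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and c not in p[:k])
    # precision@k = tp / (tp + fp), recall@k = tp / (tp + fn)
    return tp, fp, fn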
import itertools


# Collect every label that appears in either the ground truth or the predictions
def get_unique_labels(y_true, y_pred):
    y_true_set = set(y_true)
    y_pred_set = set(itertools.chain.from_iterable(y_pred))
    unique_label_set = y_true_set | y_pred_set
    unique_label = list(unique_label_set)
    return unique_label


# y_trues: 1d-list-like of true labels
# y_preds: 2d-list-like, one ranked prediction list per sample
# k: evaluate on the top-k predictions (k must not exceed the length of each prediction list)
# digs: number of decimal places in the report
def precision_recall_fscore_k(y_trues, y_preds, k=3, digs=2):
    # Keep only the top-k predictions of each sample
    y_preds = [pred[:k] for pred in y_preds]
    unique_labels = get_unique_labels(y_trues, y_preds)
    num_classes = len(unique_labels)

    # Compute precision, recall, f1-score and support for every class
    results_dict = {}
    results = ''
    for label in unique_labels:
        current_label_result = []
        # TP + FN: samples whose true label is `label`
        tp_fn = y_trues.count(label)
        # TP + FP: samples whose top-k list contains `label`
        tp_fp = 0
        for y_pred in y_preds:
            if label in y_pred:
                tp_fp += 1
        # TP: samples whose true label is `label` and whose top-k list contains it
        tp = 0
        for i in range(len(y_trues)):
            if y_trues[i] == label and label in y_preds[i]:
                tp += 1
        support = tp_fn
        try:
            precision = tp / tp_fp
            recall = tp / tp_fn
            f1_score = 2 * (precision * recall) / (precision + recall)
        except ZeroDivisionError:
            precision = 0
            recall = 0
            f1_score = 0
        current_label_result.append(round(precision, digs))
        current_label_result.append(round(recall, digs))
        current_label_result.append(round(f1_score, digs))
        current_label_result.append(support)
        results_dict[str(label)] = current_label_result

    # Header line of the report
    title = '\t' + 'precision@' + str(k) + '\t' + 'recall@' + str(k) + '\t' + 'f1_score@' + str(k) + '\t' + 'support' + '\n'
    results += title
    # One line per class, sorted by class name
    for name, v in sorted(results_dict.items()):
        current_line = name + '\t' + str(v[0]) + '\t' + str(v[1]) + '\t' + str(v[2]) + '\t' + str(v[3]) + '\n'
        results += current_line

    sums = len(y_trues)
    # Note that macro avg and weighted avg are computed differently
    macro_avg_results = [(v[0], v[1], v[2]) for name, v in sorted(results_dict.items())]
    weighted_avg_results = [(v[0] * v[3], v[1] * v[3], v[2] * v[3]) for name, v in sorted(results_dict.items())]

    # macro avg: unweighted mean over classes
    macro_precision = 0
    macro_recall = 0
    macro_f1_score = 0
    for macro_avg_result in macro_avg_results:
        macro_precision += macro_avg_result[0]
        macro_recall += macro_avg_result[1]
        macro_f1_score += macro_avg_result[2]
    macro_precision /= num_classes
    macro_recall /= num_classes
    macro_f1_score /= num_classes

    # weighted avg: mean over classes weighted by support
    weighted_precision = 0
    weighted_recall = 0
    weighted_f1_score = 0
    for weighted_avg_result in weighted_avg_results:
        weighted_precision += weighted_avg_result[0]
        weighted_recall += weighted_avg_result[1]
        weighted_f1_score += weighted_avg_result[2]
    weighted_precision /= sums
    weighted_recall /= sums
    weighted_f1_score /= sums

    macro_avg_line = 'macro avg' + '\t' + str(round(macro_precision, digs)) + '\t' + str(round(macro_recall, digs)) + '\t' + str(round(macro_f1_score, digs)) + '\t' + str(sums) + '\n'
    weighted_avg_line = 'weighted avg' + '\t' + str(round(weighted_precision, digs)) + '\t' + str(round(weighted_recall, digs)) + '\t' + str(round(weighted_f1_score, digs)) + '\t' + str(sums)
    results += macro_avg_line
    results += weighted_avg_line
    return results


if __name__ == '__main__':
    y_true = [0, 5, 0, 3, 4, 2, 1, 1, 5, 4]
    y_pred = [[0, 3, 2, 1, 5],
              [2, 0, 4, 1, 3],
              [4, 5, 1, 3, 0],
              [5, 4, 2, 0, 3],
              [2, 0, 1, 3, 5],
              [3, 0, 4, 1, 2],
              [1, 0, 4, 2, 3],
              [1, 4, 5, 2, 3],
              [4, 1, 3, 2, 0],
              [2, 0, 1, 3, 4]]
    res = precision_recall_fscore_k(y_true, y_pred, k=5, digs=2)
    print(res)
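As a quick sanity check, setting k=1 keeps only the single most likely prediction per sample, so the per-class numbers should agree with sklearn's classification_report on the flattened top-1 predictions (up to formatting and rounding). The snippet below assumes y_true, y_pred and precision_recall_fscore_k from the example above are in scope:

from sklearn.metrics import classification_report

# Flatten to the single most likely prediction per sample
top1 = [pred[0] for pred in y_pred]
print(classification_report(y_true, top1))
# With k=1, membership in the top-k list reduces to equality,
# so these per-class values should match the report above
print(precision_recall_fscore_k(y_true, y_pred, k=1, digs=2))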
Let us run the function with k=1, k=2, k=3, k=4 and k=5 and look at the results.
With k=1:
With k=3:
With k=5:
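These reports can be reproduced with a short loop (assuming the y_true / y_pred lists and the function defined above are in scope):

# Print the @k report for every k from 1 to 5
for k in range(1, 6):
    print('k = %d' % k)
    print(precision_recall_fscore_k(y_true, y_pred, k=k, digs=2))
    print()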
Let us also look at how precision@k, recall@k and f1_score@k change as k increases:
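In general, recall@k can only go up as k grows, because the true label gets more chances to appear in the top-k list, while precision@k tends to fall because every class is predicted for more samples. A minimal sketch of how to visualize this, assuming matplotlib is installed and that the y_true / y_pred lists and the function above are in scope (it parses the tab-separated 'macro avg' line of each report):

import matplotlib.pyplot as plt

ks = [1, 2, 3, 4, 5]
precisions, recalls, f1_scores = [], [], []
for k in ks:
    report = precision_recall_fscore_k(y_true, y_pred, k=k, digs=4)
    for line in report.splitlines():
        if line.startswith('macro avg'):
            # Fields: 'macro avg', precision, recall, f1_score, support
            fields = line.split('\t')
            precisions.append(float(fields[1]))
            recalls.append(float(fields[2]))
            f1_scores.append(float(fields[3]))

plt.plot(ks, precisions, marker='o', label='precision@k (macro avg)')
plt.plot(ks, recalls, marker='o', label='recall@k (macro avg)')
plt.plot(ks, f1_scores, marker='o', label='f1_score@k (macro avg)')
plt.xlabel('k')
plt.ylabel('score')
plt.legend()
plt.show()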
The following articles were consulted while writing this post:
https://blog.csdn.net/dipizhong7224/article/details/104579159
https://blog.csdn.net/ybdesire/article/details/96507733
That concludes this article on computing precision@k, recall@k and f1_score@k in Python; we hope it proves helpful.