MMSegmentation Improvement: Adding a Kappa Coefficient Evaluation Metric

2024-06-12 00:12

This article describes how to extend MMSegmentation with a Kappa coefficient evaluation metric; hopefully it serves as a useful reference for developers solving the same problem.

Simply replace the contents of mmseg\evaluation\metrics\iou_metric.py with the code below.

The modified metric reports both the per-class Kappa coefficient and the mean Kappa coefficient.

Usage: in the dataset config file, add 'mKappa' to val_evaluator, e.g. val_evaluator = dict(type='mmseg.IoUMetric', iou_metrics=['mFscore', 'mIoU', 'mKappa']).
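For context, here is a minimal sketch of the evaluator section of such a config. Only the iou_metrics list changes relative to a stock config; the test_evaluator line is an assumption following the common MMSegmentation convention of reusing the validation evaluator at test time.

# Sketch of the evaluator section of a dataset config: only the
# iou_metrics list changes relative to a stock MMSegmentation config.
val_evaluator = dict(
    type='mmseg.IoUMetric',
    iou_metrics=['mFscore', 'mIoU', 'mKappa'])
# Test-time evaluation can usually reuse the same evaluator (assumption).
test_evaluator = val_evaluator

With this in place, validation prints a Kappa column in the per-class results table and an mKappa entry in the summary metrics.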

Feel free to follow 大地主 on CSDN and ABCnutter (github.com); more content is on the way.

# Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp
from collections import OrderedDict
from typing import Dict, List, Optional, Sequence

import numpy as np
import torch
from mmengine.dist import is_main_process
from mmengine.evaluator import BaseMetric
from mmengine.logging import MMLogger, print_log
from mmengine.utils import mkdir_or_exist
from PIL import Image
from prettytable import PrettyTable

from mmseg.registry import METRICS


@METRICS.register_module()
class IoUMetric(BaseMetric):
    """IoU evaluation metric.

    Args:
        ignore_index (int): Index that will be ignored in evaluation.
            Default: 255.
        iou_metrics (list[str] | str): Metrics to be calculated, the options
            include 'mIoU', 'mDice', 'mFscore', and 'mKappa'.
        nan_to_num (int, optional): If specified, NaN values will be replaced
            by the numbers defined by the user. Default: None.
        beta (int): Determines the weight of recall in the combined score.
            Default: 1.
        collect_device (str): Device name used for collecting results from
            different ranks during distributed training. Must be 'cpu' or
            'gpu'. Defaults to 'cpu'.
        output_dir (str): The directory for output prediction. Defaults to
            None.
        format_only (bool): Only format result for results commit without
            perform evaluation. It is useful when you want to save the result
            to a specific format and submit it to the test server.
            Defaults to False.
        prefix (str, optional): The prefix that will be added in the metric
            names to disambiguate homonymous metrics of different evaluators.
            If prefix is not provided in the argument, self.default_prefix
            will be used instead. Defaults to None.
    """

    def __init__(self,
                 ignore_index: int = 255,
                 iou_metrics: List[str] = ['mIoU'],
                 nan_to_num: Optional[int] = None,
                 beta: int = 1,
                 collect_device: str = 'cpu',
                 output_dir: Optional[str] = None,
                 format_only: bool = False,
                 prefix: Optional[str] = None,
                 **kwargs) -> None:
        super().__init__(collect_device=collect_device, prefix=prefix)

        self.ignore_index = ignore_index
        self.metrics = iou_metrics
        self.nan_to_num = nan_to_num
        self.beta = beta
        self.output_dir = output_dir
        if self.output_dir and is_main_process():
            mkdir_or_exist(self.output_dir)
        self.format_only = format_only

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        """Process one batch of data and data_samples.

        The processed results should be stored in ``self.results``, which will
        be used to compute the metrics when all batches have been processed.

        Args:
            data_batch (dict): A batch of data from the dataloader.
            data_samples (Sequence[dict]): A batch of outputs from the model.
        """
        num_classes = len(self.dataset_meta['classes'])
        for data_sample in data_samples:
            pred_label = data_sample['pred_sem_seg']['data'].squeeze()
            # format_only always for test dataset without ground truth
            if not self.format_only:
                label = data_sample['gt_sem_seg']['data'].squeeze().to(
                    pred_label)
                self.results.append(
                    self.intersect_and_union(pred_label, label, num_classes,
                                             self.ignore_index))
            # format_result
            if self.output_dir is not None:
                basename = osp.splitext(
                    osp.basename(data_sample['img_path']))[0]
                png_filename = osp.abspath(
                    osp.join(self.output_dir, f'{basename}.png'))
                output_mask = pred_label.cpu().numpy()
                # The index range of official ADE20k dataset is from 0 to 150.
                # But the index range of output is from 0 to 149.
                # That is because we set reduce_zero_label=True.
                if data_sample.get('reduce_zero_label', False):
                    output_mask = output_mask + 1
                output = Image.fromarray(output_mask.astype(np.uint8))
                output.save(png_filename)

    def compute_metrics(self, results: list) -> Dict[str, float]:
        """Compute the metrics from processed results.

        Args:
            results (list): The processed results of each batch.

        Returns:
            Dict[str, float]: The computed metrics. The keys are the names of
                the metrics, and the values are corresponding results. The key
                mainly includes aAcc, mIoU, mAcc, mDice, mFscore, mPrecision,
                mRecall, and mKappa.
        """
        logger: MMLogger = MMLogger.get_current_instance()
        if self.format_only:
            logger.info(
                f'results are saved to {osp.dirname(self.output_dir)}')
            return OrderedDict()
        # convert list of tuples to tuple of lists, e.g.
        # [(A_1, B_1, C_1, D_1), ...,  (A_n, B_n, C_n, D_n)] to
        # ([A_1, ..., A_n], ..., [D_1, ..., D_n])
        results = tuple(zip(*results))
        assert len(results) == 4

        total_area_intersect = sum(results[0])
        total_area_union = sum(results[1])
        total_area_pred_label = sum(results[2])
        total_area_label = sum(results[3])
        ret_metrics = self.total_area_to_metrics(
            total_area_intersect, total_area_union, total_area_pred_label,
            total_area_label, self.metrics, self.nan_to_num, self.beta)

        class_names = self.dataset_meta['classes']

        # summary table
        ret_metrics_summary = OrderedDict({
            ret_metric: np.round(np.nanmean(ret_metric_value) * 100, 2)
            for ret_metric, ret_metric_value in ret_metrics.items()
        })
        metrics = dict()
        for key, val in ret_metrics_summary.items():
            if key == 'aAcc':
                metrics[key] = val
            else:
                metrics['m' + key] = val

        # each class table
        ret_metrics.pop('aAcc', None)
        # ret_metrics.pop('Kappa', None)
        ret_metrics_class = OrderedDict({
            ret_metric: np.round(ret_metric_value * 100, 2)
            for ret_metric, ret_metric_value in ret_metrics.items()
        })
        ret_metrics_class.update({'Class': class_names})
        ret_metrics_class.move_to_end('Class', last=False)
        class_table_data = PrettyTable()
        for key, val in ret_metrics_class.items():
            class_table_data.add_column(key, val)

        print_log('per class results:', logger)
        print_log('\n' + class_table_data.get_string(), logger=logger)

        return metrics

    @staticmethod
    def intersect_and_union(pred_label: torch.tensor, label: torch.tensor,
                            num_classes: int, ignore_index: int):
        """Calculate Intersection and Union.

        Args:
            pred_label (torch.tensor): Prediction segmentation map
                or predict result filename. The shape is (H, W).
            label (torch.tensor): Ground truth segmentation map
                or label filename. The shape is (H, W).
            num_classes (int): Number of categories.
            ignore_index (int): Index that will be ignored in evaluation.

        Returns:
            torch.Tensor: The intersection of prediction and ground truth
                histogram on all classes.
            torch.Tensor: The union of prediction and ground truth histogram
                on all classes.
            torch.Tensor: The prediction histogram on all classes.
            torch.Tensor: The ground truth histogram on all classes.
        """

        mask = (label != ignore_index)
        pred_label = pred_label[mask]
        label = label[mask]

        intersect = pred_label[pred_label == label]
        area_intersect = torch.histc(
            intersect.float(), bins=(num_classes), min=0,
            max=num_classes - 1).cpu()
        area_pred_label = torch.histc(
            pred_label.float(), bins=(num_classes), min=0,
            max=num_classes - 1).cpu()
        area_label = torch.histc(
            label.float(), bins=(num_classes), min=0,
            max=num_classes - 1).cpu()
        area_union = area_pred_label + area_label - area_intersect
        return area_intersect, area_union, area_pred_label, area_label

    @staticmethod
    def total_area_to_metrics(total_area_intersect: np.ndarray,
                              total_area_union: np.ndarray,
                              total_area_pred_label: np.ndarray,
                              total_area_label: np.ndarray,
                              metrics: List[str] = ['mIoU'],
                              nan_to_num: Optional[int] = None,
                              beta: int = 1):
        """Calculate evaluation metrics.

        Args:
            total_area_intersect (np.ndarray): The intersection of prediction
                and ground truth histogram on all classes.
            total_area_union (np.ndarray): The union of prediction and ground
                truth histogram on all classes.
            total_area_pred_label (np.ndarray): The prediction histogram on
                all classes.
            total_area_label (np.ndarray): The ground truth histogram on
                all classes.
            metrics (List[str] | str): Metrics to be evaluated, 'mIoU',
                'mDice', 'mFscore', and 'mKappa'.
            nan_to_num (int, optional): If specified, NaN values will be
                replaced by the numbers defined by the user. Default: None.
            beta (int): Determines the weight of recall in the combined score.
                Default: 1.

        Returns:
            Dict[str, np.ndarray]: per category evaluation metrics,
                shape (num_classes, ).
        """

        def f_score(precision, recall, beta=1):
            """Calculate the f-score value.

            Args:
                precision (float | torch.Tensor): The precision value.
                recall (float | torch.Tensor): The recall value.
                beta (int): Determines the weight of recall in the combined
                    score. Default: 1.

            Returns:
                [torch.tensor]: The f-score value.
            """
            score = (1 + beta**2) * (precision * recall) / (
                (beta**2 * precision) + recall)
            return score

        if isinstance(metrics, str):
            metrics = [metrics]
        allowed_metrics = ['mIoU', 'mDice', 'mFscore', 'mKappa']
        if not set(metrics).issubset(set(allowed_metrics)):
            raise KeyError(f'metrics {metrics} is not supported')

        all_acc = total_area_intersect.sum() / total_area_label.sum()
        ret_metrics = OrderedDict({'aAcc': all_acc})
        for metric in metrics:
            if metric == 'mIoU':
                iou = total_area_intersect / total_area_union
                acc = total_area_intersect / total_area_label
                ret_metrics['IoU'] = iou
                ret_metrics['Acc'] = acc
            elif metric == 'mDice':
                dice = 2 * total_area_intersect / (
                    total_area_pred_label + total_area_label)
                acc = total_area_intersect / total_area_label
                ret_metrics['Dice'] = dice
                ret_metrics['Acc'] = acc
            elif metric == 'mFscore':
                precision = total_area_intersect / total_area_pred_label
                recall = total_area_intersect / total_area_label
                f_value = torch.tensor([
                    f_score(x[0], x[1], beta)
                    for x in zip(precision, recall)
                ])
                ret_metrics['Fscore'] = f_value
                ret_metrics['Precision'] = precision
                ret_metrics['Recall'] = recall
            elif metric == 'mKappa':
                # Per-class Kappa: po is the observed per-class agreement
                # (recall) and pe is the agreement expected by chance from
                # the prediction and ground-truth marginals.
                total = total_area_label.sum()
                po = total_area_intersect / total_area_label
                pe = (total_area_pred_label * total_area_label) / (total ** 2)
                kappa = (po - pe) / (1 - pe)
                ret_metrics['Kappa'] = kappa

        ret_metrics = {
            metric: value.numpy() if isinstance(value, torch.Tensor) else value
            for metric, value in ret_metrics.items()
        }
        if nan_to_num is not None:
            ret_metrics = OrderedDict({
                metric: np.nan_to_num(metric_value, nan=nan_to_num)
                for metric, metric_value in ret_metrics.items()
            })
        return ret_metrics
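A quick note on what the mKappa branch computes. Cohen's kappa has the general form kappa = (po - pe) / (1 - pe), where po is the observed agreement and pe is the agreement expected by chance. The code above evaluates this per class: po is the class recall (intersect / label) and pe is the product of the prediction and ground-truth marginals divided by the squared pixel total; the reported mKappa is then the mean over classes. The standalone sketch below mirrors that branch on made-up class histograms, so the arithmetic can be sanity-checked outside the metric class; all numbers are illustrative only.

import numpy as np

# Toy per-class pixel histograms for a 3-class problem (made-up numbers):
# intersect[c] = pixels where prediction and label both equal class c,
# pred[c] / label[c] = pixels predicted as / labelled as class c.
intersect = np.array([80., 45., 60.])
pred = np.array([100., 50., 70.])
label = np.array([90., 60., 70.])

total = label.sum()
po = intersect / label             # per-class observed agreement (recall)
pe = (pred * label) / total ** 2   # per-class chance agreement from marginals
kappa = (po - pe) / (1 - pe)

print('per-class Kappa:', np.round(kappa, 4))
# Matches the summary-table scaling in compute_metrics (nanmean * 100).
print('mKappa:', np.round(np.nanmean(kappa) * 100, 2))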

That concludes this article on adding a Kappa coefficient evaluation metric to MMSegmentation; hopefully it is helpful.


