[Out-of-Distribution Detection] Entropy Maximization and Meta Classification for Out-of-Distribution Detection: Implementation Notes


Anomaly Segmentation: Project Notes

This document records the application of anomaly detection to semantic segmentation for automated driving.

The main reference is the paper "Entropy Maximization and Meta Classification for Out-of-Distribution Detection in Semantic Segmentation".

Abstract:

Deep neural networks (DNNs) for the semantic segmentation of images are usually trained to operate on a predefined closed set of object classes. This is in contrast to the “open world” setting where DNNs are envisioned to be deployed to. From a functional safety point of view, the ability to detect so-called “out-of-distribution” (OoD) samples, i.e., objects outside of a DNN’s semantic space, is crucial for many applications such as automated driving. We present a two-step procedure for OoD detection. Firstly, we utilize samples from the COCO dataset as OoD proxy and introduce a second training objective to maximize the softmax entropy on these samples. Starting from pretrained semantic segmentation networks we re-train a number of DNNs on different in-distribution datasets and evaluate on completely disjoint OoD datasets. Secondly, we perform a transparent post-processing step to discard false positive OoD samples by so-called “meta classification”. To this end, we apply linear models to a set of hand-crafted metrics derived from the DNN’s softmax probabilities. Our method contributes to safer DNNs with more reliable overall system performance.

Data processing:

The project mainly uses two datasets: COCO and Cityscapes.

COCO(OoD proxy)
pycocotools

Index the required images and annotations, and generate the masks we need.

The paper mainly uses the COCO 2017 segmentation annotations; the dataset is processed with the COCO API, pycocotools.coco.

The official documentation describes it as follows:

The COCO API assists in loading, parsing, and visualizing annotations in COCO. The API supports multiple annotation formats (please see the data format page). For additional details see: CocoApi.m, coco.py, and CocoApi.lua for Matlab, Python, and Lua code, respectively, and also the Python API demo.

Usage notes:

from pycocotools.coco import COCO as coco_tools

Create the tools object; annotation_file is the JSON annotation file from the dataset downloaded from the official COCO website.

tools = coco_tools(annotation_file)
  • getCatIds(catNms)

Get the category ids for the given class names. To build the COCO OoD proxy, images containing classes that also appear in Cityscapes must be excluded.

exclude_classes = ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic light', 'stop sign']
exclude_cat_Ids = tools.getCatIds(catNms = exclude_classes)
# returns a list
# exclude_cat_Ids
# [1, 2, 3, 4, 6, 8, 10, 13]
  • getImgIds(catIds)

Get the ids of all images that contain the given categories.

exclude_img_Ids = []
for cat_Id in exclude_cat_Ids:
    exclude_img_Ids += tools.getImgIds(catIds = cat_Id)
# returns a list
# [262145, 262146, 524291, 262148, 393223, 393224, 524297, 393227, 131084, 393230, 262161, 131089, 524311, 393241, ...]
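Because getImgIds is queried once per excluded category, an image that contains several of those categories appears multiple times in the accumulated list. A minimal dedup sketch; dedup_ids is our own illustrative helper and the ids below are made up, not from a real COCO query:

```python
# Hypothetical helper: drop duplicate ids while keeping first-seen order.
def dedup_ids(ids):
    seen = set()
    unique = []
    for i in ids:
        if i not in seen:
            seen.add(i)
            unique.append(i)
    return unique

# Illustrative ids only.
sample_ids = [262145, 393223, 262145, 524297, 393223]
print(dedup_ids(sample_ids))  # -> [262145, 393223, 524297]
```

A plain set(...) would also deduplicate, but it does not preserve order, which matters if the list is later truncated to a fixed proxy size.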
  • loadImgs(imgid)

Load image info for the given ids; returns a list of dicts.

img = tools.loadImgs(img_Id)[0]
'''
'license':1
'file_name':'000000177284.jpg'
'coco_url':'http://images.cocodataset.org/train2017/000000177284.jpg'
'height':480
'width':640
'date_captured':'2013-11-18 02:58:15'
'flickr_url':'http://farm9.staticflickr.com/8036/8074156186_a7331cbd3b_z.jpg'
'id':177284
len():8
'''
  • getAnnIds(imgIds, iscrowd=None)

Get the annotation ids for the images with the given ids.

  • loadAnns(annids)

Load the annotations with the given ids.

annotations = tools.loadAnns(ann_Ids)
'''
'segmentation':[[122.16, 330.27, 194.59, 225.41, 278.92, 195.14, 289.73, 172.43, 316.76, ...]]
'area':46713.55159999999
'iscrowd':0
'image_id':177284
'bbox':[122.16, 140.0, 370.81, 201.08]
'category_id':22
'id':582827
len():7
'''
  • annToMask(annotation)

Convert an annotation to its binary segmentation mask.
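Putting the calls above together, here is a sketch of producing one binary OoD mask per image. build_ood_mask and merge_masks are our own illustrative helpers, not part of pycocotools; the API calls follow the usage shown above. Running build_ood_mask requires the COCO files on disk, so only merge_masks is exercised on toy data:

```python
import numpy as np

def merge_masks(masks):
    # Elementwise maximum over a list of 0/1 instance masks -> one binary mask.
    return np.maximum.reduce(np.stack(masks))

def build_ood_mask(tools, img_Id):
    # Illustrative pipeline: annotation ids -> annotations -> merged mask.
    ann_Ids = tools.getAnnIds(imgIds=img_Id, iscrowd=None)
    annotations = tools.loadAnns(ann_Ids)
    return merge_masks([tools.annToMask(ann) for ann in annotations])

# merge_masks checked on toy data:
a = np.array([[0, 1], [0, 0]], dtype=np.uint8)
b = np.array([[0, 0], [1, 0]], dtype=np.uint8)
print(merge_masks([a, b]))  # elementwise max: [[0 1], [1 0]]
```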

COCO dataset
class COCO(Dataset):
    train_id_in = 0
    train_id_out = 254
    min_image_size = 480

    def __init__(self, root, split="train", transform=None, shuffle=True, proxy_size=None):
        self.root = root
        self.coco_year = list(filter(None, self.root.split("/")))[-1]
        self.split = split + self.coco_year
        self.images = []
        self.targets = []
        self.transform = transform
        for root, _, filenames in os.walk(os.path.join(self.root, "annotations", "ood_seg_" + self.split)):
            assert self.split in ['train' + self.coco_year, 'val' + self.coco_year]
            for filename in filenames:
                if os.path.splitext(filename)[-1] == '.png':
                    self.targets.append(os.path.join(root, filename))
                    self.images.append(os.path.join(self.root, self.split, filename.split(".")[0] + ".jpg"))
        if shuffle:  # shuffle image/target pairs together
            zipped = list(zip(self.images, self.targets))
            random.shuffle(zipped)
            self.images, self.targets = zip(*zipped)
        if proxy_size is not None:  # take only a subset of COCO as the OoD proxy
            self.images = list(self.images[:int(proxy_size)])
            self.targets = list(self.targets[:int(proxy_size)])

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        image = Image.open(self.images[i]).convert('RGB')
        target = Image.open(self.targets[i]).convert('L')
        if self.transform is not None:
            image, target = self.transform(image, target)
        return image, target

    def __repr__(self):
        fmt_str = 'Number of COCO Images: %d\n' % len(self.images)
        return fmt_str.strip()
np.array(coco[0][1])
array([[  0,   0,   0, ...,   0,   0,   0],
       [  0,   0,   0, ...,   0,   0,   0],
       [  0,   0,   0, ...,   0,   0,   0],
       ...,
       [254, 254, 254, ...,   0,   0,   0],
       [254, 254, 254, ...,   0,   0,   0],
       [  0,   0, 254, ...,   0,   0,   0]], dtype=uint8)

After processing, the COCO targets contain only the values 0 and 254: 0 where there is no mask, and 254 where a mask (an OoD object) is present.

Cityscapes
Cityscapes dataset
class Cityscapes(Dataset):
    CityscapesClass = namedtuple('CityscapesClass', ['name', 'id', 'train_id', 'category', 'category_id',
                                                     'has_instances', 'ignore_in_eval', 'color'])
    labels = [
        #                name          id   trainId  category  catId  hasInstances  ignoreInEval  color
        CityscapesClass( 'unlabeled',  0,   255,     'void',   0,     False,        True,         (0, 0, 0)),
        ...
    ]
    mean = (0.485, 0.456, 0.406)
    std = (0.229, 0.224, 0.225)

    ignore_in_eval_ids, label_ids, train_ids, train_id2id = [], [], [], []  # empty lists for storing ids
    color_palette_train_ids = [(0, 0, 0) for i in range(256)]
    for i in range(len(labels)):
        if labels[i].ignore_in_eval and labels[i].train_id not in ignore_in_eval_ids:
            ignore_in_eval_ids.append(labels[i].train_id)  # train ids that are ignored in eval
    for i in range(len(labels)):
        label_ids.append(labels[i].id)
        if labels[i].train_id not in ignore_in_eval_ids:
            train_ids.append(labels[i].train_id)
            color_palette_train_ids[labels[i].train_id] = labels[i].color
            train_id2id.append(labels[i].id)
    num_label_ids = len(set(label_ids))  # all classes
    num_train_ids = len(set(train_ids))  # classes used in eval
    id2label = {label.id: label for label in labels}
    train_id2label = {label.train_id: label for label in labels}

    def __init__(self, root="/home/datasets/cityscapes/", split="val", mode="gtFine",
                 target_type="semantic_id", transform=None, predictions_root=None) -> None:
        self.root = root
        self.split = split
        self.mode = 'gtFine' if "fine" in mode.lower() else 'gtCoarse'  # fine or coarse annotations
        self.transform = transform
        self.images_dir = os.path.join(self.root, 'leftImg8bit', self.split)
        self.targets_dir = os.path.join(self.root, self.mode, self.split)
        self.predictions_dir = os.path.join(predictions_root, self.split) if predictions_root is not None else ""
        self.images = []
        self.targets = []
        self.predictions = []
        for city in os.listdir(self.images_dir):
            img_dir = os.path.join(self.images_dir, city)
            target_dir = os.path.join(self.targets_dir, city)
            pred_dir = os.path.join(self.predictions_dir, city)
            for file_name in os.listdir(img_dir):
                target_name = '{}_{}'.format(file_name.split('_leftImg8bit')[0],
                                             self._get_target_suffix(self.mode, target_type))
                self.images.append(os.path.join(img_dir, file_name))
                self.targets.append(os.path.join(target_dir, target_name))
                self.predictions.append(os.path.join(pred_dir, file_name.replace("_leftImg8bit", "")))

    def __getitem__(self, index):
        image = Image.open(self.images[index]).convert('RGB')
        if self.split in ['train', 'val']:
            target = Image.open(self.targets[index])
        else:
            target = None
        if self.transform is not None:
            image, target = self.transform(image, target)
        return image, target

    def __len__(self):
        return len(self.images)
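The id bookkeeping in the Cityscapes class can be checked on a toy label list. Only two entries are used here for illustration (the real Cityscapes table has 34 rows); the values for 'road' match the official Cityscapes label definitions:

```python
from collections import namedtuple

CityscapesClass = namedtuple('CityscapesClass',
                             ['name', 'id', 'train_id', 'category', 'category_id',
                              'has_instances', 'ignore_in_eval', 'color'])

# Two entries only, for illustration.
labels = [
    CityscapesClass('unlabeled', 0, 255, 'void', 0, False, True, (0, 0, 0)),
    CityscapesClass('road', 7, 0, 'flat', 1, False, False, (128, 64, 128)),
]

# Same logic as the class body above, written as comprehensions.
ignore_in_eval_ids = [l.train_id for l in labels if l.ignore_in_eval]
train_ids = [l.train_id for l in labels if l.train_id not in ignore_in_eval_ids]
id2label = {l.id: l for l in labels}

print(ignore_in_eval_ids)  # -> [255]
print(train_ids)           # -> [0]
print(id2label[7].name)    # -> road
```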
Target encode:
def encode_target(target, pareto_alpha, num_classes, ignore_train_ind, ood_ind=254):
    """encode target tensor with all hot encoding for OoD samples
    :param target: torch tensor
    :param pareto_alpha: OoD loss weight
    :param num_classes: number of classes in original task
    :param ignore_train_ind: void class in original task
    :param ood_ind: class label corresponding to OoD class
    :return: one/all hot encoded torch tensor
    """
    npy = target.numpy()
    npz = npy.copy()
    npy[np.isin(npy, ood_ind)] = num_classes  # 19
    npy[np.isin(npy, ignore_train_ind)] = num_classes + 1  # 20
    enc = np.eye(num_classes + 2)[npy][..., :-2]  # one hot encoding with last 2 axes cut off
    enc[(npy == num_classes)] = np.full(num_classes, pareto_alpha / num_classes)  # set all hot encoded vector
    enc[(enc == 1)] = 1 - pareto_alpha  # convex combination between in- and out-of-distribution samples
    enc[np.isin(npz, ignore_train_ind)] = np.zeros(num_classes)
    enc = torch.from_numpy(enc)
    enc = enc.permute(0, 3, 1, 2).contiguous()
    return enc
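To see what encode_target produces, here is a simplified numpy-only re-implementation (our own sketch: the torch conversion and the NCHW permute are dropped, everything else mirrors the function above), applied to a toy 1x3 target:

```python
import numpy as np

def encode_target_np(target, pareto_alpha, num_classes, ignore_train_ind, ood_ind=254):
    # Numpy-only sketch of encode_target: no torch tensor, no NCHW permute.
    npy = target.copy()
    npz = npy.copy()
    npy[np.isin(npy, ood_ind)] = num_classes
    npy[np.isin(npy, ignore_train_ind)] = num_classes + 1
    enc = np.eye(num_classes + 2)[npy][..., :-2]   # one-hot, last 2 channels cut off
    enc[npy == num_classes] = np.full(num_classes, pareto_alpha / num_classes)
    enc[enc == 1] = 1 - pareto_alpha
    enc[np.isin(npz, ignore_train_ind)] = np.zeros(num_classes)
    return enc

# Toy target: one in-distribution pixel (class 2), one OoD pixel (254), one void pixel (255).
target = np.array([[2, 254, 255]])
enc = encode_target_np(target, pareto_alpha=0.9, num_classes=19, ignore_train_ind=255)
assert np.isclose(enc[0, 0, 2], 1 - 0.9)   # in-dist: mass 1 - alpha on the true class
assert np.isclose(enc[0, 1].sum(), 0.9)    # OoD: flat alpha / 19 spread over all classes
assert enc[0, 2].sum() == 0                # void: all-zero target vector
```

This makes the three cases explicit: in-distribution pixels keep a (scaled) one-hot target, OoD pixels get the uniform "all-hot" target that maximizing softmax entropy corresponds to, and void pixels contribute no loss.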




