RT-DETR + Sort for Object Tracking

2024-09-02 05:20


In a previous post, I showed how to implement object tracking by combining YOLOv8 with the Sort algorithm. In this post, I will pair the RT-DETR detector with Sort to achieve the same goal.

As before, inference is run from an ONNX model file. Since Sort is a detection-based tracking method, all we need from the detector are its detection results:

import onnxruntime as ort

sess = ort.InferenceSession("detr.onnx", None)
output = sess.run(
    output_names=['labels', 'boxes', 'scores'],
    input_feed={'images': im_data.data.numpy(), "orig_target_sizes": size.data.numpy()},
)
cls, outbox, score = output

The detection results consist of the predicted class labels, the bounding boxes (in (x1, y1, x2, y2) corner form, which is exactly the format Sort consumes later), and their confidence scores.

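For orientation, the three outputs can be inspected directly. The shapes below are a sketch assuming the default RT-DETR export with 300 object queries and batch size 1 (an assumption about the exported model, not something guaranteed by the post):

cls, outbox, score = output
# With a default RT-DETR export (300 queries, batch size 1) one would expect:
print(cls.shape)     # (1, 300)    - predicted class index per query
print(outbox.shape)  # (1, 300, 4) - boxes as (x1, y1, x2, y2)
print(score.shape)   # (1, 300)    - confidence per query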

Note that DETR-family detectors are end-to-end and therefore need no NMS-style post-processing. The raw outputs still have to be filtered, though, and simply thresholding on the confidence score is enough:

outbox = np.squeeze(outbox)
boxindex = np.where(score > thrh)  # np.where returns the indices of entries that satisfy the condition
outbox = outbox[boxindex[1]]  # score has shape (1, N), so the column indices select the surviving boxes


From here on, the procedure is the same as in the YOLO + Sort approach.
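In outline, the shared per-frame loop is: detect, filter by confidence, and hand the surviving boxes to the tracker. A minimal sketch of that loop, where run_detr is a hypothetical helper standing in for the resize/ToTensor/sess.run preprocessing shown in the full listing below, and sess, cap, thrh and mot_tracker are as defined there:

import numpy as np

while True:
    _, frame = cap.read()
    if frame is None:
        break
    # 1. detect: run RT-DETR on the frame (run_detr is a hypothetical wrapper)
    cls, outbox, score = run_detr(sess, frame)
    # 2. filter: keep only boxes above the confidence threshold
    keep = np.where(score > thrh)
    dets = np.squeeze(outbox)[keep[1]]
    # 3. track: Sort returns one [x1, y1, x2, y2, track_id] row per confirmed object
    tracks = mot_tracker.update(dets)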

As for the Sort algorithm itself, there is only one idea to keep straight: Sort does the tracking, and the tracker at its core is a Kalman filter.

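To make Sort's association step concrete, here is a tiny standalone sketch (with made-up boxes) of how detections are matched to the Kalman-predicted track boxes: build an IoU matrix, then solve the assignment with the Hungarian algorithm (scipy's linear_sum_assignment, negating the matrix to maximize total IoU):

import numpy as np
from scipy.optimize import linear_sum_assignment

# Two detections vs. two predicted track boxes, all in (x1, y1, x2, y2)
dets = np.array([[10, 10, 50, 50], [60, 60, 100, 100]])
trks = np.array([[58, 58, 98, 98], [12, 12, 52, 52]])

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

iou_matrix = np.array([[iou(d, t) for t in trks] for d in dets])
rows, cols = linear_sum_assignment(-iou_matrix)  # negate: maximize total IoU
print(list(zip(rows, cols)))  # [(0, 1), (1, 0)]: each detection paired with its track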

The complete RT-DETR + Sort code is as follows:

import cv2
import imageio
import numpy as np
from pathlib import Path
from filterpy.kalman import KalmanFilter
from scipy.optimize import linear_sum_assignment
# from tasks.yolo_track.detr_sort import scale_box  # author-local import; not needed, scale_boxes is defined below


def linear_assignment(cost_matrix):
    x, y = linear_sum_assignment(cost_matrix)
    return np.array(list(zip(x, y)))


def iou_batch(bb_test, bb_gt):
    """From SORT: Computes IOU between two bboxes in the form [x1,y1,x2,y2]"""
    bb_gt = np.expand_dims(bb_gt, 0)
    bb_test = np.expand_dims(bb_test, 1)
    xx1 = np.maximum(bb_test[..., 0], bb_gt[..., 0])
    yy1 = np.maximum(bb_test[..., 1], bb_gt[..., 1])
    xx2 = np.minimum(bb_test[..., 2], bb_gt[..., 2])
    yy2 = np.minimum(bb_test[..., 3], bb_gt[..., 3])
    w = np.maximum(0., xx2 - xx1)
    h = np.maximum(0., yy2 - yy1)
    wh = w * h
    o = wh / ((bb_test[..., 2] - bb_test[..., 0]) * (bb_test[..., 3] - bb_test[..., 1])
              + (bb_gt[..., 2] - bb_gt[..., 0]) * (bb_gt[..., 3] - bb_gt[..., 1]) - wh)
    return o


def convert_bbox_to_z(bbox):
    """Takes a bounding box in the form [x1,y1,x2,y2] and returns z in the form
    [x,y,s,r] where x,y is the centre of the box, s is the scale/area and r is
    the aspect ratio"""
    w = bbox[2] - bbox[0]
    h = bbox[3] - bbox[1]
    x = bbox[0] + w / 2.
    y = bbox[1] + h / 2.
    s = w * h  # scale is just area
    r = w / float(h)
    return np.array([x, y, s, r]).reshape((4, 1))


def convert_x_to_bbox(x):
    """Takes a bounding box in the centre form [x,y,s,r] and returns it in the form
    [x1,y1,x2,y2] where x1,y1 is the top left and x2,y2 is the bottom right"""
    w = np.sqrt(x[2] * x[3])
    h = x[2] / w
    return np.array([x[0] - w / 2., x[1] - h / 2., x[0] + w / 2., x[1] + h / 2.]).reshape((1, 4))


class KalmanBoxTracker(object):
    """This class represents the internal state of individual tracked objects observed as bbox."""
    count = 0

    def __init__(self, bbox):
        """Initialises a tracker using an initial bounding box."""
        # define constant velocity model
        self.kf = KalmanFilter(dim_x=7, dim_z=4)
        self.kf.F = np.array([[1, 0, 0, 0, 1, 0, 0],
                              [0, 1, 0, 0, 0, 1, 0],
                              [0, 0, 1, 0, 0, 0, 1],
                              [0, 0, 0, 1, 0, 0, 0],
                              [0, 0, 0, 0, 1, 0, 0],
                              [0, 0, 0, 0, 0, 1, 0],
                              [0, 0, 0, 0, 0, 0, 1]])
        self.kf.H = np.array([[1, 0, 0, 0, 0, 0, 0],
                              [0, 1, 0, 0, 0, 0, 0],
                              [0, 0, 1, 0, 0, 0, 0],
                              [0, 0, 0, 1, 0, 0, 0]])
        self.kf.R[2:, 2:] *= 10.
        self.kf.P[4:, 4:] *= 1000.  # give high uncertainty to the unobservable initial velocities
        self.kf.P *= 10.
        self.kf.Q[-1, -1] *= 0.01
        self.kf.Q[4:, 4:] *= 0.01
        self.kf.x[:4] = convert_bbox_to_z(bbox)
        self.time_since_update = 0
        self.id = KalmanBoxTracker.count
        KalmanBoxTracker.count += 1
        self.history = []
        self.hits = 0
        self.hit_streak = 0
        self.age = 0

    def update(self, bbox):
        """Updates the state vector with an observed bbox."""
        self.time_since_update = 0
        self.history = []
        self.hits += 1
        self.hit_streak += 1
        self.kf.update(convert_bbox_to_z(bbox))

    def predict(self):
        """Advances the state vector and returns the predicted bounding box estimate."""
        if (self.kf.x[6] + self.kf.x[2]) <= 0:
            self.kf.x[6] *= 0.0
        self.kf.predict()
        self.age += 1
        if self.time_since_update > 0:
            self.hit_streak = 0
        self.time_since_update += 1
        self.history.append(convert_x_to_bbox(self.kf.x))  # record the predicted position in the history
        return self.history[-1]  # the last entry is the current predicted position

    def get_state(self):
        """Returns the current bounding box estimate."""
        return convert_x_to_bbox(self.kf.x)


def associate_detections_to_tracks(detections, trackers, iou_threshold=0.3):
    """Assigns detections to tracked objects (both represented as bounding boxes).
    Returns 3 lists: matches, unmatched_detections and unmatched_trackers."""
    if len(trackers) == 0:
        return np.empty((0, 2), dtype=int), np.arange(len(detections)), np.empty((0, 5), dtype=int)
    # compute pairwise IoU, then call linear_assignment to match
    iou_matrix = iou_batch(detections, trackers)
    if min(iou_matrix.shape) > 0:
        a = (iou_matrix > iou_threshold).astype(np.int32)
        if a.sum(1).max() == 1 and a.sum(0).max() == 1:
            matched_indices = np.stack(np.where(a), axis=1)
        else:
            matched_indices = linear_assignment(-iou_matrix)
    else:
        matched_indices = np.empty(shape=(0, 2))
    # record unmatched detections and unmatched tracks
    unmatched_detections = []
    for d, det in enumerate(detections):
        if d not in matched_indices[:, 0]:
            unmatched_detections.append(d)
    unmatched_trackers = []
    for t, trk in enumerate(trackers):
        if t not in matched_indices[:, 1]:
            unmatched_trackers.append(t)
    # filter out matches with low IoU
    matches = []
    for m in matched_indices:
        if iou_matrix[m[0], m[1]] < iou_threshold:
            unmatched_detections.append(m[0])
            unmatched_trackers.append(m[1])
        else:
            matches.append(m.reshape(1, 2))
    if len(matches) == 0:
        matches = np.empty((0, 2), dtype=int)
    else:
        matches = np.concatenate(matches, axis=0)
    return matches, np.array(unmatched_detections), np.array(unmatched_trackers)


class Sort(object):
    def __init__(self, max_age=1, min_hits=3, iou_threshold=0.3):
        """Sets key parameters for SORT."""
        # max number of consecutive frames a track in self.trackers may be propagated
        # by the Kalman filter alone (no matched detection) before it is dropped
        self.max_age = max_age
        # minimum number of matched updates before a track is reported: a freshly
        # detected target is only added to the tracker list and not displayed yet;
        # keep this small (1 means the target must be detected in two consecutive frames)
        self.min_hits = min_hits
        self.iou_threshold = iou_threshold  # IoU threshold for association
        self.trackers = []  # active trackers
        self.frame_count = 0  # number of frames processed

    def update(self, dets=np.empty((0, 5))):
        """Params:
          dets - a numpy array of detections in the format [[x1,y1,x2,y2,score],[x1,y1,x2,y2,score],...]
        Requires: this method must be called once for each frame even with empty
        detections (use np.empty((0, 5)) for frames without detections).
        Returns a similar array, where the last column is the object ID.
        NOTE: The number of objects returned may differ from the number of detections provided."""
        self.frame_count += 1
        # get predicted locations from existing trackers
        trks = np.zeros((len(self.trackers), 5))
        ret = []
        for t, trk in enumerate(trks):  # empty on the first frame, so this loop is skipped
            pos = self.trackers[t].predict()[0]
            trk[:] = [pos[0], pos[1], pos[2], pos[3], 0]
        # np.ma.masked_invalid masks invalid entries (NaN or inf);
        # np.ma.compress_rows drops every row of a 2-D array that contains a masked value
        trks = np.ma.compress_rows(np.ma.masked_invalid(trks))
        # associate detections with tracks; on the first frame there are only unmatched_dets
        matched, unmatched_dets, unmatched_trks = associate_detections_to_tracks(dets, trks, self.iou_threshold)
        # update matched trackers with their assigned detections, i.e. feed the
        # Kalman filter with fresh measurements (not executed on the first frame)
        for m in matched:
            self.trackers[m[1]].update(dets[m[0], :])
        # create and initialise a new tracker for each unmatched detection
        # (on the first frame nothing is matched, so every detection starts a track)
        for i in unmatched_dets:
            trk = KalmanBoxTracker(dets[i, :])
            self.trackers.append(trk)
        i = len(self.trackers)
        # iterate back to front; report only tracks seen in the current frame whose hit
        # streak is at least self.min_hits (unless tracking has just started); delete
        # trackers whose time since the last update exceeds self.max_age
        for trk in reversed(self.trackers):
            d = trk.get_state()[0]
            if (trk.time_since_update < 1) and (trk.hit_streak >= self.min_hits or self.frame_count <= self.min_hits):  # hit_streak: ignore the first few frames of a new target
                ret.append(np.concatenate((d, [trk.id + 1])).reshape(1, -1))  # +1 as MOT benchmark requires positive IDs
            i -= 1
            if trk.time_since_update > self.max_age:
                self.trackers.pop(i)
        if len(ret) > 0:
            return np.concatenate(ret)
        return np.empty((0, 5))


def scale_boxes(input_shape, boxes, shape):
    # Rescale boxes (xyxy) from input_shape to shape
    gain = min(input_shape[0] / shape[0], input_shape[1] / shape[1])  # gain = old / new
    pad = (input_shape[1] - shape[1] * gain) / 2, (input_shape[0] - shape[0] * gain) / 2  # wh padding
    boxes[..., [0, 2]] = boxes[..., [0, 2]] - pad[0]  # x padding
    boxes[..., [1, 3]] = boxes[..., [1, 3]] - pad[1]  # y padding
    boxes[..., :4] = boxes[..., :4] / gain
    boxes[..., [0, 2]] = boxes[..., [0, 2]].clip(0, shape[1])  # x1, x2
    boxes[..., [1, 3]] = boxes[..., [1, 3]].clip(0, shape[0])  # y1, y2
    return boxes


if __name__ == '__main__':
    import onnxruntime as ort
    import torch
    from PIL import Image
    from torchvision.transforms import ToTensor

    mot_tracker = Sort(max_age=1, min_hits=3, iou_threshold=0.3)  # create instance of the SORT tracker
    colours = np.random.rand(32, 3) * 255  # one colour per track id (modulo 32)
    sess = ort.InferenceSession("detr.onnx", None)
    input_path = "video.mp4"
    cap = cv2.VideoCapture(input_path)
    size = torch.tensor([[640, 640]])
    images = []
    thrh = 0.6
    while True:
        _, image_ori = cap.read()
        if image_ori is None:
            break
        h, w, c = image_ori.shape  # OpenCV images are (height, width, channels)
        image = cv2.resize(image_ori, (640, 640))
        img = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # OpenCV frames are BGR; the model expects RGB
        im_data = ToTensor()(img)[None]
        output = sess.run(
            output_names=['labels', 'boxes', 'scores'],
            input_feed={'images': im_data.data.numpy(), "orig_target_sizes": size.data.numpy()},
        )
        cls, outbox, score = output
        # outbox = scale_boxes((h, w), outbox, (640, 640))  # enable to map boxes back to the original frame size
        outbox = np.squeeze(outbox)
        # outbox = outbox[np.lexsort(outbox[:, ::-1].T)]
        boxindex = np.where(score > thrh)
        outbox = outbox[boxindex[1]]
        trackers = mot_tracker.update(outbox)
        for d in trackers:
            d = d.astype(np.int32)
            colour = tuple(int(c) for c in colours[d[4] % 32])  # cv2 expects a tuple of scalars
            cv2.rectangle(image, (d[0], d[1]), (d[2], d[3]), colour, 1)
            cv2.putText(image, str(d[4]), (d[0], d[1]), 3, 1, (255, 0, 0))
        images.append(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # imageio expects RGB frames
        cv2.waitKey(50)
    imageio.mimsave('output.gif', images, fps=30)
    cap.release()

That concludes this post on implementing object tracking with RT-DETR + Sort.


