This article introduces 3D distance measurement with YOLOv5 and a stereo (binocular) camera (new version). I hope it serves as a useful reference for developers tackling the same problem; follow along below!
Table of Contents
- YOLOv5 + Stereo Camera 3D Distance Measurement (New Version)
- 1. Project Pipeline
- 2. Ranging Principle
- 3. Steps and Code Walkthrough
- 4. Real-Time Detection
- 5. Training
- 6. Source Code Download
YOLOv5 + Stereo Camera 3D Distance Measurement (New Version)
This article makes several improvements to an earlier post and addresses problems readers ran into while reproducing it. The complete code is linked at the end of the article.
1. Project Pipeline
- YOLOv5 detects targets and extracts the pixel coordinates of each bounding-box center
- The stereo pipeline converts those pixel coordinates into 3D coordinates with depth (see the sketch below)
- The distance is computed from the 3D coordinates
- The depth information is drawn onto the output frame
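The last two steps boil down to reprojecting a disparity map into 3D and taking the Euclidean norm at the box center. A minimal sketch, assuming a rectified image pair and the Q matrix produced by cv2.stereoRectify (the helper name center_distance_m is mine, not the project's API):

```python
import cv2
import numpy as np

def center_distance_m(disparity, Q, x_0, y_0):
    """Distance (m) from the left camera to pixel (x_0, y_0)."""
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 array, in mm here
    x, y, z = points_3d[y_0, x_0] / 1000.0            # mm -> m
    return float(np.sqrt(x ** 2 + y ** 2 + z ** 2))
```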
2. Ranging Principle
For the theory behind stereo ranging, see the article 双目三维测距(python).
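In brief, for a rectified stereo pair the depth of a point is Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. As a worked example with the calibration used below (f ≈ 1101 px, B ≈ 119.99 mm), a disparity of 66 px gives Z ≈ 1101 × 119.99 / 66 ≈ 2002 mm, i.e. about 2 m.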
3. Steps and Code Walkthrough
Download the YOLOv5 v6.1 source code. The previous post used v5.0, which is old enough to cause quite a few problems, so this version upgrades it. Create a file named detect-01.py; parts of its code are explained below.
Stereo camera parameters: stereoconfig.py
The smaller the stereo calibration error, the better. Mine here is 0.1; try to keep it below 0.2.
```python
import numpy as np

# Stereo camera parameters
class stereoCamera(object):
    def __init__(self):
        # Left/right camera intrinsic matrices
        self.cam_matrix_left = np.array([[1101.89299, 0, 1119.89634],
                                         [0, 1100.75252, 636.75282],
                                         [0, 0, 1]])
        self.cam_matrix_right = np.array([[1091.11026, 0, 1117.16592],
                                          [0, 1090.53772, 633.28256],
                                          [0, 0, 1]])
        # Distortion coefficients [k1, k2, p1, p2, k3]
        self.distortion_l = np.array([[-0.08369, 0.05367, -0.00138, -0.0009, 0]])
        self.distortion_r = np.array([[-0.09585, 0.07391, -0.00065, -0.00083, 0]])
        # Rotation matrix between the two cameras
        self.R = np.array([[1.0000, -0.000603116945856524, 0.00377055351856816],
                           [0.000608108737333211, 1.0000, -0.00132288199083992],
                           [-0.00376975166958581, 0.00132516525298933, 1.0000]])
        # Translation vector (mm); the baseline is its x component
        self.T = np.array([[-119.99423], [-0.22807], [0.18540]])
        self.baseline = 119.99423
```
Walkthrough: distance computation
Here I simply take the depth at the center point of the detection box as the distance. You could instead loop over a small window around the center and use the mean or median as the depth value (see the sketch after this snippet).
```python
if (accel_frame % fps_set == 0):
    t3 = time.time()
    thread.join()                    # wait for the stereo-matching thread
    points_3d = thread.get_result()  # HxWx3 array of 3D points (mm)
    t4 = time.time()
    a = points_3d[int(y_0), int(x_0), 0] / 1000  # mm -> m
    b = points_3d[int(y_0), int(x_0), 1] / 1000
    c = points_3d[int(y_0), int(x_0), 2] / 1000
    dis = ((a ** 2 + b ** 2 + c ** 2) ** 0.5)    # Euclidean distance
```
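As a sketch of the mean/median alternative mentioned above (robust_depth and the window size k are hypothetical names of mine, not part of the project code):

```python
import numpy as np

def robust_depth(points_3d, x_0, y_0, k=3):
    """Median distance (m) over a (2k+1)x(2k+1) window around the box center."""
    h, w = points_3d.shape[:2]
    patch = points_3d[max(y_0 - k, 0):min(y_0 + k + 1, h),
                      max(x_0 - k, 0):min(x_0 + k + 1, w), :].reshape(-1, 3) / 1000.0  # mm -> m
    dists = np.linalg.norm(patch, axis=1)
    dists = dists[np.isfinite(dists) & (dists > 0)]  # drop holes in the disparity map
    return float(np.median(dists)) if dists.size else 0.0
```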
Main script detect-01.py
Multithreading has been added to overlap stereo matching with detection and speed up processing.
```python
import argparse
import os
import sys
from pathlib import Path

import cv2
import torch
import torch.backends.cudnn as cudnn

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.common import DetectMultiBackend
from utils.datasets import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr,
                           increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
from utils.plots import Annotator, colors, save_one_box
from utils.torch_utils import select_device, time_sync
from stereo.dianyuntu_yolo import preprocess, undistortion, getRectifyTransform, draw_line, rectifyImage, \
    stereoMatchSGBM
from stereo import stereoconfig_040_2
from stereo.stereo import stereo_threading, MyThread


@torch.no_grad()
def run(weights=ROOT / 'yolov5s.pt',  # model.pt path(s)
        source=ROOT / 'data/images',  # file/dir/URL/glob, 0 for webcam
        data=ROOT / 'data/coco128.yaml',  # dataset.yaml path
        imgsz=(640, 640),  # inference size (height, width)
        conf_thres=0.25,  # confidence threshold
        iou_thres=0.45,  # NMS IOU threshold
        max_det=1000,  # maximum detections per image
        device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        view_img=False,  # show results
        save_txt=False,  # save results to *.txt
        save_conf=False,  # save confidences in --save-txt labels
        save_crop=False,  # save cropped prediction boxes
        nosave=False,  # do not save images/videos
        classes=None,  # filter by class: --class 0, or --class 0 2 3
        agnostic_nms=False,  # class-agnostic NMS
        augment=False,  # augmented inference
        visualize=False,  # visualize features
        update=False,  # update all models
        project=ROOT / 'runs/detect',  # save results to project/name
        name='exp',  # save results to project/name
        exist_ok=False,  # existing project/name ok, do not increment
        line_thickness=3,  # bounding box thickness (pixels)
        hide_labels=False,  # hide labels
        hide_conf=False,  # hide confidences
        half=False,  # use FP16 half-precision inference
        dnn=False,  # use OpenCV DNN for ONNX inference
        ):
    source = str(source)
    save_img = not nosave and not source.endswith('.txt')  # save inference images
    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
    is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
    webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
    if is_url and is_file:
        source = check_file(source)  # download

    # Directories
    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run
    (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Load model
    device = select_device(device)
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data)
    stride, names, pt, jit, onnx, engine = model.stride, model.names, model.pt, model.jit, model.onnx, model.engine
    imgsz = check_img_size(imgsz, s=stride)  # check image size

    # Half
    half &= (pt or jit or onnx or engine) and device.type != 'cpu'  # FP16 supported on limited backends with CUDA
    if pt or jit:
        model.model.half() if half else model.model.float()

    # Dataloader
    if webcam:
        view_img = check_imshow()
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt)
        bs = len(dataset)  # batch_size
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt)
        bs = 1  # batch_size
    vid_path, vid_writer = [None] * bs, [None] * bs

    # Run inference
    model.warmup(imgsz=(1 if pt else bs, 3, *imgsz), half=half)  # warmup
    dt, seen = [0.0, 0.0, 0.0], 0

    config = stereoconfig_040_2.stereoCamera()
    # Stereo rectification
    map1x, map1y, map2x, map2y, Q = getRectifyTransform(720, 1280, config)

    for path, im, im0s, vid_cap, s in dataset:
        t1 = time_sync()
        im = torch.from_numpy(im).to(device)
        im = im.half() if half else im.float()  # uint8 to fp16/32
        im /= 255  # 0 - 255 to 0.0 - 1.0
        if len(im.shape) == 3:
            im = im[None]  # expand for batch dim
        t2 = time_sync()
        dt[0] += t2 - t1

        # Inference
        visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
        pred = model(im, augment=augment, visualize=visualize)
        t3 = time_sync()
        dt[1] += t3 - t2

        # NMS
        pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
        dt[2] += time_sync() - t3

        # Second-stage classifier (optional)
        # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)

        # Process predictions
        for i, det in enumerate(pred):  # per image
            seen += 1
            if webcam:  # batch_size >= 1
                p, im0, frame = path[i], im0s[i].copy(), dataset.count
                s += f'{i}: '
            else:
                p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)

            ################################################ start ##############################################
            thread = MyThread(stereo_threading, args=(config, im0, map1x, map1y, map2x, map2y, Q))
            thread.start()

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # im.jpg
            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # im.txt
            s += '%gx%g ' % im.shape[2:]  # print string
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            imc = im0.copy() if save_crop else im0  # for save_crop
            annotator = Annotator(im0, line_width=line_thickness, example=str(names))
            if len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    if (0 < xyxy[2] < 1280):
                        if save_txt:  # Write to file
                            xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                            line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                            with open(txt_path + '.txt', 'a') as f:
                                f.write(('%g ' * len(line)).rstrip() % line + '\n')

                        if save_img or save_crop or view_img:  # Add bbox to image
                            x_center = (xyxy[0] + xyxy[2]) / 2
                            y_center = (xyxy[1] + xyxy[3]) / 2
                            x_0 = int(x_center)
                            y_0 = int(y_center)
                            if (0 < x_0 < 1280):
                                x1 = xyxy[0]
                                x2 = xyxy[2]
                                y1 = xyxy[1]
                                y2 = xyxy[3]
                                thread.join()
                                points_3d = thread.get_result()
                                a = points_3d[int(y_0), int(x_0), 0] / 1000  # mm -> m
                                b = points_3d[int(y_0), int(x_0), 1] / 1000
                                c = points_3d[int(y_0), int(x_0), 2] / 1000
                                distance = ((a ** 2 + b ** 2 + c ** 2) ** 0.5)
                                if (distance != 0):  # Add bbox to image
                                    c = int(cls)  # integer class
                                    label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                                    annotator.box_label(xyxy, label, color=colors(c, True))
                                    print('Point (%d, %d): %s is %0.2f m from the left camera' % (x_center, y_center, label, distance))
                                    text_dis_avg = "dis:%0.2fm" % distance
                                    # only put dis on frame
                                    cv2.putText(im0, text_dis_avg, (int(x1 + (x2 - x1) + 5), int(y1 + 30)),
                                                cv2.FONT_ITALIC, 1.2, (255, 255, 255), 3)
                                    if save_crop:
                                        save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

            # Stream results
            im0 = annotator.result()
            if view_img:
                cv2.namedWindow("Webcam", cv2.WINDOW_NORMAL)
                cv2.resizeWindow("Webcam", 1280, 480)
                cv2.moveWindow("Webcam", 0, 100)
                cv2.imshow("Webcam", im0)
                cv2.waitKey(1)
                # cv2.imshow(str(p), im0)
                # cv2.waitKey(1)  # 1 millisecond

            # Save results (image with detections)
            if save_img:
                if dataset.mode == 'image':
                    cv2.imwrite(save_path, im0)
                else:  # 'video' or 'stream'
                    if vid_path[i] != save_path:  # new video
                        vid_path[i] = save_path
                        if isinstance(vid_writer[i], cv2.VideoWriter):
                            vid_writer[i].release()  # release previous video writer
                        if vid_cap:  # video
                            fps = vid_cap.get(cv2.CAP_PROP_FPS)
                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        else:  # stream
                            fps, w, h = 30, im0.shape[1], im0.shape[0]
                        save_path = str(Path(save_path).with_suffix('.mp4'))  # force *.mp4 suffix on results videos
                        vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
                    vid_writer[i].write(im0)

            # Print time (inference-only)
            LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')

    # Print results
    t = tuple(x / seen * 1E3 for x in dt)  # speeds per image
    LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
    if save_txt or save_img:
        s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
        LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
    if update:
        strip_optimizer(weights)  # update model (to fix SourceChangeWarning)


def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')
    parser.add_argument('--source', type=str, default=ROOT / 'data/images/a1.mp4', help='file/dir/URL/glob, 0 for webcam')
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
    parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
    parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
    parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--view-img', action='store_true', help='show results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
    parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--visualize', action='store_true', help='visualize features')
    parser.add_argument('--update', action='store_true', help='update all models')
    parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')
    parser.add_argument('--name', default='exp', help='save results to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
    parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
    parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
    opt = parser.parse_args()
    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand
    print_args(FILE.stem, opt)
    return opt


def main(opt):
    check_requirements(exclude=('tensorboard', 'thop'))
    run(**vars(opt))


if __name__ == "__main__":
    opt = parse_opt()
    main(opt)
```
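MyThread above is a small Thread subclass that keeps its target's return value, so the main loop can collect the 3D point cloud with get_result() after join(). A minimal sketch of such a wrapper, assuming the project's own version in stereo/stereo.py behaves similarly:

```python
import threading

class MyThread(threading.Thread):
    """Thread that stores the return value of its target function."""

    def __init__(self, func, args=()):
        super().__init__()
        self.func, self.args = func, args
        self.result = None

    def run(self):
        self.result = self.func(*self.args)

    def get_result(self):
        # Only valid after join() has returned
        return self.result
```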
Run detect-01.py; the detection results are as follows:
4. Real-Time Detection
(1) To run detection from a live camera, change the following line in detect-01.py:

```python
parser.add_argument('--source', type=str, default=ROOT / 'data/images/a1.mp4', help='file/dir/URL/glob, 0 for webcam')
```

to

```python
parser.add_argument('--source', type=str, default=ROOT / '0')
```
(2) Note that the code assumes input images or video at a resolution of 2560x720. A camera opened directly will not necessarily deliver 2560x720, so the capture resolution has to be set explicitly.
Open the utils/datasets.py file, find the class LoadStreams, and change
```python
for i, s in enumerate(sources):  # index, source
    # Start thread to read frames from video stream
    st = f'{i + 1}/{n}: {s}... '
    if 'youtube.com/' in s or 'youtu.be/' in s:  # if source is YouTube video
        check_requirements(('pafy', 'youtube_dl==2020.12.2'))
        import pafy
        s = pafy.new(s).getbest(preftype="mp4").url  # YouTube URL
    s = eval(s) if s.isnumeric() else s  # i.e. s = '0' local webcam
    cap = cv2.VideoCapture(s)
    assert cap.isOpened(), f'{st}Failed to open {s}'
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)  # warning: may return 0 or nan
    self.frames[i] = max(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 0) or float('inf')  # infinite stream fallback
    self.fps[i] = max((fps if math.isfinite(fps) else 0) % 100, 0) or 30  # 30 FPS fallback
    _, self.imgs[i] = cap.read()  # guarantee first frame
    self.threads[i] = Thread(target=self.update, args=([i, cap, s]), daemon=True)
    LOGGER.info(f"{st} Success ({self.frames[i]} frames {w}x{h} at {self.fps[i]:.2f} FPS)")
    self.threads[i].start()
LOGGER.info('')  # newline
```
to
```python
for i, s in enumerate(sources):
    # Start the thread to read frames from the video stream
    print(f'{i + 1}/{n}: {s}... ', end='')
    cap = cv2.VideoCapture(eval(s) if s.isnumeric() else s)
    ####################################################################################################
    imageWidth = 2560
    imageHeight = 720
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, imageWidth)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, imageHeight)
    assert cap.isOpened(), f'Failed to open {s}'
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = 24  # cap.get(cv2.CAP_PROP_FPS) % 100
    _, self.imgs[i] = cap.read()  # guarantee first frame
    thread = Thread(target=self.update, args=([i, cap]), daemon=True)
    print(f' success ({w}x{h} at {fps:.2f} FPS).')
    thread.start()
print('')  # newline
```
With that, the camera setup is complete.
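Before running the full pipeline, it can be worth checking that your camera actually accepts the 2560x720 request, since some UVC cameras silently fall back to another mode. A quick standalone check (my own snippet, not part of the project):

```python
import cv2

cap = cv2.VideoCapture(0)  # adjust the index to your stereo camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2560)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
ok, frame = cap.read()
print('opened:', cap.isOpened(), '| frame shape:', frame.shape if ok else None)
cap.release()
```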
5. Training
The dataset uses the YOLO format, laid out as follows:
```
dataset
└── coco
    ├── images
    │   ├── train2017
    │   │   ├── 1.jpg
    │   │   └── 2.jpg
    │   └── val2017
    │       ├── 11.jpg
    │       └── 22.jpg
    └── labels
        ├── train2017
        │   ├── 1.txt
        │   └── 2.txt
        └── val2017
            ├── 11.txt
            └── 22.txt
```
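Each .txt label file has one line per object: the class index followed by the normalized box center and size. For example (the values here are illustrative):

```
0 0.481 0.633 0.467 0.239   # class x_center y_center width height, all in [0, 1]
1 0.127 0.504 0.211 0.408
```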
Open the data/coco.yaml file and change its contents as follows (two classes are trained here):
```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov5
# └── datasets
#     └── coco128  ← downloads here

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ./dataset/coco  # dataset root dir
train: images/train2017  # train images (relative to 'path')
val: images/train2017  # val images (relative to 'path')
test:  # test images (optional)

# Classes
nc: 2  # number of classes
names: ['person', 'bicycle']  # class names
```
Also set the class count in the models/yolov5s.yaml file used for training to match the number above, then run train.py.
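For reference, the only line that needs to change in models/yolov5s.yaml is the class count (a minimal excerpt; everything else stays as shipped):

```yaml
# models/yolov5s.yaml (excerpt)
nc: 2  # number of classes, must match data/coco.yaml
```

A typical training invocation, assuming the standard YOLOv5 layout, would then be `python train.py --data data/coco.yaml --cfg models/yolov5s.yaml --weights yolov5s.pt --img 640`; adjust the arguments to your setup.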
6. Source Code Download
Download link: https://download.csdn.net/download/qq_45077760/89136394
That wraps up this article on YOLOv5 + stereo camera 3D distance measurement (new version). I hope it proves helpful to fellow developers!