Calling yolov8-seg through OpenCV DNN

2024-02-24 05:52
Tags: yolov8, dnn, seg, inference


Following up on the previous post: no matter which torch or OpenCV version I switched to, the seg DNN error would not go away. So could the problem be in the yolov8 source code itself? I had already gone through pretty much all of the yolov8 discussions, so I decided to search the discussion board of its predecessor, yolov5 — I refused to believe nobody else had hit this. I quickly found the first thread below:

Fix infer yolov5-seg.onnx with opencv-dnn error by UNeedCryDear · Pull Request #9645 · ultralytics/yolov5 · GitHub

I quickly tried to reproduce the issue with the code the PR author provided:

!git clone https://github.com/UNeedCryDear/yolov5 -b master # clone
%cd yolov5
%pip install -r requirements.txt  # install (changed -qr to -r; the -qr was probably a typo)
!python export.py --weights yolov5s-seg.pt --include onnx
!python segment/predict.py --weights yolov5s-seg.onnx --dnn
###################################  the same error 
!pip3 install torch==1.8.2 torchvision==0.9.2 torchaudio===0.8.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu111
! pip uninstall torchtext
!python export.py --weights yolov5s-seg.pt --include onnx
!python segment/predict.py --weights yolov5s-seg.onnx --dnn

He believed it was a torch version problem and that rolling back to 1.8 made it go away, but when I ran it the error was exactly the same.

With the default versions left unchanged, inference gives:

python segment/predict.py --weights yolov5s-seg.onnx --dnn
segment/predict: weights=['yolov5s-seg.onnx'], source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/predict-seg, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=True, vid_stride=1, retina_masks=False
YOLOv5 🚀 v6.1-877-gdf48c20 Python-3.8.18 torch-2.2.0+cu121 CUDA:0 (Tesla T4, 14927MiB)
Loading yolov5s-seg.onnx for ONNX OpenCV DNN inference...
Traceback (most recent call last):
  File "segment/predict.py", line 285, in <module>
    main(opt)
  File "segment/predict.py", line 280, in main
    run(**vars(opt))
  File "/home/inference/miniconda3/envs/yolov5/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "segment/predict.py", line 132, in run
    pred, proto = model(im, augment=augment, visualize=visualize)[:2]
ValueError: not enough values to unpack (expected 2, got 1)

After downgrading to 1.8:

pip3 install torch==1.8.2 torchvision==0.9.2 torchaudio===0.8.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu111

Running inference again, the error is exactly the same:

python segment/predict.py --weights yolov5s-seg.onnx --dnn
segment/predict: weights=['yolov5s-seg.onnx'], source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/predict-seg, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=True, vid_stride=1, retina_masks=False
YOLOv5 🚀 v6.1-877-gdf48c20 Python-3.8.18 torch-1.8.2+cu111 CUDA:0 (Tesla T4, 14927MiB)
Loading yolov5s-seg.onnx for ONNX OpenCV DNN inference...
Traceback (most recent call last):
  File "segment/predict.py", line 285, in <module>
    main(opt)
  File "segment/predict.py", line 280, in main
    run(**vars(opt))
  File "/home/inference/miniconda3/envs/yolov5/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "segment/predict.py", line 132, in run
    pred, proto = model(im, augment=augment, visualize=visualize)[:2]
ValueError: not enough values to unpack (expected 2, got 1)

Honestly, I could have cried. The thread was already quite old — was I really going to have to pin every related library back to those versions? Not ready to give up, I kept searching and finally found a second thread: Onnx inference not working for image instance segmentation, maybe a bug in ONNX model? · Issue #10578 · ultralytics/yolov5 · GitHub (https://github.com/ultralytics/yolov5/issues/10578). In that issue's comments, UNeedCryDear — the same person behind the PR from the first thread — pointed out the fix, shown as a screenshot in his comment:

The screenshot changes how the DNN forward outputs are handled in the source: without it, net.forward() returns only a single output blob, so pred, proto = model(im, ...)[:2] has only one value to unpack — exactly the error above. Checking the current yolov5 source, this change had never been merged, so I applied it by hand; I've typed it out below for easy copying:

        elif self.dnn:  # ONNX OpenCV DNN
            im = im.cpu().numpy()  # torch to numpy
            self.net.setInput(im)
            output_layers = self.net.getUnconnectedOutLayersNames()
            if len(output_layers) == 2:
                output0, output1 = self.net.forward(output_layers)
                if len(output0.shape) < len(output1.shape):
                    y = output0, output1
                else:
                    y = output1, output0
            else:
                y = self.net.forward()
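To see the two-output behavior outside of the yolov5 codebase, here is a minimal standalone sketch of my own (assuming a yolov5s-seg.onnx in the working directory and opencv-python >= 4.5.4) that loads the model with cv2.dnn and orders the two heads the same way:

# Minimal standalone sketch: load a seg ONNX model with OpenCV DNN and fetch
# both heads (detections + mask prototypes). Assumes yolov5s-seg.onnx exists.
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("yolov5s-seg.onnx")

img = np.zeros((640, 640, 3), dtype=np.uint8)  # dummy BGR image
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True)
net.setInput(blob)

out_names = net.getUnconnectedOutLayersNames()
outputs = net.forward(out_names)  # one array per output layer

if len(out_names) == 2:
    out0, out1 = outputs
    # detections are 3-D (1, preds, channels), prototypes are 4-D (1, nm, h, w)
    pred, proto = (out0, out1) if out0.ndim < out1.ndim else (out1, out0)
    print("pred:", pred.shape, "proto:", proto.shape)
else:
    print("single output:", outputs[0].shape)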

Running inference again, it finally works:

python segment/predict.py --weights yolov5s-seg.onnx --dnn
segment/predict: weights=['yolov5s-seg.onnx'], source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/predict-seg, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=True, vid_stride=1, retina_masks=False
YOLOv5 🚀 v6.1-877-gdf48c20 Python-3.8.18 torch-1.8.2+cu111 CUDA:0 (Tesla T4, 14927MiB)
Loading yolov5s-seg.onnx for ONNX OpenCV DNN inference...
image 1/2 /home/inference/yolov5/data/images/bus.jpg: 640x640 4 persons, 1 bus, 734.5ms
image 2/2 /home/inference/yolov5/data/images/zidane.jpg: 640x640 2 persons, 1 tie, 722.3ms
Speed: 0.6ms pre-process, 728.4ms inference, 111.8ms NMS per image at shape (1, 3, 640, 640)

Speechless — it turns out the yolov5 maintainers never merged UNeedCryDear's pull request from that first thread. And sure enough, the corresponding DNN inference code in yolov8 has the same problem. So I made the same change as in yolov5 at the matching spot in https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/autobackend.py (writing it out again for beginners like me; around line 387):

        elif self.dnn:  # ONNX OpenCV DNN
            im = im.cpu().numpy()  # torch to numpy
            self.net.setInput(im)
            output_layers = self.net.getUnconnectedOutLayersNames()
            if len(output_layers) == 2:
                output0, output1 = self.net.forward(output_layers)
                if len(output0.shape) < len(output1.shape):
                    y = output0, output1
                else:
                    y = output1, output0
            else:
                y = self.net.forward()

Running yolov8-seg DNN inference again, it still errors out, now like this:

yolo predict task=segment model=yolov8n-seg.onnx imgsz=640 dnn
WARNING ⚠️ 'source' is missing. Using default 'source=/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/assets'.
Ultralytics YOLOv8.1.17 🚀 Python-3.9.18 torch-1.11.0+cu102 CUDA:0 (Tesla T4, 14927MiB)
Loading yolov8n-seg.onnx for ONNX OpenCV DNN inference...
WARNING ⚠️ Metadata not found for 'model=yolov8n-seg.onnx'
Traceback (most recent call last):
  File "/home/inference/miniconda3/envs/yolov8v2/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/cfg/__init__.py", line 568, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/model.py", line 429, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 213, in predict_cli
    for _ in gen:  # noqa, running CLI inference without accumulating any outputs (do not modify)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
    response = gen.send(None)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/engine/predictor.py", line 290, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/models/yolo/segment/predict.py", line 30, in postprocess
    p = ops.non_max_suppression(
  File "/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/utils/ops.py", line 230, in non_max_suppression
    output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
RuntimeError: Trying to create tensor with negative dimension -881: [0, -881]

However, compared with my earlier post on fixing the cv2.dnn.readNetFromONNX error when loading the yolov8 ONNX model (where the key was exporting with opset=11): https://blog.csdn.net/qq_36401512/article/details/136189767 — the error message is no longer the same: dimension -837: [0, -837] has become dimension -881: [0, -881], so something else still needs adjusting.
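For completeness, the model was presumably exported along these lines so that cv2.dnn could read it at all — a minimal sketch; the opset value comes from that earlier post, the other arguments are my assumptions:

# Re-export the seg model to ONNX with opset 11 so cv2.dnn.readNetFromONNX
# can parse it (the opset=11 trick is from the earlier post linked above).
from ultralytics import YOLO

YOLO("yolov8n-seg.pt").export(format="onnx", opset=11, imgsz=640)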

To debug and compare the two code paths (DNN vs. onnxruntime; the .pt is exported to ONNX first), I used the following script:

# -*-coding:utf-8-*-
from ultralytics import YOLO
model = YOLO("/home/inference/Amplitudemode_AI/all_model_and_pred/AI_Ribfrac_ths/yolov8n-seg.onnx")  # load the model
results = model.predict(source='/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/assets', imgsz=640, dnn=True, save=True, boxes=False)  # save plotted images

By toggling dnn=True / dnn=False I finally confirmed the problem is at line 215 of https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/ops.py:

nc = nc or (prediction.shape[1] - 4)  # number of classes

Looking closer, it is the missing metadata dictionary that leads to a wrong class count — which is exactly what the warning below is about:

WARNING ⚠️ Metadata not found for 'model=/home/inference/Amplitudemode_AI/all_model_and_pred/AI_Ribfrac_ths/yolov8n-seg.onnx'
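To spell out why the dimension goes negative (this is my own back-of-the-envelope reading of ops.non_max_suppression, so treat the placeholder class count as an assumption): the COCO seg head has 4 box values + 80 classes + 32 mask coefficients = 116 channels, and the mask count is derived roughly as nm = prediction.shape[1] - nc - 4. When the metadata is missing, the class names fall back to a placeholder dict, so nc is far too large and nm goes negative:

# Back-of-the-envelope reproduction of the failing arithmetic (values are my
# reading of the missing-metadata case, not taken from the original post).
channels = 4 + 80 + 32        # box + classes + mask coefficients = 116
nc_wrong = 999                # placeholder names dict -> wrong class count
nm = channels - nc_wrong - 4  # -887 instead of the expected 32
print(6 + nm)                 # -881, the negative dimension in the traceback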

Mirroring the structure that the onnxruntime path gets from the model's embedded metadata, I hand-wrote a metadata.yaml with the following content:

names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  4: airplane
  5: bus
  6: train
  7: truck
  8: boat
  9: traffic light
  10: fire hydrant
  11: stop sign
  12: parking meter
  13: bench
  14: bird
  15: cat
  16: dog
  17: horse
  18: sheep
  19: cow
  20: elephant
  21: bear
  22: zebra
  23: giraffe
  24: backpack
  25: umbrella
  26: handbag
  27: tie
  28: suitcase
  29: frisbee
  30: skis
  31: snowboard
  32: sports ball
  33: kite
  34: baseball bat
  35: baseball glove
  36: skateboard
  37: surfboard
  38: tennis racket
  39: bottle
  40: wine glass
  41: cup
  42: fork
  43: knife
  44: spoon
  45: bowl
  46: banana
  47: apple
  48: sandwich
  49: orange
  50: broccoli
  51: carrot
  52: hot dog
  53: pizza
  54: donut
  55: cake
  56: chair
  57: couch
  58: potted plant
  59: bed
  60: dining table
  61: toilet
  62: tv
  63: laptop
  64: mouse
  65: remote
  66: keyboard
  67: cell phone
  68: microwave
  69: oven
  70: toaster
  71: sink
  72: refrigerator
  73: book
  74: clock
  75: vase
  76: scissors
  77: teddy bear
  78: hair drier
  79: toothbrush
task: segment
stride: 32
imgsz: [640,640]
batch: 1
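Typing out all 80 class names by hand is tedious and error-prone; as a convenience (a helper of my own, not part of ultralytics), something like this can generate the same metadata.yaml from the original .pt weights, assuming model.names returns the class-name dict:

# Generate the metadata.yaml above from the original PyTorch weights instead
# of hand-typing 80 class names (my own helper, not part of ultralytics).
import yaml
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
meta = {
    "names": model.names,      # {0: 'person', 1: 'bicycle', ...}
    "task": "segment",
    "stride": 32,
    "imgsz": [640, 640],
    "batch": 1,
}
with open("metadata.yaml", "w") as f:
    yaml.safe_dump(meta, f, sort_keys=False)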

Put it in the same directory as the ONNX model, then modify line 168 of https://github.com/ultralytics/ultralytics/blob/main/ultralytics/nn/autobackend.py:

        elif dnn:  # ONNX OpenCV DNN
            LOGGER.info(f"Loading {w} for ONNX OpenCV DNN inference...")
            check_requirements("opencv-python>=4.5.4")
            net = cv2.dnn.readNetFromONNX(w)
            metadata = Path(w).parent / "metadata.yaml"
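Before rerunning the full CLI, a quick sanity check of mine (a hypothetical snippet; bus.jpg is assumed to be available locally) confirms the class names now come from metadata.yaml instead of placeholders:

# Hypothetical sanity check: with metadata.yaml next to the .onnx file, the
# DNN path should now report real class names instead of placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.onnx")
results = model.predict(source="bus.jpg", imgsz=640, dnn=True)
print(results[0].names[0])  # expected: 'person'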

Running the segmentation inference again, the result:

yolo predict task=segment model=yolov8n-seg.onnx imgsz=640 dnn
WARNING ⚠️ 'source' is missing. Using default 'source=/home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/assets'.
Ultralytics YOLOv8.1.17 🚀 Python-3.9.18 torch-1.11.0+cu102 CUDA:0 (Tesla T4, 14927MiB)
Loading yolov8n-seg.onnx for ONNX OpenCV DNN inference...
image 1/2 /home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/assets/bus.jpg: 640x640 4 persons, 1 bus, 1 skateboard, 304.4ms
image 2/2 /home/inference/miniconda3/envs/yolov8v2/lib/python3.9/site-packages/ultralytics/assets/zidane.jpg: 640x640 2 persons, 2 ties, 309.0ms
Speed: 2.3ms preprocess, 306.7ms inference, 2.4ms postprocess per image at shape (1, 3, 640, 640)
Results saved to runs/segment/predict21
💡 Learn more at https://docs.ultralytics.com/modes/predict

Finally done. It cost quite a lot of time, but I now roughly understand the overall logic of the yolov8 inference code, plus some of its details, which was well worth it.
