This post walks through reproducing the code for Monocular arbitrary moving object discovery and segmentation (Raptor); hopefully it serves as a useful reference for anyone setting it up.
Environment
https://github.com/michalneoral/Raptor
1. Create environment.yaml
name: raptor
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.8
  - pytorch=1.9.0
  - torchvision=0.10.0
  - cudatoolkit=11.1
  - pip
conda env create -f environment.yaml
conda activate raptor
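To confirm that the environment resolved to the intended PyTorch build, a quick sanity check in Python (expected version numbers follow environment.yaml above):
import torch
import torchvision

print(torch.__version__)          # expect 1.9.0 per environment.yaml
print(torchvision.__version__)    # expect 0.10.0
print(torch.cuda.is_available())  # should be True with cudatoolkit 11.1 and a working driver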
2. Install mmcv
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
pip install mmdet==2.20.0
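mmcv-full compiles CUDA ops at install time, so it is worth checking that it built against the right toolkit (these helpers ship with mmcv.ops):
import mmcv
import mmdet
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

print(mmcv.__version__)              # expect 1.4.0
print(mmdet.__version__)             # expect 2.20.0
print(get_compiling_cuda_version())  # should report 11.1
print(get_compiler_version())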
3. Install Raptor
Before installing, it is worth taking a look at setup.py and pinning the library versions as needed:
from setuptools import setup, find_packages

print(find_packages())

setup(
    name='raptor',
    version='1.0',
    description='Raptor code by Michal Neoral',
    packages=find_packages(),
    install_requires=[
        'tqdm',
        'opencv-python==4.5.5.62',
        'imageio==2.13.5',
        'wandb',
        'pypng==0.0.20',
        'gdown==4.5.1',
        'kornia==0.5.3',
        'timm==0.8.2.dev0',
        # 'numpy>=1.21',
        # 'nptyping',
    ],
)
git clone https://github.com/michalneoral/Raptor
cd Raptor
pip install -v -e .
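The editable install can be verified without knowing the package layout, via the distribution metadata (name and version per the setup.py above):
from importlib.metadata import version  # stdlib since Python 3.8

print(version('raptor'))  # expect 1.0 per setup.py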
4. Install the modified RAFT
git clone https://github.com/michalneoral/raft_fork.git
cd raft_fork
pip install -v -e .
5. Download the model weights
cd raptor/weights
bash download_weights.sh
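A quick way to confirm the checkpoint downloaded intact is to load it on the CPU; the exact key layout is an assumption (mmdetection-style checkpoints are typically dicts with 'state_dict' and 'meta' entries):
import torch

ckpt = torch.load('weights/raptor_bmvc_2021.pth', map_location='cpu')  # adjust the path if needed
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. ['meta', 'state_dict', ...] (assumed)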
Datasets
1. KITTI'15 Dataset Instance Motion Segmentation Extension
cd YOUR_kitti15_basic_training_OR_multiview_training_directory
gdown https://drive.google.com/uc?id=1tOvHlFfL0cNIVJNjg2iC6JXyXmELoH4R -O ./KITTI15_MSplus.zip
unzip KITTI15_MSplus.zip
Directory structure
├── readme.txt
│ └── file with link to this repo
├── motion_segmentation_coco_format/
│ └── directory containing all evaluation COCO-format files for both KITTI'15 and KITTI'15 IMS Extension
├── obj_map_fg/
│ └── binary *.png maps for foreground/background segmentation - all moving instances as foreground
├── obj_map_moseg/
│ └── instance motion segmentation *.png maps
└── obj_map_valid/
    └── binary *.png masks of motion segmentation - motion edges and areas excluded by annotator
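The per-frame *.png maps can be inspected directly with imageio (already pinned in setup.py). A minimal sketch — the frame name below is hypothetical, and the id-per-pixel encoding of obj_map_moseg is an assumption:
import numpy as np
import imageio

moseg = imageio.imread('obj_map_moseg/000000_10.png')  # hypothetical file name; pick any frame
fg = imageio.imread('obj_map_fg/000000_10.png')

print(np.unique(moseg))  # assumed: 0 = background, other values = per-instance ids
print(np.unique(fg))     # binary foreground/background map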
2. DAVIS_Moving - COCO format
cd YOUR_davis_directory
gdown https://drive.google.com/uc?id=1gwet3yr7PVPVGcPUxKKounUuCRA39GA3 -O ./DAVISMoving_coco.zip
unzip DAVISMoving_coco.zip
Directory structure
├── readme.txt
│ └── file with link to this repo
└── motion_segmentation_coco_format/
    └── directory containing all evaluation COCO-format files
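Since both datasets ship plain COCO-format JSON, the evaluation files can be sanity-checked with the standard library alone (the glob is used because the individual file names are not listed here):
import glob
import json

for path in glob.glob('motion_segmentation_coco_format/*.json'):
    with open(path) as f:
        coco = json.load(f)
    # 'images' and 'annotations' are standard COCO keys; counts give a quick sanity check
    print(path, len(coco.get('images', [])), 'images,', len(coco.get('annotations', [])), 'annotations')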
Testing
1. Check the available run-time arguments
python demo/inference_demo_raptor_sequence.py --help
If the help text prints without errors, the environment is set up correctly.
2. Write a run.sh script to launch inference (substitute your own dataset and output paths):
python demo/inference_demo_raptor_sequence.py \
    --gpuid 2 \
    --config_file configs/raptor/raptor.py \
    --checkpoint_file weights/raptor_bmvc_2021.pth \
    --save_custom_outputs \
    --save_outputs \
    --input_dict /data/qinl/Dataset/DAVIS/DAVIS/JPEGImages/480p/bear \
    --output_dict result
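Save the above as run.sh and launch it with bash run.sh; the segmentation results end up in the directory given by --output_dict.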
Test results
To process all sequence subdirectories under a dataset directory, add --search_subdirs and use the following form (replace the #-prefixed placeholders with your own values):
python demo/inference_demo_raptor_sequence.py \
    --gpuid #YOUR_GPU_NUMBER \
    --config_file configs/moseg/raptor.py \
    --checkpoint_file weights/raptor_bmvc_2021.pth \
    --save_custom_outputs \
    --save_outputs \
    --search_subdirs \
    --input_dict #PATH_TO_INPUT_DIRECTORY_WITH_SEQUENCE_OF_IMAGES_DIRECTORIES \
    --output_dict #PATH_TO_OUTPUT_DIRECTORY
That concludes this walkthrough of reproducing Monocular arbitrary moving object discovery and segmentation; hopefully it is of some help.