Training YOLOX on the KITTI Dataset

2024-02-06 06:52
Tags: data, training, kitti, yolox

This article walks through training YOLOX on the KITTI dataset: converting the annotations to COCO format, verifying the conversion, and launching training.

1. After downloading the KITTI dataset, first convert it to COCO format.
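For reference, each line of a KITTI `label_2` file has 15 whitespace-separated fields; the class name is field 0 and the 2D box corners (x1, y1, x2, y2) are fields 4–7, which is all the scripts below rely on. A minimal parsing sketch (the helper name `parse_kitti_line` and the sample line are mine, for illustration):

```python
# Sketch: parse one line of a KITTI label_2 .txt file.
# Fields: type, truncated, occluded, alpha, bbox(x1 y1 x2 y2),
#         dimensions(h w l), location(x y z), rotation_y
def parse_kitti_line(line):
    f = line.split()
    return {
        'type': f[0],                          # class name, e.g. 'Car'
        'bbox': [float(v) for v in f[4:8]],    # 2D box corners in pixels
    }

line = "Car 0.00 0 -1.57 599.41 156.40 629.75 189.25 1.57 1.65 3.35 2.57 1.57 9.51 -1.56"
obj = parse_kitti_line(line)
print(obj['type'], obj['bbox'])
```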

kitti_vis.py

import os
from pathlib import Path

import numpy as np
import cv2


def anno_vis(img, anno_list):
    # Draw each 2D box (fields 4-7 of a KITTI label line) with its class name.
    for anno in anno_list:
        points = np.array(anno[4:8], dtype=np.float32)
        cv2.rectangle(img, (int(points[0]), int(points[1])),
                      (int(points[2]), int(points[3])), (0, 0, 255), 2)
        cv2.putText(img, anno[0], (int(points[0]), int(points[1])),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.imshow('img', img)
    ret = cv2.waitKey(0)
    if ret == 27:  # Esc quits
        exit(0)


if __name__ == '__main__':
    img_root = Path(r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI-train_test-Image\training\image_2')
    label_root = Path(r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI-train_test-Label\training\label_2')
    img_list = os.listdir(img_root)
    for img_name in img_list[:5]:
        img_name = Path(img_name)
        label_name = img_name.with_suffix('.txt')
        img = cv2.imread(str(img_root / img_name))
        with open(label_root / label_name) as f:
            anno_list = [x.split() for x in f.read().strip().splitlines()]
        anno_vis(img, anno_list)

kitti_split.py

'''
Split KITTI's ~7000 training images: the first 4000 become the training set,
images 4000-5999 the validation set, and the rest the test set.
Run:
python ./tools/kitti_split.py --source_img_path ./KITTI_origin/training/image_2 --source_label_path ./KITTI_origin/training/label_2/ --dst_img_path ./KITTI_YOLOX/img --dst_label_path ./KITTI_YOLOX/label
'''
import os
import argparse
from pathlib import Path
import shutil

from tqdm import tqdm
from loguru import logger


def make_parser():
    parser = argparse.ArgumentParser("")
    parser.add_argument('--source_img_path',
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI-train_test-Image\training\image_2',
                        help="Specify original kitti img path")
    parser.add_argument('--source_label_path',
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI-train_test-Label\training\label_2',
                        help="Specify original kitti label path")
    parser.add_argument('--dst_img_path',
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI_YOLOX\img',
                        help="Specify split kitti img path")
    parser.add_argument('--dst_label_path',
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI_YOLOX\label',
                        help="Specify split kitti label path")
    return parser


def check_dir(dir):
    if not Path(dir).is_dir():
        Path(dir).mkdir(parents=True, exist_ok=True)
        logger.info('Created %s' % dir)


if __name__ == '__main__':
    args = make_parser().parse_args()
    img_root = Path(args.source_img_path)
    label_root = Path(args.source_label_path)
    img_list = os.listdir(img_root)

    dst_train_img_root = Path(args.dst_img_path) / 'train'
    dst_val_img_root = Path(args.dst_img_path) / 'val'
    dst_test_img_root = Path(args.dst_img_path) / 'test'
    dst_train_label_root = Path(args.dst_label_path) / 'train'
    dst_val_label_root = Path(args.dst_label_path) / 'val'
    dst_test_label_root = Path(args.dst_label_path) / 'test'
    for d in (dst_train_img_root, dst_val_img_root, dst_test_img_root,
              dst_train_label_root, dst_val_label_root, dst_test_label_root):
        check_dir(d)

    for img_name in tqdm(img_list):
        label_name = Path(img_name).with_suffix('.txt')
        # Split by the numeric file stem: 0-3999 train, 4000-5999 val, rest test.
        if int(Path(img_name).stem) < 4000:
            shutil.copyfile(img_root / img_name, dst_train_img_root / img_name)
            shutil.copyfile(label_root / label_name, dst_train_label_root / label_name)
        elif int(Path(img_name).stem) < 6000:
            shutil.copyfile(img_root / img_name, dst_val_img_root / img_name)
            shutil.copyfile(label_root / label_name, dst_val_label_root / label_name)
        else:
            shutil.copyfile(img_root / img_name, dst_test_img_root / img_name)
            shutil.copyfile(label_root / label_name, dst_test_label_root / label_name)
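The split rule above is driven purely by the numeric file stem. Restated compactly (the function name `split_of` is mine, for illustration):

```python
from pathlib import Path

# Sketch: which split an image belongs to, using the same rule as kitti_split.py
# (stems 0-3999 -> train, 4000-5999 -> val, the rest -> test).
def split_of(img_name):
    idx = int(Path(img_name).stem)
    if idx < 4000:
        return 'train'
    elif idx < 6000:
        return 'val'
    return 'test'

print(split_of('000123.png'), split_of('004500.png'), split_of('007000.png'))
# -> train val test
```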

kitti2coco.py

'''
Convert KITTI annotations to COCO format.
Run:
(1) train set: python tools/kitti2coco.py --img_path ./KITTI_YOLOX/img/train --label_path ./KITTI_YOLOX/label/train --dst_json ./train.json
(2) val set:   python tools/kitti2coco.py --img_path ./KITTI_YOLOX/img/val --label_path ./KITTI_YOLOX/label/val --dst_json ./val.json
(3) test set:  python tools/kitti2coco.py --img_path ./KITTI_YOLOX/img/test --label_path ./KITTI_YOLOX/label/test --dst_json ./test.json
'''
import os
import json
import argparse
from pathlib import Path

import cv2
from tqdm import tqdm


def make_parser():
    parser = argparse.ArgumentParser("Kitti to COCO format")
    parser.add_argument('--img_path', type=str,
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI_YOLOX\img\val',
                        help='Specify img path')
    parser.add_argument('--label_path', type=str,
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI_YOLOX\label\val',
                        help='Specify label path')
    parser.add_argument('--dst_json', type=str,
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\val.json',
                        help='Specify generated json file name')
    return parser


if __name__ == '__main__':
    args = make_parser().parse_args()
    img_root = Path(args.img_path)
    label_root = Path(args.label_path)
    category_dict = {1: 'Car', 2: 'Van', 3: 'Pedestrian', 4: 'Person_sitting',
                     5: 'Truck', 6: 'Cyclist', 7: 'Tram'}
    category_name2id_dict = {v: k for k, v in category_dict.items()}
    img_list = os.listdir(img_root)

    img_id = 0
    anno_id = 0
    json_images_list = list()
    json_annotations_list = list()
    json_categories_list = list()
    for img_name in tqdm(img_list):
        img = cv2.imread(str(img_root / img_name))
        img_height, img_width, _ = img.shape
        img_dict = {'license': None, 'file_name': img_name, 'coco_url': None,
                    'height': img_height, 'width': img_width,
                    'date_captured': None, 'flickr_url': None, 'id': img_id}
        json_images_list.append(img_dict)
        label_name = Path(img_name).with_suffix('.txt')
        with open(label_root / label_name) as f:
            anno_list = [x.split() for x in f.read().strip().splitlines()]
        for anno in anno_list:
            if anno[0] in category_name2id_dict:
                # KITTI stores corners [x1, y1, x2, y2]; COCO wants [x, y, w, h].
                bbox = [float(anno[4]), float(anno[5]),
                        float(anno[6]) - float(anno[4]),
                        float(anno[7]) - float(anno[5])]
                area = bbox[2] * bbox[3]
                anno_dict = {'segmentation': None, 'area': area, 'iscrowd': 0,
                             'image_id': img_id, 'bbox': bbox,
                             'category_id': category_name2id_dict[anno[0]],
                             'id': anno_id}
                json_annotations_list.append(anno_dict)
                anno_id += 1
        img_id += 1
    for id in category_dict:
        json_categories_list.append({'supercategory': None, 'id': id,
                                     'name': category_dict[id]})
    json_dict = {'images': json_images_list,
                 'annotations': json_annotations_list,
                 'categories': json_categories_list}
    with open(args.dst_json, "w") as f:
        json.dump(json_dict, f)
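The key transformation in kitti2coco.py is the box format: KITTI stores corner coordinates [x1, y1, x2, y2] while COCO expects [x, y, width, height]. Isolated for clarity (the function name `kitti_bbox_to_coco` is mine):

```python
# Sketch: convert a KITTI corner box to a COCO [x, y, w, h] box,
# the same arithmetic kitti2coco.py applies to fields 4-7.
def kitti_bbox_to_coco(x1, y1, x2, y2):
    return [x1, y1, x2 - x1, y2 - y1]

bbox = kitti_bbox_to_coco(599.41, 156.40, 629.75, 189.25)
area = bbox[2] * bbox[3]
print(bbox, area)
```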

COCO_vis.py

'''
Verify the correctness of the converted COCO-format JSON annotations.
Run: python tools/COCO_vis.py --img_root ./KITTI_YOLOX/img/train --label_file ./KITTI_YOLOX/train.json
'''
import argparse
from pathlib import Path

import numpy as np
import cv2
from pycocotools.coco import COCO


def make_parser():
    parser = argparse.ArgumentParser("")
    parser.add_argument('--img_root', type=str,
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\KITTI_YOLOX\img\train',
                        help='Specify img path')
    parser.add_argument('--label_file', type=str,
                        default=r'D:\BaiduNetdiskDownload\CV\KITTI\train.json',
                        help='Specify COCO format label file')
    return parser


if __name__ == '__main__':
    args = make_parser().parse_args()
    img_root = args.img_root
    anno_file = args.label_file
    coco = COCO(anno_file)
    img_ids = coco.getImgIds()
    category_list = coco.loadCats(coco.getCatIds())
    label_id2name = dict([(item['id'], item['name']) for item in category_list])
    for img_id in img_ids:
        img_info = coco.loadImgs(img_id)[0]
        print('img name: ', str(Path(img_root) / img_info['file_name']))
        img = cv2.imread(str(Path(img_root) / img_info['file_name']))
        img_width = img_info["width"]
        img_height = img_info["height"]
        anno_ids = coco.getAnnIds(imgIds=[img_id], iscrowd=False)
        result_anno_list = list()
        for anno_id in anno_ids:
            annotation = coco.loadAnns(anno_id)
            # Clamp the COCO [x, y, w, h] box to the image and convert to corners.
            x1 = np.max((0, annotation[0]["bbox"][0]))
            y1 = np.max((0, annotation[0]["bbox"][1]))
            x2 = np.min((img_width, x1 + np.max((0, annotation[0]["bbox"][2]))))
            y2 = np.min((img_height, y1 + np.max((0, annotation[0]["bbox"][3]))))
            label = label_id2name[annotation[0]['category_id']]
            result_anno_list.append([label, x1, y1, x2, y2])
            cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 1)
            cv2.putText(img, label, (int(x1), int(y1)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (128, 255, 255))
        cv2.imshow('img', img)
        ret = cv2.waitKey(0)
        if ret == 27:  # Esc quits
            exit(0)
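Before feeding the generated JSON to YOLOX, a quick structural sanity check is also worthwhile: every annotation must reference an existing image id and category id. A hedged sketch (the helper name `check_coco_dict` and the sample data are mine):

```python
# Sketch: minimal structural check of a COCO-format dict like the one
# kitti2coco.py writes out.
def check_coco_dict(d):
    img_ids = {img['id'] for img in d['images']}
    cat_ids = {cat['id'] for cat in d['categories']}
    for anno in d['annotations']:
        assert anno['image_id'] in img_ids, anno    # dangling image reference
        assert anno['category_id'] in cat_ids, anno # unknown category
    return True

sample = {
    'images': [{'id': 0, 'file_name': '000000.png', 'height': 370, 'width': 1224}],
    'annotations': [{'id': 0, 'image_id': 0, 'category_id': 1,
                     'bbox': [599.41, 156.4, 30.34, 32.85], 'area': 996.67}],
    'categories': [{'id': 1, 'name': 'Car'}],
}
print(check_coco_dict(sample))
```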

2. Then train on KITTI with the same commands used for the COCO dataset.

python -m yolox.tools.train -n yolox-s -d 1 -b 32 --fp16
or: python -m yolox.tools.train -f exps/default/yolox_s.py -d 1 -b 32 --fp16
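The -f variant points at a custom experiment file. A sketch of what a file like exps/kitti_car_detection/yolox_s.py could contain, reconstructed from the settings visible in the training log below; the subclassing pattern follows YOLOX's standard Exp API, but treat this as an illustrative outline rather than the actual file used:

```python
# Sketch of a custom YOLOX experiment file for KITTI, assuming values
# taken from the training log (num_classes, input_size, dataset paths).
import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super().__init__()
        self.depth = 0.33             # yolox-s depth multiplier
        self.width = 0.50             # yolox-s width multiplier
        self.num_classes = 7          # Car, Van, Pedestrian, Person_sitting, Truck, Cyclist, Tram
        self.input_size = (256, 832)  # KITTI images are wide and short
        self.test_size = (256, 832)
        self.data_dir = '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/img/'
        self.train_ann = '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/train.json'
        self.val_ann = '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/val.json'
        self.test_ann = '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/test.json'
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
```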
(yolox) xuefei@f123:/mnt/d/work/study/detect/7$ python -m yolox.tools.train  -f exps/kitti_car_detection/yolox_s.py  -d 1 -b 16 --fp16
2024-02-05 23:08:04 | INFO     | yolox.core.trainer:130 - args: Namespace(batch_size=16, cache=False, ckpt=None, devices=1, dist_backend='nccl', dist_url=None, exp_file='exps/kitti_car_detection/yolox_s.py', experiment_name='yolox_s', fp16=True, logger='tensorboard', machine_rank=0, name=None, num_machines=1, occupy=False, opts=[], resume=False, start_epoch=None)
2024-02-05 23:08:04 | INFO     | yolox.core.trainer:131 - exp value:
╒═══════════════════╤═══════════════════════════════════════════════════════════════╕
│ keys              │ values                                                        │
╞═══════════════════╪═══════════════════════════════════════════════════════════════╡
│ seed              │ None                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ output_dir        │ './YOLOX_outputs'                                             │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ print_interval    │ 10                                                            │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ eval_interval     │ 10                                                            │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ num_classes       │ 7                                                             │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ depth             │ 0.33                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ width             │ 0.5                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ act               │ 'silu'                                                        │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ data_num_workers  │ 16                                                            │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ input_size        │ (256, 832)                                                    │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ multiscale_range  │ 5                                                             │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ data_dir          │ '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/img/'       │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ train_ann         │ '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/train.json' │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ val_ann           │ '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/val.json'   │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ test_ann          │ '/mnt/d/BaiduNetdiskDownload/CV/KITTI/KITTI_YOLOX/test.json'  │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ mosaic_prob       │ 1.0                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ mixup_prob        │ 1.0                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ hsv_prob          │ 1.0                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ flip_prob         │ 0.5                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ degrees           │ 10.0                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ translate         │ 0.1                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ mosaic_scale      │ (0.1, 2)                                                      │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ enable_mixup      │ True                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ mixup_scale       │ (0.5, 1.5)                                                    │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ shear             │ 2.0                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ warmup_epochs     │ 5                                                             │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ max_epoch         │ 300                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ warmup_lr         │ 0                                                             │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ min_lr_ratio      │ 0.05                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ basic_lr_per_img  │ 0.00015625                                                    │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ scheduler         │ 'yoloxwarmcos'                                                │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ no_aug_epochs     │ 80                                                            │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ ema               │ True                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ weight_decay      │ 0.0005                                                        │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ momentum          │ 0.9                                                           │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ save_history_ckpt │ True                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ exp_name          │ 'yolox_s'                                                     │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ test_size         │ (256, 832)                                                    │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ test_conf         │ 0.01                                                          │
├───────────────────┼───────────────────────────────────────────────────────────────┤
│ nmsthre           │ 0.65                                                          │
╘═══════════════════╧═══════════════════════════════════════════════════════════════╛
2024-02-05 23:08:05 | INFO     | yolox.core.trainer:137 - Model Summary: Params: 8.94M, Gflops: 13.92
2024-02-05 23:08:07 | INFO     | yolox.data.datasets.kitti:64 - loading annotations into memory...
2024-02-05 23:08:07 | INFO     | yolox.data.datasets.kitti:64 - Done (t=0.05s)
2024-02-05 23:08:07 | INFO     | pycocotools.coco:86 - creating index...
2024-02-05 23:08:07 | INFO     | pycocotools.coco:86 - index created!
2024-02-05 23:08:08 | INFO     | yolox.core.trainer:155 - init prefetcher, this might take one minute or less...
2024-02-05 23:08:17 | INFO     | yolox.data.datasets.kitti:64 - loading annotations into memory...
2024-02-05 23:08:17 | INFO     | yolox.data.datasets.kitti:64 - Done (t=0.05s)
2024-02-05 23:08:17 | INFO     | pycocotools.coco:86 - creating index...
2024-02-05 23:08:17 | INFO     | pycocotools.coco:86 - index created!
2024-02-05 23:08:17 | INFO     | yolox.core.trainer:191 - Training start...
2024-02-05 23:08:17 | INFO     | yolox.core.trainer:192 -
[Model printout truncated: YOLOX = YOLOPAFPN backbone (CSPDarknet with Focus stem, dark2-dark5 CSP stages, and an SPP bottleneck in dark5, plus the PAFPN upsample/downsample paths) followed by a decoupled YOLOXHead with per-level cls_preds (7 classes), reg_preds, and obj_preds, using L1Loss, BCEWithLogitsLoss, and IOUloss.]
)
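Note the 7 output channels of each `cls_preds` Conv2d in the printed head. This matches a KITTI setup with 7 object categories; the exact class list is not shown in the model print, but it is most likely KITTI's annotated types with "Misc" and "DontCare" dropped during the COCO-format conversion (an assumption, marked as such below). A minimal sketch of how the head's per-location channel count follows from that:

```python
# Assumed class list for this run: KITTI's label types minus "Misc" and
# "DontCare", which are commonly excluded when converting to COCO format.
KITTI_CLASSES = ("Car", "Van", "Truck", "Pedestrian",
                 "Person_sitting", "Cyclist", "Tram")

num_classes = len(KITTI_CLASSES)

# Per FPN level, YOLOXHead predicts, for every feature-map location:
#   num_classes scores (cls_preds) + 4 box offsets (reg_preds) + 1 objectness (obj_preds)
channels_per_location = num_classes + 4 + 1

print(num_classes, channels_per_location)  # 7 12
```

If your conversion script keeps a different subset of KITTI classes, `num_classes` in the experiment config must match, otherwise the `cls_preds` layers will be built with the wrong channel count.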
2024-02-05 23:08:17 | INFO     | yolox.core.trainer:203 - ---> start train epoch1
2024-02-05 23:08:22 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 10/250, mem: 2730Mb, iter_time: 0.523s, data_time: 0.001s, total_loss: 17.5, iou_loss: 4.7, l1_loss: 3.0, conf_loss: 8.8, cls_loss: 1.0, lr: 1.600e-07, size: 256, ETA: 10:54:05
2024-02-05 23:08:27 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 20/250, mem: 2730Mb, iter_time: 0.478s, data_time: 0.223s, total_loss: 13.0, iou_loss: 4.7, l1_loss: 2.3, conf_loss: 5.1, cls_loss: 1.0, lr: 6.400e-07, size: 96, ETA: 10:25:36
2024-02-05 23:08:37 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 30/250, mem: 4257Mb, iter_time: 0.950s, data_time: 0.001s, total_loss: 22.2, iou_loss: 4.7, l1_loss: 3.0, conf_loss: 13.5, cls_loss: 1.0, lr: 1.440e-06, size: 416, ETA: 13:32:40
2024-02-05 23:08:43 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 40/250, mem: 4259Mb, iter_time: 0.676s, data_time: 0.001s, total_loss: 21.0, iou_loss: 4.7, l1_loss: 3.0, conf_loss: 12.3, cls_loss: 1.0, lr: 2.560e-06, size: 416, ETA: 13:40:40
2024-02-05 23:08:45 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 50/250, mem: 4259Mb, iter_time: 0.210s, data_time: 0.001s, total_loss: 13.2, iou_loss: 4.8, l1_loss: 2.6, conf_loss: 5.0, cls_loss: 0.8, lr: 4.000e-06, size: 96, ETA: 11:48:55
2024-02-05 23:08:52 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 60/250, mem: 4259Mb, iter_time: 0.646s, data_time: 0.001s, total_loss: 18.1, iou_loss: 4.7, l1_loss: 2.6, conf_loss: 9.8, cls_loss: 1.0, lr: 5.760e-06, size: 320, ETA: 12:05:09
2024-02-05 23:08:59 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 70/250, mem: 4279Mb, iter_time: 0.714s, data_time: 0.027s, total_loss: 20.1, iou_loss: 4.7, l1_loss: 2.7, conf_loss: 11.7, cls_loss: 1.0, lr: 7.840e-06, size: 416, ETA: 12:28:52
2024-02-05 23:09:04 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 80/250, mem: 4279Mb, iter_time: 0.478s, data_time: 0.047s, total_loss: 14.9, iou_loss: 4.7, l1_loss: 2.8, conf_loss: 6.4, cls_loss: 1.1, lr: 1.024e-05, size: 224, ETA: 12:09:45
2024-02-05 23:09:06 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 90/250, mem: 4279Mb, iter_time: 0.184s, data_time: 0.001s, total_loss: 12.9, iou_loss: 4.7, l1_loss: 2.2, conf_loss: 5.1, cls_loss: 0.9, lr: 1.296e-05, size: 96, ETA: 11:14:07
2024-02-05 23:09:15 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 100/250, mem: 4279Mb, iter_time: 0.949s, data_time: 0.259s, total_loss: 20.1, iou_loss: 4.7, l1_loss: 2.7, conf_loss: 11.7, cls_loss: 1.1, lr: 1.600e-05, size: 352, ETA: 12:05:03
2024-02-05 23:09:18 | INFO     | yolox.core.trainer:261 - epoch: 1/300, iter: 110/250, mem: 4279Mb, iter_time: 0.248s, data_time: 0.001s, total_loss: 13.7, iou_loss: 4.7, l1_loss: 2.6, conf_loss: 5.4, cls_loss: 1.0, lr: 1.936e-05, size: 128, ETA: 11:27:11
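The `lr` column in these log lines grows quadratically (1.6e-07 at iter 10, 6.4e-07 at iter 20, 1.44e-06 at iter 30), which is YOLOX's squared warm-up schedule. A small sketch that reproduces the logged values, assuming YOLOX's default warm-up settings (`warmup_epochs=5`, `basic_lr_per_img=0.01/64`) together with a batch size of 16 and the 250 iterations per epoch visible in the log (the batch size is inferred from the numbers, not stated in the log):

```python
# Reproduce the quadratic warm-up behind the "lr" column in the training log.
def warmup_lr(cur_iter, batch_size=16, iters_per_epoch=250,
              warmup_epochs=5, basic_lr_per_img=0.01 / 64):
    """lr during warm-up: base_lr scaled by (progress through warm-up)^2."""
    base_lr = basic_lr_per_img * batch_size          # 0.0025 for batch 16
    warmup_total_iters = iters_per_epoch * warmup_epochs  # 1250 iters
    return base_lr * (cur_iter / warmup_total_iters) ** 2

# Matches the logged values at iters 10, 20, and 50.
print(f"{warmup_lr(10):.3e}")  # 1.600e-07
print(f"{warmup_lr(20):.3e}")  # 6.400e-07
print(f"{warmup_lr(50):.3e}")  # 4.000e-06
```

Seeing the learning rate this small at the start of epoch 1 is therefore expected and not a configuration error; the rate only reaches `base_lr` once the 5 warm-up epochs finish.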

That wraps up this walkthrough of training YOLOX on the KITTI dataset. Hopefully it proves useful to anyone setting up a similar pipeline!


