September 5 Keypoint Detection Study Notes: Human Skeleton Keypoint Detection, Bottom-Up

2023-11-10 14:59


Table of Contents

  • Preface
  • I. Stacked Hourglass Networks
    • 1. Hourglass Module
    • 2. Heat Map
  • II. Problems with Top-Down Approaches
  • III. Bottom-Up Approaches
    • 1. OpenPose
  • IV. OpenPose in Practice


Preface

These are my September 5 study notes on keypoint detection, covering bottom-up human skeleton keypoint detection in four chapters:

  • Stacked Hourglass Networks;
  • Problems with top-down approaches;
  • Bottom-up approaches;
  • OpenPose in practice.

I. Stacked Hourglass Networks

A stacked hourglass network (Newell et al., 2016) chains several symmetric encoder-decoder ("hourglass") modules end to end; heatmaps predicted after each module receive intermediate supervision, so keypoint estimates are refined stage by stage.

1. Hourglass Module

Each hourglass module first downsamples with pooling and convolutions to capture global context, then upsamples back to the input resolution, with skip connections between mirrored resolutions so that fine spatial detail is preserved.
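A minimal PyTorch sketch of the recursive hourglass structure. This is a simplification for illustration: plain two-layer residual blocks and nearest-neighbour upsampling stand in for the paper's bottleneck residual blocks, and all class and parameter names here are my own.

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Simplified residual block (the paper uses a 1x1-3x3-1x1 bottleneck)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.conv(x))

class Hourglass(nn.Module):
    """One hourglass: pool -> recurse at half resolution -> upsample,
    plus a skip branch kept at the current resolution."""
    def __init__(self, depth, ch):
        super().__init__()
        self.skip = Residual(ch)
        self.pool = nn.MaxPool2d(2)
        self.down = Residual(ch)
        self.inner = Hourglass(depth - 1, ch) if depth > 1 else Residual(ch)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
    def forward(self, x):
        return self.skip(x) + self.up(self.inner(self.down(self.pool(x))))

hg = Hourglass(depth=4, ch=64)          # a 64x64 input bottoms out at 4x4
out = hg(torch.randn(1, 64, 64, 64))    # spatial shape is preserved
```

A full stacked network runs several of these modules in sequence, predicting heatmaps after each one with a 1x1 convolution and attaching a loss at every stage (intermediate supervision).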

2. Heat Map

Instead of regressing coordinates directly, the network predicts one heatmap per keypoint: a 2D Gaussian centered on the ground-truth location. The loss is an MSE between predicted and target heatmaps, and the predicted coordinate is read off as the argmax of the corresponding heatmap.
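As a sketch of how such a target heatmap is typically built (the function name and the σ value here are illustrative, not from the original post):

```python
import numpy as np

def make_gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """Target heatmap for one keypoint: a 2D Gaussian centred at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

hm = make_gaussian_heatmap(64, 64, cx=20, cy=30)
# the peak sits exactly on the annotated keypoint (row 30, column 20)
print(np.unravel_index(hm.argmax(), hm.shape))  # (30, 20)
```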


II. Problems with Top-Down Approaches

  1. Slow: the single-person pose estimator must run once per detected person, so runtime grows with the number of people in the image;
  2. Temporal occlusion: when targets occlude each other, the person detector can miss people, and the keypoints of a missed person are lost entirely.

III. Bottom-Up Approaches

1. OpenPose

OpenPose's network splits into two branches. One branch predicts a confidence map per keypoint: each confidence map is a grayscale image, and the location of its maximum value is the most probable position of that body keypoint.
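In a multi-person image, one confidence map contains one peak per person, so peaks are extracted as local maxima rather than a single argmax. A NumPy sketch (the function name and threshold are illustrative):

```python
import numpy as np

def find_peaks(conf_map, thresh=0.3):
    """Return (x, y) of every strict local maximum above thresh:
    one peak per person for this body part."""
    p = np.pad(conf_map, 1, mode='constant', constant_values=-np.inf)
    c = p[1:-1, 1:-1]
    is_peak = ((c > p[:-2, 1:-1]) & (c > p[2:, 1:-1]) &
               (c > p[1:-1, :-2]) & (c > p[1:-1, 2:]) &
               (c > thresh))
    ys, xs = np.nonzero(is_peak)
    return sorted(zip(xs.tolist(), ys.tolist()))

# two people -> two Gaussian blobs in the same part's confidence map
ys, xs = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 8.0)
conf = np.maximum(blob(20, 30), blob(50, 10))
print(find_peaks(conf))  # [(20, 30), (50, 10)]
```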

The other branch predicts part affinity fields (PAFs), 2D vector fields that encode the degree of association between pairs of keypoints.

  • Parts detection: keypoint candidates are extracted as local maxima of the confidence maps.

  • Parts association:
    The ground-truth part affinity field for limb $c$ of person $k$ assigns to each pixel $\textbf{p}$ the limb's unit direction vector if $\textbf{p}$ lies on the limb, and zero otherwise:
    $$\textbf{L}^*_{c, k}(\textbf{p}) = \begin{cases} \textbf{v} & \text{if } \textbf{p} \text{ on limb } c, k \\ \textbf{0} & \text{otherwise} \end{cases}$$
    $$\textbf{v} = \frac{\textbf{x}_{j_2, k} - \textbf{x}_{j_1, k}}{\lVert \textbf{x}_{j_2, k} - \textbf{x}_{j_1, k} \rVert_2}$$
    A pixel $\textbf{p}$ is "on" the limb when
    $$0 \le \textbf{v} \cdot (\textbf{p} - \textbf{x}_{j_1, k}) \le l_{c, k} \quad \text{and} \quad |\textbf{v}_\perp \cdot (\textbf{p} - \textbf{x}_{j_1, k})| \le \sigma_l$$
    where the limb width $\sigma_l$ is measured in pixels, the limb length is $l_{c, k} = \lVert \textbf{x}_{j_2, k} - \textbf{x}_{j_1, k} \rVert_2$, and $\textbf{v}_\perp$ is the vector perpendicular to $\textbf{v}$.
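The definition above transcribes almost directly into NumPy. This single-limb sketch uses illustrative names of my own choosing:

```python
import numpy as np

def limb_paf(h, w, x1, y1, x2, y2, sigma_l=1.0):
    """Ground-truth PAF for one limb c of person k:
    the unit vector v at every pixel p on the limb, zero elsewhere."""
    v = np.array([x2 - x1, y2 - y1], dtype=float)
    l_ck = np.linalg.norm(v)                 # limb length l_{c,k}
    v /= l_ck
    v_perp = np.array([-v[1], v[0]])         # vector perpendicular to v
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - x1, ys - y1
    along = v[0] * dx + v[1] * dy            # v . (p - x_{j1,k})
    across = np.abs(v_perp[0] * dx + v_perp[1] * dy)
    on_limb = (along >= 0) & (along <= l_ck) & (across <= sigma_l)
    paf = np.zeros((2, h, w))
    paf[0][on_limb] = v[0]
    paf[1][on_limb] = v[1]
    return paf

# horizontal limb from (10, 20) to (30, 20): v = (1, 0) along the segment
paf = limb_paf(64, 64, 10, 20, 30, 20)
```

During inference, OpenPose scores a candidate pair of keypoints by integrating the predicted PAF along the line segment between them; a high line integral means the two detections belong to the same limb.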

IV. OpenPose in Practice

The training script train_VGG19.py:

import argparse
import time
import os
import numpy as np
from collections import OrderedDict

import torch
import torch.nn as nn
from torch.optim.lr_scheduler import ReduceLROnPlateau

from lib.network.rtpose_vgg import get_model, use_vgg
from lib.datasets import coco, transforms, datasets
from lib.config import update_config

DATA_DIR = '/data/coco'

ANNOTATIONS_TRAIN = [os.path.join(DATA_DIR, 'annotations', item)
                     for item in ['person_keypoints_train2017.json']]
ANNOTATIONS_VAL = os.path.join(DATA_DIR, 'annotations', 'person_keypoints_val2017.json')
IMAGE_DIR_TRAIN = os.path.join(DATA_DIR, 'images/train2017')
IMAGE_DIR_VAL = os.path.join(DATA_DIR, 'images/val2017')


def train_cli(parser):
    group = parser.add_argument_group('dataset and loader')
    group.add_argument('--train-annotations', default=ANNOTATIONS_TRAIN)
    group.add_argument('--train-image-dir', default=IMAGE_DIR_TRAIN)
    group.add_argument('--val-annotations', default=ANNOTATIONS_VAL)
    group.add_argument('--val-image-dir', default=IMAGE_DIR_VAL)
    group.add_argument('--pre-n-images', default=8000, type=int,
                       help='number of images to sample for pretraining')
    group.add_argument('--n-images', default=None, type=int,
                       help='number of images to sample')
    group.add_argument('--duplicate-data', default=None, type=int,
                       help='duplicate data')
    group.add_argument('--loader-workers', default=8, type=int,
                       help='number of workers for data loading')
    group.add_argument('--batch-size', default=72, type=int,
                       help='batch size')
    group.add_argument('--lr', '--learning-rate', default=1., type=float,
                       metavar='LR', help='initial learning rate')
    group.add_argument('--momentum', default=0.9, type=float, metavar='M',
                       help='momentum')
    group.add_argument('--weight-decay', '--wd', default=0.000, type=float,
                       metavar='W', help='weight decay (default: 1e-4)')
    group.add_argument('--nesterov', dest='nesterov', default=True, type=bool)
    group.add_argument('--print_freq', default=20, type=int, metavar='N',
                       help='number of iterations to print the training statistics')


def train_factory(args, preprocess, target_transforms):
    train_datas = [datasets.CocoKeypoints(
        root=args.train_image_dir,
        annFile=item,
        preprocess=preprocess,
        image_transform=transforms.image_transform_train,
        target_transforms=target_transforms,
        n_images=args.n_images,
    ) for item in args.train_annotations]
    train_data = torch.utils.data.ConcatDataset(train_datas)
    train_loader = torch.utils.data.DataLoader(
        train_data, batch_size=args.batch_size, shuffle=True,
        pin_memory=args.pin_memory, num_workers=args.loader_workers, drop_last=True)

    val_data = datasets.CocoKeypoints(
        root=args.val_image_dir,
        annFile=args.val_annotations,
        preprocess=preprocess,
        image_transform=transforms.image_transform_train,
        target_transforms=target_transforms,
        n_images=args.n_images,
    )
    val_loader = torch.utils.data.DataLoader(
        val_data, batch_size=args.batch_size, shuffle=False,
        pin_memory=args.pin_memory, num_workers=args.loader_workers, drop_last=True)

    return train_loader, val_loader, train_data, val_data


def cli():
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    train_cli(parser)
    parser.add_argument('-o', '--output', default=None,
                        help='output file')
    parser.add_argument('--stride-apply', default=1, type=int,
                        help='apply and reset gradients every n batches')
    parser.add_argument('--epochs', default=75, type=int,
                        help='number of epochs to train')
    parser.add_argument('--freeze-base', default=0, type=int,
                        help='number of epochs to train with frozen base')
    parser.add_argument('--pre-lr', type=float, default=1e-4,
                        help='pre learning rate')
    parser.add_argument('--update-batchnorm-runningstatistics',
                        default=False, action='store_true',
                        help='update batch norm running statistics')
    parser.add_argument('--square-edge', default=368, type=int,
                        help='square edge of input images')
    parser.add_argument('--ema', default=1e-3, type=float,
                        help='ema decay constant')
    parser.add_argument('--debug-without-plots', default=False, action='store_true',
                        help='enable debug but dont plot')
    parser.add_argument('--disable-cuda', action='store_true',
                        help='disable CUDA')
    parser.add_argument('--model_path', default='./network/weight/', type=str,
                        metavar='DIR', help='path to where the model saved')
    args = parser.parse_args()

    # add args.device
    args.device = torch.device('cpu')
    args.pin_memory = False
    if not args.disable_cuda and torch.cuda.is_available():
        args.device = torch.device('cuda')
        args.pin_memory = True

    return args


args = cli()

print("Loading dataset...")
# load train data
preprocess = transforms.Compose([
    transforms.Normalize(),
    transforms.RandomApply(transforms.HFlip(), 0.5),
    transforms.RescaleRelative(),
    transforms.Crop(args.square_edge),
    transforms.CenterPad(args.square_edge),
])
train_loader, val_loader, train_data, val_data = train_factory(args, preprocess, target_transforms=None)


def build_names():
    names = []
    for j in range(1, 7):
        for k in range(1, 3):
            names.append('loss_stage%d_L%d' % (j, k))
    return names


def get_loss(saved_for_loss, heat_temp, vec_temp):
    names = build_names()
    saved_for_log = OrderedDict()
    criterion = nn.MSELoss(reduction='mean').cuda()
    total_loss = 0
    for j in range(6):
        pred1 = saved_for_loss[2 * j]
        pred2 = saved_for_loss[2 * j + 1]
        # Compute losses
        loss1 = criterion(pred1, vec_temp)
        loss2 = criterion(pred2, heat_temp)
        total_loss += loss1
        total_loss += loss2
        # Get value from Variable and save for log
        saved_for_log[names[2 * j]] = loss1.item()
        saved_for_log[names[2 * j + 1]] = loss2.item()

    saved_for_log['max_ht'] = torch.max(saved_for_loss[-1].data[:, 0:-1, :, :]).item()
    saved_for_log['min_ht'] = torch.min(saved_for_loss[-1].data[:, 0:-1, :, :]).item()
    saved_for_log['max_paf'] = torch.max(saved_for_loss[-2].data).item()
    saved_for_log['min_paf'] = torch.min(saved_for_loss[-2].data).item()

    return total_loss, saved_for_log


def train(train_loader, model, optimizer, epoch):
    batch_time = AverageMeter()
    data_time = AverageMeter()
    losses = AverageMeter()
    meter_dict = {}
    for name in build_names():
        meter_dict[name] = AverageMeter()
    meter_dict['max_ht'] = AverageMeter()
    meter_dict['min_ht'] = AverageMeter()
    meter_dict['max_paf'] = AverageMeter()
    meter_dict['min_paf'] = AverageMeter()

    # switch to train mode
    model.train()

    end = time.time()
    for i, (img, heatmap_target, paf_target) in enumerate(train_loader):
        # measure data loading time
        # writer.add_text('Text', 'text logged at step:' + str(i), i)
        # for name, param in model.named_parameters():
        #     writer.add_histogram(name, param.clone().cpu().data.numpy(), i)
        data_time.update(time.time() - end)
        img = img.cuda()
        heatmap_target = heatmap_target.cuda()
        paf_target = paf_target.cuda()
        # compute output
        _, saved_for_loss = model(img)
        total_loss, saved_for_log = get_loss(saved_for_loss, heatmap_target, paf_target)
        for name, _ in meter_dict.items():
            meter_dict[name].update(saved_for_log[name], img.size(0))
        losses.update(total_loss.item(), img.size(0))

        # compute gradient and do SGD step
        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()

        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()
        if i % args.print_freq == 0:
            print_string = 'Epoch: [{0}][{1}/{2}]\t'.format(epoch, i, len(train_loader))
            print_string += 'Data time {data_time.val:.3f} ({data_time.avg:.3f})\t'.format(data_time=data_time)
            print_string += 'Loss {loss.val:.4f} ({loss.avg:.4f})'.format(loss=losses)
            for name, value in meter_dict.items():
                print_string += '{name}: {loss.val:.4f} ({loss.avg:.4f})\t'.format(name=name, loss=value)
            print(print_string)
    return losses.avg


def validate(val_loader, model, epoch):
    batch_time = AverageMeter()
    data_time = AverageMeter()
    losses = AverageMeter()
    meter_dict = {}
    for name in build_names():
        meter_dict[name] = AverageMeter()
    meter_dict['max_ht'] = AverageMeter()
    meter_dict['min_ht'] = AverageMeter()
    meter_dict['max_paf'] = AverageMeter()
    meter_dict['min_paf'] = AverageMeter()

    # switch to evaluation mode
    model.eval()

    end = time.time()
    for i, (img, heatmap_target, paf_target) in enumerate(val_loader):
        # measure data loading time
        data_time.update(time.time() - end)
        img = img.cuda()
        heatmap_target = heatmap_target.cuda()
        paf_target = paf_target.cuda()
        # compute output
        _, saved_for_loss = model(img)
        total_loss, saved_for_log = get_loss(saved_for_loss, heatmap_target, paf_target)
        # for name, _ in meter_dict.items():
        #     meter_dict[name].update(saved_for_log[name], img.size(0))
        losses.update(total_loss.item(), img.size(0))

        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()
        if i % args.print_freq == 0:
            print_string = 'Epoch: [{0}][{1}/{2}]\t'.format(epoch, i, len(val_loader))
            print_string += 'Data time {data_time.val:.3f} ({data_time.avg:.3f})\t'.format(data_time=data_time)
            print_string += 'Loss {loss.val:.4f} ({loss.avg:.4f})'.format(loss=losses)
            for name, value in meter_dict.items():
                print_string += '{name}: {loss.val:.4f} ({loss.avg:.4f})\t'.format(name=name, loss=value)
            print(print_string)
    return losses.avg


class AverageMeter(object):
    """Computes and stores the average and current value"""
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count


# model
model = get_model(trunk='vgg19')
model = torch.nn.DataParallel(model).cuda()
# load pretrained
use_vgg(model)

# Fix the VGG weights first, and then the weights will be released
for i in range(20):
    for param in model.module.model0[i].parameters():
        param.requires_grad = False

trainable_vars = [param for param in model.parameters() if param.requires_grad]
optimizer = torch.optim.SGD(trainable_vars, lr=args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay,
                            nesterov=args.nesterov)

for epoch in range(5):
    # train for one epoch
    train_loss = train(train_loader, model, optimizer, epoch)
    # evaluate on validation set
    val_loss = validate(val_loader, model, epoch)

# Release all weights
for param in model.module.parameters():
    param.requires_grad = True

trainable_vars = [param for param in model.parameters() if param.requires_grad]
optimizer = torch.optim.SGD(trainable_vars, lr=args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay,
                            nesterov=args.nesterov)
lr_scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.8, patience=5,
                                 verbose=True, threshold=0.0001, threshold_mode='rel',
                                 cooldown=3, min_lr=0, eps=1e-08)

best_val_loss = np.inf
model_save_filename = './network/weight/best_pose.pth'
for epoch in range(5, args.epochs):
    # train for one epoch
    train_loss = train(train_loader, model, optimizer, epoch)
    # evaluate on validation set
    val_loss = validate(val_loader, model, epoch)
    lr_scheduler.step(val_loss)

    is_best = val_loss < best_val_loss
    best_val_loss = min(val_loss, best_val_loss)
    if is_best:
        torch.save(model.state_dict(), model_save_filename)




http://www.chinasem.cn/article/383436
