Learning‐Based Animation of Clothing for Virtual Try‐On. Santesteban, Igor et al. EG2019


Santesteban, Igor, Miguel A. Otaduy, and Dan Casas. “Learning‐Based Animation of Clothing for Virtual Try‐On.” Computer Graphics Forum. Vol. 38. No. 2. 2019.

1. Motivation

Key observation: tight garments closely follow the deformation of the underlying body.

The method is inspired by Pose Space Deformation (PSD) and subsequent data-driven human body models (e.g., SMPL).

Notation

$M_b$: a deformed human body mesh with shape parameters $\beta$ and pose parameters $\theta$ (joint angles).

$M_c$: a deformed garment mesh worn by the body mesh.

$S_c(\beta,\theta)$: the physically simulated garment on a body with shape $\beta$ and pose $\theta$. Approximating this simulation is the goal.

2. Main work

2.1 Cloth model
  • Body deformation model (PSD-style):

    $M_b(\beta,\theta) = W(T_b(\beta,\theta),\beta,\theta,W_b)$

    where $W(\cdot)$ is a skinning function and $T_b(\beta,\theta)\in \mathbb{R}^{3\times V_b}$ is an unposed body mesh with $V_b$ vertices.

    $T_b(\beta,\theta)$ may be obtained by deforming a template body mesh $\overline{T}_b$ to account for body shape and pose-based surface corrections (see, e.g., SMPL [LMR∗15]).

  • A similar pipeline for cloth deformation:

    $\overline{T}_c\in \mathbb{R}^{3\times V_c}$ is a template cloth mesh with $V_c$ vertices.

    1. Compute an unposed cloth mesh $T_c(\beta,\theta)$:

       $T_c(\beta,\theta)=\overline{T}_c + R_G(\beta) + R_L(\beta,\theta)$

       where $R_G(\cdot)$ and $R_L(\cdot)$ are two nonlinear regressors: $R_G$ takes the body shape parameters as input, while $R_L$ takes both shape and pose parameters.

    2. Apply the skinning function $W(\cdot)$ to produce the full cloth deformation:

       $M_c(\beta,\theta) = W(T_c(\beta,\theta),\beta,\theta,W_c)$

       The skinning weight matrix $W_c$ is built by projecting each vertex of the template cloth mesh onto the closest triangle of the template body mesh and interpolating the body skinning weights $W_b$.

    3. A postprocessing step makes the cloth output collision-free by pushing any penetrating cloth vertices outside their closest body primitive (see the sketch below).
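To make the pipeline concrete, here is a minimal NumPy sketch of steps 1–3, with linear blend skinning standing in for the skinning function $W(\cdot)$. All names, shapes, and the per-vertex collision push are illustrative assumptions, not the authors' code.

```python
import numpy as np

def unposed_cloth(T_c_bar, beta, theta, R_G, R_L):
    """Step 1: T_c(beta, theta) = T_c_bar + R_G(beta) + R_L(beta, theta).
    T_c_bar: (Vc, 3) template cloth mesh; R_G, R_L: regressor callables."""
    return T_c_bar + R_G(beta) + R_L(beta, theta)

def skinning(verts, bone_transforms, weights):
    """Step 2: linear blend skinning standing in for W(.).
    verts: (V, 3), bone_transforms: (J, 4, 4), weights: (V, J)."""
    verts_h = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    blended = np.einsum('vj,jab->vab', weights, bone_transforms)         # per-vertex blended transform
    posed = np.einsum('vab,vb->va', blended, verts_h)                    # apply it
    return posed[:, :3]

def push_outside(verts, closest_pts, normals, eps=1e-3):
    """Step 3: push penetrating cloth vertices outside the closest body
    primitive along that primitive's outward normal (simplified)."""
    d = np.einsum('vi,vi->v', verts - closest_pts, normals)  # signed distance
    inside = d < eps
    verts = verts.copy()
    verts[inside] += (eps - d[inside])[:, None] * normals[inside]
    return verts
```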

2.2 Garment Fit Regressor (static)

A nonlinear regressor $R_G: \mathbb{R}^{|\beta|}\rightarrow \mathbb{R}^{3\times V_c}$

Input: the body shape parameters $\beta$

Output: per-vertex displacements $\Delta_G$

Ground truth: $\Delta^{GT}_G=\rho(S_c(\beta,0))-\overline{T}_c$

where $S_c(\beta,0)$ is a simulation of the garment on a body with shape $\beta$ in the rest pose $\theta=0$, and $\rho(\cdot)$ is a smoothing operator (so the fit regressor learns only the smooth, shape-dependent fit, not wrinkles).

Architecture: a multilayer perceptron (MLP) with a single hidden layer

Loss: mean squared error (MSE)
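A minimal Keras sketch of such a fit regressor. The 20-unit hidden layer, Adam with learning rate 0.0001, and the MSE loss come from the implementation details in Section 2.5; the shape dimensionality, vertex count, and ReLU activation are assumptions.

```python
import tensorflow as tf

NUM_SHAPE_PARAMS = 10     # |beta|; SMPL-style assumption
NUM_CLOTH_VERTS = 4424    # V_c; placeholder value

# beta -> per-vertex fit displacements Delta_G.
fit_regressor = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_SHAPE_PARAMS,)),
    tf.keras.layers.Dense(20, activation="relu"),  # single hidden layer, 20 units (Sec. 2.5)
    tf.keras.layers.Dense(3 * NUM_CLOTH_VERTS),    # flattened 3 x V_c output
    tf.keras.layers.Reshape((NUM_CLOTH_VERTS, 3)),
])
fit_regressor.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
```

Training pairs would then be $(\beta, \Delta^{GT}_G)$ computed as above.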

2.3 Garment Wrinkle Regressor (dynamic)

A nonlinear regressor $R_L: \mathbb{R}^{|\beta|+|\theta|}\rightarrow \mathbb{R}^{3\times V_c}$

Input: body shape $\beta$ and pose $\theta$

Output: per-vertex displacements $\Delta_L$

Ground truth: $\Delta^{GT}_L=W^{-1}(S_c(\beta,\theta),\beta,\theta,W_c)-\overline{T}_c-\Delta_G$

i.e., the deviation between the simulated cloth on the moving body and the smooth static fit, expressed in the body's rest pose: $W^{-1}(\cdot)$ unposes the simulation, and subtracting $\overline{T}_c$ and $\Delta_G$ leaves only the pose-dependent wrinkle displacements.

Architecture: a recurrent neural network (RNN) based on Gated Recurrent Units (GRU), so that wrinkle dynamics can be captured

Loss: mean squared error (MSE)
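A matching Keras sketch of the wrinkle regressor. The 1500 GRU units, 20% dropout, and MSE loss come from the implementation details in Section 2.5; the pose dimensionality, vertex count, and output head are assumptions.

```python
import tensorflow as tf

NUM_SHAPE_PARAMS = 10     # |beta|; assumption
NUM_POSE_PARAMS = 72      # |theta|; SMPL-style joint-angle vector, assumption
NUM_CLOTH_VERTS = 4424    # V_c; placeholder value

# Sequences of (beta, theta) -> per-frame wrinkle displacements Delta_L.
wrinkle_regressor = tf.keras.Sequential([
    tf.keras.Input(shape=(None, NUM_SHAPE_PARAMS + NUM_POSE_PARAMS)),
    tf.keras.layers.GRU(1500, return_sequences=True),  # single recurrent layer (Sec. 2.5)
    tf.keras.layers.Dropout(0.2),                      # disable 20% of hidden neurons
    tf.keras.layers.Dense(3 * NUM_CLOTH_VERTS),        # per-frame flattened 3 x V_c output
])
wrinkle_regressor.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
```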

2.4 Dataset
  • One garment (a T-shirt).

  • Parametric human model (SMPL) with 17 training body shapes: for each of the first 4 principal components of $\beta$, generate 4 samples, plus the nominal shape $\beta=0$ (see the sampling sketch after this list).

  • Animation: 56 character motion sequences from the CMU motion-capture dataset, 7117 frames in total (at 30 fps, downsampled from the original 120 fps CMU data). Each of the 56 sequences is simulated on each of the 17 body shapes, all wearing the same garment mesh (a T-shirt with 8710 triangles).

  • ARCSim simulation settings:

    Material: an interlock knit with 60% cotton and 40% polyester

    Time step: 3.33 ms (0.0033 s)

    Output: one frame stored every 10 time steps (i.e., 30 fps), 120989 output frames in total

    • How to get a collision-free initial state for the simulations?

      1. Manually pre-position the garment mesh once on the template body mesh $\overline{T}_b$.

      2. Run the simulation to let the cloth relax; this relaxed state serves as the initial state for all subsequent simulations.

      3. Apply the smoothing operator $\rho(\cdot)$ to this initial state to obtain the template cloth mesh $\overline{T}_c$.

  • Ground-truth garment fit data

    Interpolate the shape parameters from the template body to the target shape while simulating the garment from its collision-free initial state. Once the body reaches the target shape, let the cloth rest, then compute the ground-truth fit displacements $\Delta^{GT}_G$.

  • Ground-truth garment wrinkle data

    Interpolate both shape and pose parameters from the template body to the shape and initial pose of each animation, let the cloth rest, and then simulate the sequence.
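As referenced above, a small sketch of how the 17 training shapes could be enumerated. The paper only states 4 samples per principal component plus the nominal shape; the concrete sampled values here are an assumption.

```python
import numpy as np

NUM_SHAPE_PARAMS = 10                  # |beta|; assumption
SAMPLES = [-2.0, -1.0, 1.0, 2.0]       # 4 samples per component; values assumed

shapes = [np.zeros(NUM_SHAPE_PARAMS)]  # nominal shape, beta = 0
for pc in range(4):                    # first 4 principal components of beta
    for value in SAMPLES:
        beta = np.zeros(NUM_SHAPE_PARAMS)
        beta[pc] = value
        shapes.append(beta)

assert len(shapes) == 17               # 4 * 4 + 1 = 17 training body shapes
```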

2.5 Network Implementation and Training

Implemented in TensorFlow.

The MLP for garment fit regression contains a single hidden layer with 20 neurons.

The GRU network for garment wrinkle regression contains a single hidden layer with 1500 neurons.

Dropout regularization: randomly disable 20% of the hidden neurons on each optimization step.

Optimizer: Adam, run for 2000 epochs with an initial learning rate of 0.0001.

Each network is trained separately.

  • The garment fit MLP network

    Trained on the ground-truth data from all 17 body shapes.

  • The garment wrinkle GRU network

    52 animation sequences for training, 4 held out for testing.

    Batch size: 128.

    Speed-up: the error gradient is computed with Truncated Backpropagation Through Time
    (TBPTT), limited to 90 time steps (see the sketch below). TBPTT is also a standard trick when training LSTMs.
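A hedged sketch of TBPTT with the paper's 90-step limit, written against standard TensorFlow/Keras GRU APIs. It illustrates the technique only; it is not the authors' training code, and the dimensions are placeholders.

```python
import tensorflow as tf

TBPTT_STEPS = 90  # truncation window from the paper

# Recurrent core and output head (sizes follow Sec. 2.5; dims are assumptions).
gru = tf.keras.layers.GRU(1500, return_sequences=True, return_state=True)
head = tf.keras.layers.Dense(3 * 4424)   # 3 * V_c; V_c is a placeholder value
optimizer = tf.keras.optimizers.Adam(1e-4)

def tbptt_pass(xs, ys):
    """One truncated-BPTT pass over a batch of sequences.
    xs: (batch, T, |beta|+|theta|), ys: (batch, T, 3*V_c).
    The GRU state is carried across 90-frame windows, but gradients
    are cut at the window boundaries."""
    state = None
    loss = None
    for start in range(0, xs.shape[1], TBPTT_STEPS):
        x = xs[:, start:start + TBPTT_STEPS]
        y = ys[:, start:start + TBPTT_STEPS]
        with tf.GradientTape() as tape:
            h, state = gru(x, initial_state=state)
            loss = tf.reduce_mean(tf.square(head(h) - y))   # MSE
        variables = gru.trainable_variables + head.trainable_variables
        optimizer.apply_gradients(zip(variables, tape.gradient(loss, variables)))
        state = tf.stop_gradient(state)   # truncate gradients between windows
    return loss
```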

3. Evaluation

  • Runtime Performance

  • Quantitative Evaluation
    Linear vs. nonlinear regression
    Generalization to new body shapes
    Generalization to new body poses: CMU sequences 01_01 and 55_27

  • Qualitative Evaluation

    Clothing deformations produced by the approach on a static pose while the body shape changes over time.


DRAPE approximates the deformation of the garment by scaling it such that it fits the target shape, which produces plausible but unrealistic results. In contrast, our method deforms the garment in a realistic manner.
ClothCap’s retargeting lacks realism because cloth deformations are simply copied across different shapes. In contrast, our method produces realistic pose- and shape-dependent deformations.

  • Generalization to new poses (01_01)


4. Future work

  • Generalize to multiple garments (with different materials)

  • Collisions (add low-level collision constraints to the network)

  • Recover fine detail: the current method smooths excessively high-frequency wrinkles

  • Loose garments (e.g., dresses)
