React + three.js: Face Motion Capture Synced to a 3D Model's Expressions

This article walks through implementing face motion capture with React and three.js and syncing the captured expressions to a 3D model. I hope it serves as a useful reference for developers tackling the same problem.

Series

  1. React: Using three.js to Load a glTF 3D Model | three.js Primer
  2. React + three.js: Skeleton Binding for a 3D Model
  3. React + three.js: Facial Expression Control for a 3D Model
  4. React + three.js: Face Motion Capture Synced to a 3D Model's Expressions

Example project (GitHub): https://github.com/couchette/simple-react-three-facial-expression-sync-demo
Example project (GitCode): https://gitcode.com/qq_41456316/simple-react-three-facial-expression-sync-demo


Table of Contents

  • Series
  • Preface
  • 1. Implementation Steps
    • 1. Create the project and set up the environment
    • 2. Create the component
    • 3. Use the component
    • 4. Run the project
  • Summary
    • Program Preview


Preface

In the previous article in this series, we explored how to control a model's facial expressions with three.js in React. Now we will go one step further and combine face landmark detection with that expression control to implement face motion capture and sync it to the 3D model's expressions. Let's see how to give your 3D model livelier, richer expressions!


1. Implementation Steps

1. Create the project and set up the environment

Create the project with create-react-app:

npx create-react-app simple-react-three-facial-expression-sync-demo
cd simple-react-three-facial-expression-sync-demo

Install three.js and the MediaPipe tasks-vision package:

npm i three
npm i @mediapipe/tasks-vision

Copy the contents of the example project's public directory (the model and related asset files) into the new project's public directory.
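If you have a local clone of the example repo, this copy step can be scripted. A minimal sketch, assuming the demo repo was cloned next to the new project (the relative path is an assumption; adjust it to wherever you cloned it):

```shell
# Assumed layout: the demo repo sits alongside the new project directory.
SRC=../simple-react-three-facial-expression-sync-demo/public
DEST=./public
mkdir -p "$DEST"
# Copy the model, wasm, and .task assets only if the source checkout exists.
if [ -d "$SRC" ]; then
  cp -r "$SRC"/. "$DEST"/
fi
```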

2. Create the component

Create a components folder under the src directory, and inside it create a ThreeContainer.js file.
First create the component and take a ref to the element it returns:

import * as THREE from "three";
import { useRef, useEffect } from "react";

function ThreeContainer() {
  const containerRef = useRef(null);
  const isContainerRunning = useRef(false);
  return <div ref={containerRef} />;
}

export default ThreeContainer;

Next, append the renderer element that three.js creates as a child of the returned element (see containerRef.current.appendChild(renderer.domElement)); all of the setup logic runs inside useEffect. The complete code is as follows:

import * as THREE from "three";
import { OrbitControls } from "three/addons/controls/OrbitControls.js";
import { RoomEnvironment } from "three/addons/environments/RoomEnvironment.js";
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";
import { KTX2Loader } from "three/addons/loaders/KTX2Loader.js";
import { MeshoptDecoder } from "three/addons/libs/meshopt_decoder.module.js";
import { GUI } from "three/addons/libs/lil-gui.module.min.js";
import { useRef, useEffect } from "react";

// Mediapipe
import { FaceLandmarker, FilesetResolver } from "@mediapipe/tasks-vision";

// Map MediaPipe blendshape category names to the model's morph target names
const blendshapesMap = {
  // '_neutral': '',
  browDownLeft: "browDown_L",
  browDownRight: "browDown_R",
  browInnerUp: "browInnerUp",
  browOuterUpLeft: "browOuterUp_L",
  browOuterUpRight: "browOuterUp_R",
  cheekPuff: "cheekPuff",
  cheekSquintLeft: "cheekSquint_L",
  cheekSquintRight: "cheekSquint_R",
  eyeBlinkLeft: "eyeBlink_L",
  eyeBlinkRight: "eyeBlink_R",
  eyeLookDownLeft: "eyeLookDown_L",
  eyeLookDownRight: "eyeLookDown_R",
  eyeLookInLeft: "eyeLookIn_L",
  eyeLookInRight: "eyeLookIn_R",
  eyeLookOutLeft: "eyeLookOut_L",
  eyeLookOutRight: "eyeLookOut_R",
  eyeLookUpLeft: "eyeLookUp_L",
  eyeLookUpRight: "eyeLookUp_R",
  eyeSquintLeft: "eyeSquint_L",
  eyeSquintRight: "eyeSquint_R",
  eyeWideLeft: "eyeWide_L",
  eyeWideRight: "eyeWide_R",
  jawForward: "jawForward",
  jawLeft: "jawLeft",
  jawOpen: "jawOpen",
  jawRight: "jawRight",
  mouthClose: "mouthClose",
  mouthDimpleLeft: "mouthDimple_L",
  mouthDimpleRight: "mouthDimple_R",
  mouthFrownLeft: "mouthFrown_L",
  mouthFrownRight: "mouthFrown_R",
  mouthFunnel: "mouthFunnel",
  mouthLeft: "mouthLeft",
  mouthLowerDownLeft: "mouthLowerDown_L",
  mouthLowerDownRight: "mouthLowerDown_R",
  mouthPressLeft: "mouthPress_L",
  mouthPressRight: "mouthPress_R",
  mouthPucker: "mouthPucker",
  mouthRight: "mouthRight",
  mouthRollLower: "mouthRollLower",
  mouthRollUpper: "mouthRollUpper",
  mouthShrugLower: "mouthShrugLower",
  mouthShrugUpper: "mouthShrugUpper",
  mouthSmileLeft: "mouthSmile_L",
  mouthSmileRight: "mouthSmile_R",
  mouthStretchLeft: "mouthStretch_L",
  mouthStretchRight: "mouthStretch_R",
  mouthUpperUpLeft: "mouthUpperUp_L",
  mouthUpperUpRight: "mouthUpperUp_R",
  noseSneerLeft: "noseSneer_L",
  noseSneerRight: "noseSneer_R",
  // '': 'tongueOut'
};

function ThreeContainer() {
  const containerRef = useRef(null);
  const isContainerRunning = useRef(false);

  useEffect(() => {
    // Guard against double invocation of the effect (React StrictMode)
    if (!isContainerRunning.current && containerRef.current) {
      isContainerRunning.current = true;
      init();
    }

    async function init() {
      const renderer = new THREE.WebGLRenderer({ antialias: true });
      renderer.setPixelRatio(window.devicePixelRatio);
      renderer.setSize(window.innerWidth, window.innerHeight);
      renderer.toneMapping = THREE.ACESFilmicToneMapping;
      containerRef.current.appendChild(renderer.domElement);

      const camera = new THREE.PerspectiveCamera(
        60,
        window.innerWidth / window.innerHeight,
        1,
        100
      );
      camera.position.z = 5;

      const scene = new THREE.Scene();
      scene.scale.x = -1; // mirror the scene so it matches the selfie camera

      const environment = new RoomEnvironment(renderer);
      const pmremGenerator = new THREE.PMREMGenerator(renderer);
      scene.background = new THREE.Color(0x666666);
      scene.environment = pmremGenerator.fromScene(environment).texture;

      const controls = new OrbitControls(camera, renderer.domElement);

      // Face
      let face, eyeL, eyeR;
      const eyeRotationLimit = THREE.MathUtils.degToRad(30);

      const ktx2Loader = new KTX2Loader()
        .setTranscoderPath("/basis/")
        .detectSupport(renderer);

      new GLTFLoader()
        .setKTX2Loader(ktx2Loader)
        .setMeshoptDecoder(MeshoptDecoder)
        .load("models/facecap.glb", (gltf) => {
          const mesh = gltf.scene.children[0];
          scene.add(mesh);

          const head = mesh.getObjectByName("mesh_2");
          head.material = new THREE.MeshNormalMaterial();

          face = mesh.getObjectByName("mesh_2");
          eyeL = mesh.getObjectByName("eyeLeft");
          eyeR = mesh.getObjectByName("eyeRight");

          // GUI
          const gui = new GUI();
          gui.close();

          const influences = head.morphTargetInfluences;
          for (const [key, value] of Object.entries(head.morphTargetDictionary)) {
            gui
              .add(influences, value, 0, 1, 0.01)
              .name(key.replace("blendShape1.", ""))
              .listen(influences);
          }

          renderer.setAnimationLoop(animation);
        });

      // Video Texture
      const video = document.createElement("video");
      // const texture = new THREE.VideoTexture(video);
      // texture.colorSpace = THREE.SRGBColorSpace;

      const geometry = new THREE.PlaneGeometry(1, 1);
      const material = new THREE.MeshBasicMaterial({
        // map: texture,
        depthWrite: false,
      });
      const videomesh = new THREE.Mesh(geometry, material);
      scene.add(videomesh);

      // MediaPipe
      const filesetResolver = await FilesetResolver.forVisionTasks(
        // "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.0/wasm"
        "fileset_resolver/wasm"
      );

      const faceLandmarker = await FaceLandmarker.createFromOptions(
        filesetResolver,
        {
          baseOptions: {
            modelAssetPath:
              // "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task",
              "ai_models/face_landmarker.task",
            delegate: "GPU",
          },
          outputFaceBlendshapes: true,
          outputFacialTransformationMatrixes: true,
          runningMode: "VIDEO",
          numFaces: 1,
        }
      );

      if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        navigator.mediaDevices
          .getUserMedia({ video: { facingMode: "user" } })
          .then(function (stream) {
            video.srcObject = stream;
            video.play();
          })
          .catch(function (error) {
            console.error("Unable to access the camera/webcam.", error);
          });
      }

      const transform = new THREE.Object3D();

      function animation() {
        if (video.readyState >= HTMLMediaElement.HAVE_METADATA) {
          const results = faceLandmarker.detectForVideo(video, Date.now());

          if (results.facialTransformationMatrixes.length > 0) {
            const facialTransformationMatrixes =
              results.facialTransformationMatrixes[0].data;

            transform.matrix.fromArray(facialTransformationMatrixes);
            transform.matrix.decompose(
              transform.position,
              transform.quaternion,
              transform.scale
            );

            const object = scene.getObjectByName("grp_transform");

            object.position.x = transform.position.x;
            object.position.y = transform.position.z + 40;
            object.position.z = -transform.position.y;

            object.rotation.x = transform.rotation.x;
            object.rotation.y = transform.rotation.z;
            object.rotation.z = -transform.rotation.y;
          }

          if (results.faceBlendshapes.length > 0) {
            const faceBlendshapes = results.faceBlendshapes[0].categories;

            // Morph targets do not exist on the eye meshes, so we map the eye
            // blendshape scores into rotation values instead
            const eyeScore = {
              leftHorizontal: 0,
              rightHorizontal: 0,
              leftVertical: 0,
              rightVertical: 0,
            };

            for (const blendshape of faceBlendshapes) {
              const categoryName = blendshape.categoryName;
              const score = blendshape.score;

              const index =
                face.morphTargetDictionary[blendshapesMap[categoryName]];

              if (index !== undefined) {
                face.morphTargetInfluences[index] = score;
              }

              // There are two blendshapes for movement on each axis (up/down, in/out).
              // Add one and subtract the other to get a final score in the -1 to 1 range.
              switch (categoryName) {
                case "eyeLookInLeft":
                  eyeScore.leftHorizontal += score;
                  break;
                case "eyeLookOutLeft":
                  eyeScore.leftHorizontal -= score;
                  break;
                case "eyeLookInRight":
                  eyeScore.rightHorizontal -= score;
                  break;
                case "eyeLookOutRight":
                  eyeScore.rightHorizontal += score;
                  break;
                case "eyeLookUpLeft":
                  eyeScore.leftVertical -= score;
                  break;
                case "eyeLookDownLeft":
                  eyeScore.leftVertical += score;
                  break;
                case "eyeLookUpRight":
                  eyeScore.rightVertical -= score;
                  break;
                case "eyeLookDownRight":
                  eyeScore.rightVertical += score;
                  break;
              }
            }

            eyeL.rotation.z = eyeScore.leftHorizontal * eyeRotationLimit;
            eyeR.rotation.z = eyeScore.rightHorizontal * eyeRotationLimit;
            eyeL.rotation.x = eyeScore.leftVertical * eyeRotationLimit;
            eyeR.rotation.x = eyeScore.rightVertical * eyeRotationLimit;
          }
        }

        videomesh.scale.x = video.videoWidth / 100;
        videomesh.scale.y = video.videoHeight / 100;

        renderer.render(scene, camera);
        controls.update();
      }

      window.addEventListener("resize", function () {
        camera.aspect = window.innerWidth / window.innerHeight;
        camera.updateProjectionMatrix();
        renderer.setSize(window.innerWidth, window.innerHeight);
      });
    }
  }, []);

  return <div ref={containerRef} />;
}

export default ThreeContainer;
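The eye handling is the least obvious part of the animation loop: each eye axis is driven by two opposing blendshapes, which the switch statement folds into one signed score. As a standalone sketch of that logic (the function name combineEyeScores is mine, not part of the demo), using the same sign conventions as the loop above:

```javascript
// Fold MediaPipe's paired eye blendshape scores (each in 0..1) into one
// signed -1..1 value per axis, mirroring the switch in the animation loop.
function combineEyeScores(scores) {
  const get = (name) => scores[name] ?? 0;
  return {
    leftHorizontal: get("eyeLookInLeft") - get("eyeLookOutLeft"),
    rightHorizontal: get("eyeLookOutRight") - get("eyeLookInRight"),
    leftVertical: get("eyeLookDownLeft") - get("eyeLookUpLeft"),
    rightVertical: get("eyeLookDownRight") - get("eyeLookUpRight"),
  };
}

// Each combined score is then scaled by the 30-degree rotation limit
// before being written to the eye mesh's rotation:
const eyeRotationLimit = (30 * Math.PI) / 180;
const r = combineEyeScores({ eyeLookInLeft: 0.8, eyeLookOutLeft: 0.1 });
console.log(r.leftHorizontal * eyeRotationLimit); // rotation in radians
```

Because both scores of a pair are rarely large at the same time, the combined value stays in a usable -1..1 range without extra clamping.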

3. Use the component

Update the contents of App.js as follows:

import "./App.css";
import ThreeContainer from "./components/ThreeContainer";

function App() {
  return (
    <div>
      <ThreeContainer />
    </div>
  );
}

export default App;

4. Run the project

Run the project with npm start. The model's expression now follows the face captured by the camera. The code that displays the captured video image is commented out; if you want to compare against the live image, re-enable the commented VideoTexture lines in ThreeContainer.js.


Summary

This article should give you a first look at implementing face motion capture and 3D model expression sync in React. If the topic interests you, try it yourself; you may be surprised by the results. Keep exploring what React and three.js can do together, and bring more fun and polish to your web apps.

Program Preview

{In preparation}

