AI-Powered Facial Animation - LiveLink with NVIDIA Audio2Face & Unreal Engine Metahuman

19,504 views

VJ SCHOOL

1 day ago

Comments: 71
@VJSCHOOL 1 year ago
Social Links: instagram.com/olegchomp/ twitter.com/oleg__chomp Discord: discord.com/invite/wNW8xkEjrf Blendshape list: "eyeBlinkLeft", "eyeLookDownLeft", "eyeLookInLeft", "eyeLookOutLeft", "eyeLookUpLeft", "eyeSquintLeft", "eyeWideLeft", "eyeBlinkRight", "eyeLookDownRight", "eyeLookInRight", "eyeLookOutRight", "eyeLookUpRight", "eyeSquintRight", "eyeWideRight", "jawForward", "jawLeft", "jawRight", "jawOpen", "mouthClose", "mouthFunnel", "mouthPucker", "mouthLeft", "mouthRight", "mouthSmileLeft", "mouthSmileRight", "mouthFrownLeft", "mouthFrownRight", "mouthDimpleLeft", "mouthDimpleRight", "mouthStretchLeft", "mouthStretchRight", "mouthRollLower", "mouthRollUpper", "mouthShrugLower", "mouthShrugUpper", "mouthPressLeft", "mouthPressRight", "mouthLowerDownLeft", "mouthLowerDownRight", "mouthUpperUpLeft", "mouthUpperUpRight", "browDownLeft", "browDownRight", "browInnerUp", "browOuterUpLeft", "browOuterUpRight", "cheekPuff", "cheekSquintLeft", "cheekSquintRight", "noseSneerLeft", "noseSneerRight", "tongueOut"
@mlgjman1837 6 months ago
Perfectly explained. Thank you!
@chanakhs 10 months ago
Thanks, this is great stuff. I purchased the advanced tutorial a few months ago and made good progress with its help. However, I'm trying to achieve something very essential in this workflow: playing a body animation in sync with the facial animation. I have an animation asset that I want to play while the face weights are being received by Live Link, or while the audio is playing in Unreal Engine. As essential and simple as it sounds, I haven't had a breakthrough yet. I've tried many things:
a) "isPlaying (face)" returns false, as no animation is being played on the face, just the ARKit face weights.
b) Binding an event to "On Live Link Updated" to play the animation was totally useless; it essentially acts like a tick in the editor and while playing, and does not indicate anything useful.
c) "On Controller Map Updated" does not fire when weights are being passed.
d) "Get Animation Frame Data" always returns false, obviously, as no animation is being played here.
Any pointers would be appreciated.
@hailongwang7549 1 year ago
Thank you very much!
@alekai2178 1 year ago
Great vid! How come the animation at the end is not as accurate as the example animation you showed at the very beginning? Thanks!
@jumpieva 1 year ago
This is a good step in the right direction, but it will in no way pass for cinematics or up-close dialogue. Is there a way to get better facials, or do we still have to use something like Character Creator?
@msk.filmsahealingworld1723 1 year ago
Amazing, thanks a lot, it's so helpful. Can we link Maya with Audio2Face and adjust blend shapes there, also with the UE Live Link?
@alysaliu8890 1 year ago
Sorry, but I don't see the comment with the list of blend shapes?
@yabuliao 1 year ago
Found out it was in another video: kzbin.info/www/bejne/qpjUaqBnfcx0iac
@TopoVizio 1 year ago
Hey, thanks for this! Is there documentation for maybe just using OSC to manipulate the facial animations and skipping all of the Omniverse stuff? I'm not trying to do anything with text-to-speech. Either way, great tutorial :)
@VJSCHOOL 1 year ago
Hey! If you're familiar with Python, look at the part about the facsSolver script. You'll find the list of blend shapes in the first comment; just pass values to those elements of the list. If you're familiar with TouchDesigner, create a Constant CHOP with channel names matching the items in the blendshape list (e.g. eyeBlinkLeft) and send them over OSC to UE.
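Not shown in the video, but as a rough illustration of the reply above, here is a stdlib-only sketch of sending per-blendshape weights to UE over OSC. The port 8000 and the one-float-per-address layout are assumptions; match whatever your UE OSC server and AnimBP actually expect.

```python
import socket
import struct

# A few names from the blendshape list in the pinned comment; extend to all 52.
BLENDSHAPES = ["eyeBlinkLeft", "jawOpen", "mouthSmileLeft"]

def osc_message(address: str, value: float) -> bytes:
    """Hand-encode a single-float OSC message: address, ",f" typetag, big-endian float."""
    def pad(b: bytes) -> bytes:
        # OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

def send_weights(weights, host="127.0.0.1", port=8000):
    """Send one frame of weights (0.0-1.0, same order as BLENDSHAPES) as UDP/OSC."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for name, value in zip(BLENDSHAPES, weights):
        sock.sendto(osc_message("/" + name, value), (host, port))
    sock.close()

send_weights([0.1, 0.6, 0.3])
```

The same idea works from inside A2F's facsSolver script (replace the printed values with `send_weights(...)`) or from any external tool.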
@matthiass976 4 months ago
Hey, I am using Audio2Face 2023.2.0 and my MetaHuman looks like she had a stroke. Does the blendshape list also work for this version of Audio2Face? And when I focus on Unreal Engine, my Audio2Face starts to lag. Is it just because of my PC specs?
@sinaasadiyan 6 months ago
Hello, we have developed an AI assistant and get audio responses using TTS. How can we connect the audio to A2F?
@AndreSantos-nx2yf 1 year ago
Hi! Is the tutorial sold on Boosty more complete, or is it the same as this video? I have a project to develop something like your example using GPT, but I don't have much knowledge about the workflow in this video, so if the tutorial is more complete that would be great.
@p1ontek 1 year ago
Hi, thanks for your tutorial. Unfortunately, at 3:25 my animation does not work for the ARKit asset.
@彭亮-i9j 7 months ago
Hi, thanks for the great video. Do you know how to send the driven audio from A2F to UE?
@michaelmichaelguo2001 1 year ago
I hope you can create a tool that converts the ARKit data exported as a JSON file to the CSV format exported by LiveLink Face. This way, we could use the LiveLink Importer plugin in UE to apply it to Daz characters without real-time recording.
@freenomon2466 1 year ago
Is there a way to access the MetaHuman facial rig controls in Blueprint (the facial rig controller sliders)? I'd rather map the incoming data myself using the MetaHuman facial controllers.
@calvinwayne3017 7 months ago
Does the audio work doing it this way? Currently the audio doesn't work for pixel streaming.
@LeeSurber 1 year ago
Excellent tutorial. I get to the part at 1:47, validating the script editor, and get error 22: Invalid Argument. Any ideas what I'm doing wrong?
@JacksonPopiel 11 months ago
How were you able to get audio to stream from the GPT TTS to Audio2Face?
@RuolinJia 1 year ago
Hello, after clicking on "setup blendshape solve", the blue head did not follow. I checked the facsSolver to ensure it was the same as in your video. Please tell me what to do.
@shanmukhram9209 6 months ago
Me too, I'm stuck at the same place! Anyone, help?
@郭柱江 1 year ago
Hello, how can I make it support automatic facial expressions?
@nikkho625 1 year ago
Is the morphing this poor on the lip sync because there are no Russian phonemes, or for some other reason? I usually come across a bit blunt; I have no intention of offending anyone, I'm genuinely interested in the answer. Overall, I'm already on my way to buying an iPhone, but I still hope to stumble upon something that doesn't limp along too badly.
@MiyaL-x5r 1 year ago
Hello, thank you for your tutorial. I ran into a problem in A2F: after writing the script according to the tutorial, the A2F Data Conversion tab of the Audio2Face interface disappears. What is the reason?
@VJSCHOOL 1 year ago
This happens if you made a mistake somewhere in the code, or the OSC library is not installed correctly.
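The reply above points at a missing OSC library as one failure mode. As a hypothetical sanity check (assuming the library in question is python-osc, which is common in A2F/OSC setups), you can verify the install from the A2F script editor before debugging anything else:

```python
def check_osc() -> bool:
    """Return True if the python-osc package imports cleanly, else print a hint."""
    try:
        from pythonosc.udp_client import SimpleUDPClient  # noqa: F401
        print("python-osc available")
        return True
    except ImportError:
        print("python-osc missing: install it into Omniverse's Python environment")
        return False

check_osc()
```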
@ShreyansKothari-j1k 1 year ago
Hi! Firstly, great video, thank you! I implemented all the steps in the video, including the last part about the OSC disconnecting fix. Despite that, for some reason the connection keeps breaking and then comes back after a couple of seconds. Is there a way to keep the OSC connection open indefinitely, without any disconnections?
@VJSCHOOL 1 year ago
Store the OSC server in a variable; it should solve the issue.
@REALVIBESTV 8 months ago
This was not real time; it was using pre-recorded audio.
@VJSCHOOL 8 months ago
You can change the audio player in A2F to streaming and use, for example, a microphone as input.
@REALVIBESTV 8 months ago
@VJSCHOOL Could you please provide a video tutorial on this topic? Also, it would be helpful if you could pace the tutorial slower. I find that many YouTubers go through the steps quickly, which can be challenging for beginners like me.
@VJSCHOOL 8 months ago
@REALVIBESTV Search for "audio2face livelink"; there are a lot of tutorials.
@갱스터깍지 1 year ago
Good
@DigitalDesignerAI 1 year ago
Does the updated Audio2Face LiveLink plugin now support this? I can't for the life of me figure out how to configure the Blueprint to work similarly to this.
@VJSCHOOL 1 year ago
With the new update you don't need to follow this tutorial.
@DigitalDesignerAI 1 year ago
@VJSCHOOL Thanks for the clarification. I'm still new to working with the Audio2Face application and plugin. Do you have any plans to upload an updated tutorial using the new plugin? I'm personally having issues linking everything up, and a tutorial would be super helpful. Thanks for everything! Awesome tutorial!
@shishi6631 1 year ago
Hi, thanks for your tutorial! It is really amazing to have this workflow. I ran into a problem in A2F: I cannot find the Data Conversion panel, and when I tried to turn it on in the toolbar it shows an error: AttributeError: 'AudiotoFaceExportExtention' object has no attribute '_exporter window'. Any clue for this? Thank you!
@jiazeli7762 1 year ago
Me neither.
@VJSCHOOL 1 year ago
Hey! I think it's better to ask on the A2F forum: forums.developer.nvidia.com/c/omniverse/apps/audio2face/
@VJSCHOOL 1 year ago
Quick update: sometimes this happens if you made a mistake in the facsSolver script.
@joshhuang4855 1 year ago
I am at the part about enabling the face, but the AnimGraph doesn't have a custom control function nor a Modify Curve node. I am not sure what is wrong.
@VJSCHOOL 1 year ago
You can create it manually.
@joshhuang4855 1 year ago
@VJSCHOOL I did that, but now it's giving me a warning saying it cannot copy the property (TMap -> TMap), and the animation is not working.
@rachmadagungpambudi7820 1 year ago
What if the facial animation Live Link were included in the sequence, so the facial animation is recorded into the sequence?
@VJSCHOOL 1 year ago
You can install the Omniverse plugin for UE and export the A2F animation as a USD file. After that you can animate it with Sequencer.
@aymenselmi8318 1 year ago
Hi, thanks for this video. I just have an issue: I followed every step you did, but when I click on Localhost in the software I get an error (failed to stat url omniverse://). How can I fix it? Thanks.
@VJSCHOOL 1 year ago
Try installing the Nucleus server. kzbin.info/www/bejne/hZ2Qk3aEd8ysfNk
@aymenselmi8318 1 year ago
@VJSCHOOL Yeah, I installed the Nucleus server, created an account, and everything is fine, but for some reason the A2F Data Conversion tab doesn't exist. I tried everything. Any clue?
@VJSCHOOL 1 year ago
@aymenselmi8318 Did you try opening the pre-made scenes?
@Chatmanstreasure 1 year ago
Do you have to have an iPhone for the Boosty project?
@VJSCHOOL 1 year ago
Nope. The Boosty tutorial uses AI to generate speech and facial animation. You will need a little bit of Python knowledge to follow it.
@rachmadagungpambudi7820 1 year ago
It doesn't work; I've tried, and it still doesn't work.
@VJSCHOOL 1 year ago
Which step doesn't work?
@rachmadagungpambudi7820 1 year ago
@VJSCHOOL I don't know. I've followed the steps on Audio2Face 2022.2.0 and Unreal 5.1.1, but can't connect Omniverse and Unreal. Is there something wrong? I feel I have followed the steps.
@VJSCHOOL 1 year ago
First, try to print the values. If the values are not printed, something is wrong with OSC: check the script in A2F or the OSC server in UE. If the values are printed, something is wrong with the AnimBP.
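To split the problem the way this reply suggests, a stdlib-only listener (hypothetical port 8000; use whatever your facsSolver script sends to) can stand in for UE and show whether any OSC packets arrive at all:

```python
import socket

def listen_once(host="127.0.0.1", port=8000, timeout=5.0):
    """Wait for one UDP packet and print it. If nothing arrives, the A2F script
    isn't sending; if packets arrive but the MetaHuman stays still, the problem
    is on the UE side (OSC server or AnimBP)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(4096)
        print(f"got {len(data)} bytes from {addr}: {data[:40]!r}")
        return data
    except socket.timeout:
        print("no OSC packets received: check the facsSolver script and port")
        return None
    finally:
        sock.close()
```

Run it while A2F is playing, with UE's OSC server stopped so the port is free.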
@rachmadagungpambudi7820 1 year ago
@VJSCHOOL I suspect it's the OSC server. What should I do?
@rachmadagungpambudi7820 1 year ago
@VJSCHOOL There is a message in the log: "LogOSC: Warning: Outer object not set. OSCServer may be collected garbage if not referenced." Why?
@berniemovlab8323 1 year ago
Is your blueprint on the Blueprint site?
@VJSCHOOL 1 year ago
Nope
@shorewiseapps2269 1 year ago
Hi Oleg, would you be interested in helping us with an AI avatar project as a consultant?
@hwk_un1te915 1 year ago
Great video 🎉 Everything works, but the OSC disconnects itself every 10 s and takes a while to reconnect, even though I did the checker fix. I'm using UE 5.1. Could you help me?
@VJSCHOOL 1 year ago
Try creating a variable with the OSC server; it should solve this.
@heresmynovel331 1 year ago
Can it be used in UE5? 🤔
@reznik63 1 year ago
Friend, the animations are crooked and the plugin is raw. Even though it takes more time, it's better and easier to do all of this with facelink. And you don't have to bother with English at all.
@VJSCHOOL 1 year ago
The idea is that you can generate the responses and voice with AI and use it for different tasks. The animation needs to be tuned for each head, rather than using the default values. Recording a face through facelink is a completely different use case.
@DiTo97 10 months ago
The UE avatar is lagging a bit, while the rest works fine. Any idea why? @VJSCHOOL