Thanks, this is great stuff. I purchased the advanced tutorial a few months ago and made good progress with it. However, I am trying to achieve something very essential in this workflow: playing a body animation in sync with the facial animation. I have an animation asset that I want to play whenever the face weights are being received over Live Link, or while the audio is playing in Unreal Engine. As essential and simple as it sounds, I haven't had a breakthrough yet. I've tried many things: a) "Is Playing (face)" returns false, since no animation is actually being played on the face; it's just the ARKit face weights. b) Binding an event to "On Live Link Updated" to play the animation was totally useless; it essentially acts like a tick in the editor and during play, and doesn't indicate anything useful. c) "On Controller Map Updated" doesn't fire while weights are being passed. d) "Get Animation Frame Data" always returns false, obviously, as no animation is being played here. Any pointers would be appreciated.
@hailongwang7549 1 year ago
Thank you very much!
@alekai2178 1 year ago
Great vid! How come the animation at the end is not as accurate as the example animation you showed at the very beginning? Thanks!
@jumpieva 1 year ago
This is a good step in the right direction, but it will in no way pass for cinematics or up-close dialogue. Is there a way to get better facials, or do we still have to use something like Character Creator?
@msk.filmsahealingworld1723 1 year ago
Amazing, thanks a lot, it's so helpful. Can we link Maya with Audio2Face and adjust the blend shapes there, also with the UE live link?
@alysaliu8890 1 year ago
Sorry, but I don't see the comment with the list of blend shapes?
@yabuliao 1 year ago
Found out it was in another video: kzbin.info/www/bejne/qpjUaqBnfcx0iac
@TopoVizio 1 year ago
Hey, thanks for this! Is there documentation for maybe just using OSC to manipulate the facial animations and skipping all of the Omniverse stuff? I'm not trying to do anything with text-to-speech. Either way, great tutorial :)
@VJSCHOOL 1 year ago
Hey! If you are familiar with Python, look at the part about the facsSolver script. You will find the list of blend shapes in the first comment; just pass values to those elements of the list. If you are familiar with TouchDesigner, create a constant CHOP with channel names matching the items in the blendshape list (e.g. eyeBlinkLeft) and send them to UE over OSC.
@matthiass976 4 months ago
Hey, I am using Audio2Face 2023.2.0 and my MetaHuman looks like she had a stroke. Does the blendshape list also work for this version of Audio2Face? And when I focus on Unreal Engine, my Audio2Face starts to lag. Is that just because of my PC specs?
@sinaasadiyan 6 months ago
Hello, we have developed an AI assistant and get the audio response via TTS. How can we connect the audio to A2F?
@AndreSantos-nx2yf 1 year ago
Hi! Is the tutorial sold on Boosty more complete, or is it the same as this video? I have a project to develop something like your example using GPT, but I don't have much knowledge about this workflow, so if the tutorial is more complete, that would be great.
@p1ontek 1 year ago
Hi, thanks for your tutorial. Unfortunately, at 3:25 my animation does not work for the ARKit asset.
@彭亮-i9j 7 months ago
Hi, thanks for the great video. Do you know how to send the driving audio from A2F to UE?
@michaelmichaelguo2001 1 year ago
I hope you can create a tool that converts an ARKit model exported as a JSON file to the CSV format exported by LiveLink Face. That way, we could use the LiveLink Importer plugin in UE to apply it to Daz characters without real-time recording.
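No such tool appears in the video, but the conversion itself is mostly bookkeeping. A hypothetical sketch, assuming the JSON is laid out as `{"frames": [{"eyeBlinkLeft": 0.1, ...}, ...]}`; the real LiveLink Face CSV has additional columns (timecode, blendshape count), so compare against an actual recording before relying on this.

```python
# Hypothetical converter: per-frame ARKit weights from JSON to a flat CSV.
# The input layout and output columns are assumptions, not the official
# LiveLink Face format -- check a real recording before use.
import csv
import json

def json_to_csv(json_path, csv_path):
    with open(json_path) as f:
        frames = json.load(f)["frames"]      # assumed layout
    names = sorted(frames[0])                # blendshape column order
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(names)               # header row
        for frame in frames:
            # one row per frame; missing channels default to 0.0
            writer.writerow(frame.get(n, 0.0) for n in names)
```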
@freenomon2466 1 year ago
Is there a way to access the MetaHuman facial rig controls in Blueprint (the facial rig controller sliders)? I'd rather map the incoming data myself using the MetaHuman facial controllers.
@calvinwayne3017 7 months ago
Does the audio work when doing it this way? Currently the audio doesn't work for pixel streaming.
@LeeSurber 1 year ago
Excellent tutorial. I get to the part at 1:47, validate the script editor, and get "error 22 Invalid Argument". Any ideas what I'm doing wrong?
@JacksonPopiel 11 months ago
How were you able to get audio to stream from the GPT TTS to Audio2Face?
@RuolinJia 1 year ago
Hello, after clicking on "Set Up Blendshape Solve", the blue head did not follow. I checked the facsSolver to ensure it was the same as in your video. Please tell me what to do.
@shanmukhram9209 6 months ago
Me too, I'm stuck at the same place! Anyone, HELP?
@郭柱江 1 year ago
Hello, how can I make it support automatic emotions?
@nikkho625 1 year ago
Is the morph so poor at lipsync because there are no Russian phonemes, or for some other reason? I usually put things bluntly; there's no intention to offend anyone, I'm genuinely interested in the answer. Overall, I'm already on the way to buying an iPhone, but I still hope to stumble upon something that doesn't limp along too badly.
@MiyaL-x5r 1 year ago
Hello, thank you for your tutorial. I encountered a problem in A2F: after writing the script according to the tutorial, the A2F Data Conversion tab of the Audio2Face interface simply disappears. What could be the reason?
@VJSCHOOL 1 year ago
This happens if you made a mistake somewhere in the code, or if the OSC library is not installed correctly.
@ShreyansKothari-j1k 1 year ago
Hi! Firstly, great video, thank you! I implemented all the steps in the video, including the last part about the OSC disconnecting fix. Despite that, for some reason the connection keeps breaking and then comes back after a couple of seconds. Is there a way to keep the OSC connection open indefinitely, without any disconnections?
@VJSCHOOL 1 year ago
Set a variable with the OSC server. It should solve the issue.
@REALVIBESTV 8 months ago
This was not real time; it was using pre-recorded audio.
@VJSCHOOL 8 months ago
You can change the audio player in A2F to streaming and use, for example, a microphone as input.
@REALVIBESTV 8 months ago
@VJSCHOOL Could you please provide a video tutorial on this topic? Also, it would be helpful if you could pace the tutorial slower. I find that many YouTubers go through the steps quickly, which can be challenging for beginners like me.
@VJSCHOOL 8 months ago
@REALVIBESTV Search for "audio2face livelink"; there are a lot of tutorials.
@갱스터깍지 1 year ago
Good
@DigitalDesignerAI 1 year ago
Does the updated Audio2Face LiveLink plugin now support this? I can't for the life of me figure out how to configure the Blueprint to work similarly to this.
@VJSCHOOL 1 year ago
With the new update you don't need to follow this tutorial.
@DigitalDesignerAI 1 year ago
@VJSCHOOL Thanks for the clarification. I'm still new to working with the Audio2Face application and plugin. Do you have any plans to upload an updated tutorial using the new updates to the plugin? I'm personally having issues linking everything up, and a tutorial would be super helpful. Thanks for everything! Awesome tutorial!
@shishi6631 1 year ago
Hi, thanks for your tutorial! It is really amazing to have this workflow. I ran into a problem in A2F: I cannot find the Data Conversion panel, and when I tried to turn it on in the toolbar, it shows an error saying AttributeError: 'AudiotoFaceExportExtention' object has no attribute '_exporter window'. Any clue for this? Thank you!
@jiazeli7762 1 year ago
Me neither.
@VJSCHOOL 1 year ago
Hey! I think it's better to ask on the A2F forum: forums.developer.nvidia.com/c/omniverse/apps/audio2face/
@VJSCHOOL 1 year ago
Quick update: sometimes this happens if you made a mistake in the facsSolver script.
@joshhuang4855 1 year ago
I am at the "enabling facial" part, but the AnimGraph doesn't have a custom control function or a Modify Curve node. I am not sure what is wrong.
@VJSCHOOL 1 year ago
You can create it manually.
@joshhuang4855 1 year ago
@VJSCHOOL I did that, but now it's giving me a warning saying "cannot copy property (TMap -> TMap)", and the animation is not working.
@rachmadagungpambudi7820 1 year ago
What if the facial animation Live Link is included in the sequence, so the facial animation is recorded into the sequence?
@VJSCHOOL 1 year ago
You can install the Omniverse plugin for UE and export the A2F animation as a USD file. After that you can animate it with Sequencer.
@aymenselmi8318 1 year ago
Hi, thanks for this video. I just have an issue: I followed every step you did, but when I click on Localhost in the software, I get an error (failed to stat url omniverse://). How can I fix it? Thanks.
@VJSCHOOL 1 year ago
Try to install the Nucleus server: kzbin.info/www/bejne/hZ2Qk3aEd8ysfNk
@aymenselmi8318 1 year ago
@VJSCHOOL Yeah, I installed the Nucleus server and created an account; everything is fine, but for some reason the A2F Data Conversion tab doesn't exist. I tried everything. Any clue?
@VJSCHOOL 1 year ago
@aymenselmi8318 Did you try to open the pre-made scenes?
@Chatmanstreasure 1 year ago
Do you have to have an iPhone for the Boosty project?
@VJSCHOOL 1 year ago
Nope. The Boosty tutorial uses AI to generate speech and facial animation. You will need a little bit of Python knowledge to follow it.
@rachmadagungpambudi7820 1 year ago
It doesn't work; I've tried it and it still doesn't work.
@VJSCHOOL 1 year ago
Which step doesn't work?
@rachmadagungpambudi7820 1 year ago
@VJSCHOOL I don't know. I've followed the steps on Audio2Face 2022.2.0 and Unreal 5.1.1, but I can't connect Omniverse and Unreal. Is there something wrong? I feel I have followed the steps.
@VJSCHOOL 1 year ago
First, try to print the values. If the values are not printed, something is wrong with OSC: check the script in A2F or the OSC server in UE. If the values are printed, something is wrong with the AnimBP.
@rachmadagungpambudi7820 1 year ago
@VJSCHOOL I suspect it's the OSC server. What should I do?
@rachmadagungpambudi7820 1 year ago
@VJSCHOOL There is a message in the log: "LogOSC: Warning: Outer object not set. OSCServer may be collected garbage if not referenced." Why?
@berniemovlab8323 1 year ago
Is your blueprint on the Blueprint site?
@VJSCHOOL 1 year ago
Nope.
@shorewiseapps2269 1 year ago
Hi Oleg, would you be interested in helping us with an AI avatar project as a consultant?
@hwk_un1te915 1 year ago
Great video 🎉 Everything works, but the OSC disconnects itself every 10 s and takes a while to reconnect, even though I did the checker fix. I'm using UE 5.1. Could you help me?
@VJSCHOOL 1 year ago
Try to create a variable with the OSC server; it should solve this.
@heresmynovel331 1 year ago
Can it be used in UE5? 🤔
@reznik63 1 year ago
Friend, the animations are crooked and the plugin is raw. Even though it takes more time, it's better and easier to do all of this with facelink. And you don't have to bother with English at all.
@VJSCHOOL 1 year ago
The idea is that you can generate responses and a voice with AI and use it for different tasks. The animation needs to be tuned for each head, rather than taking the default values. Recording a face through facelink is a completely different use case.
@DiTo97 10 months ago
The UE avatar is lagging a bit, while the rest works fine. Any idea why? @VJSCHOOL