Tutorials: how to use the plugin

40,796 views

MetaHumanSDK · a day ago

MetaHuman SDK is an automated AI solution for generating realistic character animation. This Unreal Engine plugin lets you create and use lip sync animation generated by our cloud server.
We have prepared a detailed tutorial describing how to use our plugin:
- integrate TTS
- add audio to lip sync
- add audio to lip sync streaming
- integrate a chatbot
- combine everything into a single combo request
The tutorial was recorded in UE 5.1; the plugin also supports all earlier versions of Unreal Engine.
Try it yourself and share your impressions in the comments.
Timecodes:
00:00 Intro
00:30 Create new project
01:06 Choosing an Avatar
01:42 Text To Speech
03:56 Audio to Lip Sync
07:15 Audio to Lip Sync Streaming
12:07 ChatBot Integration
14:08 How to use combo request
16:44 Custom Rig Integration
Link to our discord: discord.com/invite/kubCAZh37D....
Link to our website: metahumansdk.io/
Get the plugin for free from Unreal Engine marketplace: www.unrealengine.com/marketpl...
Official documentation: docs.metahumansdk.io/metahuma...
#unrealengine #metahumanman #MetaHumanSDK #digitalavatar

Comments: 214
@AltVR_YouTube · a year ago
Thanks for this perfect tutorial! You should really consider making these videos publicly findable. Other versions that are paid will show up in results, but not this SDK. Also, it would be awesome if these could be uploaded in 1440p or 4K in the future for better blueprint text readability
@arielshpitzer · 11 months ago
It's updated. I think I saw a different video that looked almost the same. Amazing work!
@user-qw3cq1bg9s · a year ago
This is mind blowing!!!!!!
@honglabcokr · a year ago
Thank you so much!
@mn04147 · a year ago
Thanks for your great plugin!
@dome7415 · a year ago
awesome thx!
@TheAIAndy · a year ago
LOVE this tutorial, thank you so much! I am wondering if you would consider making a tutorial on how you got them to sit as a presenter, including face & body animation + studio + camera angles? Also... I don't know if this is out of reach, but can you get the hands to gesture based on the loudness or audio waves? Love your plugin, trying to do a bunch of cool things with it. thank you so much for these & newest tutorials!
@metahumansdk · a year ago
Hi! In this tutorial we used a regular Control Rig to add poses on the Sequencer timeline and made the body animation manually.
@TheAIAndy · 11 months ago
@@metahumansdk haha as a beginner I have no idea what that means 😂 I’ll try to find a tutorial searching some of the words u said
@metahumansdk · 10 months ago
When you add a MetaHuman to a level sequence you can see that it has a Control Rig, and you can set any pose for every part of the MetaHuman's body. You can get more information about Control Rig here: docs.unrealengine.com/5.2/en-US/control-rig-in-unreal-engine/
@user-dm1iy6nm8b · a year ago
Hi, thank you for this detailed tutorial! I'm trying to create lip sync from text input only, without using the bot, and I want to avoid the delay from the TTS function as much as possible. Is it possible to create a buffer that sends chunks of sound to the ATL while TTS is still working (like you did with the ATL stream)? (I'm kind of a beginner in this field.)
@metahumansdk · a year ago
Hi! Currently our plugin sends the full message to the TTS service, but you can split the text and send smaller parts manually.
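For anyone who wants to try the manual splitting the reply suggests, here is a minimal sketch in plain Python (outside Unreal; `chunk_text` is a hypothetical helper, not a plugin API) that splits text on sentence boundaries so each piece can be sent to the TTS node as its own request:

```python
import re

def chunk_text(text: str, max_chars: int = 200) -> list[str]:
    """Split text on sentence boundaries into pieces of roughly
    max_chars, so each piece can be sent to TTS separately.
    A single sentence longer than max_chars is kept whole."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks: list[str] = []
    current = ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

parts = chunk_text("Hello there. This is a longer sentence for the demo. Bye.",
                   max_chars=30)
```

Requesting the first chunk as soon as it is ready hides most of the TTS latency; the remaining chunks can be fetched while the first one is already playing.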
@flytothetoon · a year ago
The lip sync looks perfect! The plugin description says that it "supports different face emotions". Is it possible with MetaHuman SDK to generate emotions from audio speech, like with NVIDIA Omniverse? Can MetaHuman SDK also create facial animation with blinking eyes?
@metahumansdk · a year ago
Hi Fly to the Toon! You can enable eye blinking in the ATL settings; it works for the ATL nodes as well.
@ragegohard9603 · a year ago
👀 wow !
@lukassarralde5439 · 11 months ago
Hi. This is a great video tutorial. Could you please share how to do this setup PLUS add a TRIGGER volume to the scene? Ideally, I would like a first-person or third-person character game where, when the character enters the TRIGGER volume, the trigger starts the MetahumanSDK talking. Can you show us how to do that in the BP? Thank you!!
@metahumansdk · 11 months ago
Well, I think you can start from the audio triggers covered in the UE documentation docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/ I'll ask the team about game use cases; maybe we can create a tutorial about it.
@ffabiang · a year ago
Hi, thank you so much for this video, it is really useful. Can you share some facial idle animations for our project to play while the TTS-to-lipsync request is being processed? Or do you know where we can find some?
@metahumansdk · a year ago
Hi ffabiang, you can feed a WAV file with no sound into our SDK to generate a facial animation, then use it in your project as an idle 😉
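A silent 16-bit PCM WAV of the kind the reply describes can be produced with Python's standard library alone. A minimal sketch (the file name, duration, and sample rate are arbitrary choices, not plugin requirements):

```python
import wave

def write_silent_wav(path: str, seconds: float, sample_rate: int = 16000) -> None:
    """Write a mono 16-bit PCM WAV containing only silence.
    Feeding it to the audio-to-lipsync step should produce a
    near-neutral animation usable as an idle."""
    n_frames = int(seconds * sample_rate)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 2 bytes per sample = 16-bit
        wav.setframerate(sample_rate)
        wav.writeframes(b"\x00\x00" * n_frames)  # all-zero samples = silence

write_silent_wav("idle_60s.wav", seconds=60.0)
```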
@ffabiang · a year ago
@@metahumansdk Hi, when I import an empty audio file (1 min long) and use the "Create Lipsync Animation" option I get a facial animation that is almost perfect, but the MetaHuman's mouth keeps opening and moving as if he is about to say something. Is there a parameter that can fix that?
@uzaker6577 · 11 months ago
Nice tutorial, very interesting and useful. I'm wondering, is there any solution for the ATL speed? Mine is slow; it takes nearly 10 seconds to generate an animation.
@metahumansdk · 11 months ago
Hi! The delay depends heavily on the network connection and the length of the sound. Can you share more details in our Discord community about the ATL/Combo nodes and the sound files you are using in your project? We will try to help.
@jumpieva · a year ago
The thing I have a problem with is that the facial animations are getting more realistic, but the stilted, non-human-sounding audio doesn't reconcile well. Will this be fine-tuned enough for cinematics/close-up dialogue?
@metahumansdk · a year ago
Hi! You can choose different TTS options such as Google, Azure and others.
@realskylgh · 11 months ago
Great, does the combo do the ATL streaming things as well?
@metahumansdk · 11 months ago
Hi! We are working on it. If all goes well we will add it in one of the next releases for 5.2.
@AICineVerseStudios · 9 months ago
Hi there, the plugin is great and it really works well. However, after 10 to 15 facial-animation generations I get an error message that I ran out of tokens. Also, from your website it's not clear whether this is a paid service. Even for testing, how many tokens does one have, and what should one do when the tokens run out? Can this plugin be used in a production-grade application? I am just doing a POC for now, but I want to be sure about your offering.
@metahumansdk · 9 months ago
Hi! At the moment there are no limits. Your token was probably generated before we introduced personal accounts. We made a few announcements in our Discord that tokens not linked to a personal account at space.metahumansdk.io/ no longer work. Here is the video about attaching a token or generating a new one in the personal account: kzbin.info/www/bejne/aajQnpR7Yp2Upac&lc=UgxrVCl4HvIS5P9loWR4AaABAg&ab If it doesn't help, please tell us and we will try to help with your issue.
@user-zp6jb5dw1l · a year ago
Excuse me, is the facial expression in your video generated by Metahuman SDK automatically while speaking? Or was it processed by other software? When using ChatGPT for real-time voice-driven input, can the model achieve the same level of facial expressions as yours? Thank you.
@metahumansdk · a year ago
Hi! You can choose different emotions at the moment of lip sync generation from audio (speech to animation stage)
@borrowedtruths6955 · 11 months ago
I must be missing something, I have to delete the Face_ControlBoard_CtrlRig in the sequencer after adding the Lipsync Animation, or the Metahuman character will not animate. I have no control over the face rig. Is there a way to have both?
@metahumansdk · 10 months ago
Hi! In the Sequencer the Control Rig overrides animation, so you need to turn the Control Rig off, or delete it, if you want to use a prepared animation on the avatar's face or body.
@ahmedismail772 · a year ago
It's so useful and informative, thank you very much. I have a small question: can we add other languages to the list? I didn't find the EChat language enum.
@metahumansdk · a year ago
Hi! You can use most languages from Azure or Google TTS via their voice IDs. An example using the demo scenes included in the MetahumanSDK plugin can be found here (updated): kzbin.info/www/bejne/mXSVfqWJirGabNU
@ahmedismail772 · a year ago
@@metahumansdk the link leads to a private video
@metahumansdk · a year ago
@Ahmed Ismail my bad, replaced it with the correct link kzbin.info/www/bejne/mXSVfqWJirGabNU
@user-pf2se2df8v · a year ago
Is it possible to display the finished digital human package, including its lip sync animation and perhaps GPT integration, on a mobile device? Would the rendering be client- or server-side?
@metahumansdk · a year ago
Hi! It depends on your solution. You can stream and render on a server, or you can make an app that uses the resources of the client's device.
@sumitranjan7005 · a year ago
This is a great plugin with detailed functionality. Is it also possible to integrate our own custom chatbot API? If yes, please share a video.
@metahumansdk · a year ago
Hi! You can use any solution: just connect your node with text output to the TTS node and then use the regular pipeline with ATL. As an example, you can use this tutorial where we use the OpenAI plugin for the chatbot kzbin.info/www/bejne/oYuVl4eKrNppeKc
@rajeshvaghela2772 · 10 months ago
Great tutorial. I got perfect lip sync, but there is one issue: the animation doesn't stop after the sound completes. Can you help me out?
@metahumansdk · 10 months ago
Hi! Please share your blueprints on our Discord server discord.gg/MJmAaqtdN8 or to the mail support@metahumansdk. You can also check out the demo scenes included in the UE Content Browser under All > Engine > Plugins > MetahumanSDK Content > Demo
@danD315D · a year ago
Is it possible for audio-to-lip-sync to work on other 3D character models, rather than MetaHuman ones?
@metahumansdk · a year ago
Hi! Sure it is! In the plugin files you can find a face example that is a custom mesh. Use an ARKit- or FACS-rigged model to use animations from the MetahumanSDK.
@NeoxEntertainment · 8 months ago
Hey, great tutorial, but I can't find mh_dhs_mapping in the PoseAsset of the Make ATL Mappings Info node at 8:41, and I guess that's why the lip sync doesn't work on my end. Does anyone know where I can find it?
@metahumansdk · 8 months ago
Hi! Please open the Content Browser settings and enable Engine and Plugins content, as on the screenshot cdn.discordapp.com/attachments/1148305785080778854/1148984020798021772/image.png?ex=65425cc1&is=652fe7c1&hm=e75cc52cd3ece4f43e143a87745fd25fd2b78032fa09c3b2d931bf50e68a0b45&
@devpatel8276 · a year ago
Thanks a lot for the tutorial! I have a problem: the combo request has a longer delay. How can we do the audio-to-lip-sync streaming (the chunk-splitting mechanism) using a combo request?
@metahumansdk · a year ago
Hi! To use the generated audio in parts, first call the Text To Speech function and then call the ATL Stream function.
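The pattern in the reply (one TTS call, then streaming the result in parts) hinges on splitting the returned audio buffer. Assuming the TTS result is available as raw 16-bit PCM bytes, a chunking helper might look like the sketch below (hypothetical names, not the plugin's API):

```python
def split_pcm(pcm: bytes, sample_rate: int = 16000, sample_width: int = 2,
              chunk_seconds: float = 1.0) -> list[bytes]:
    """Split raw PCM audio into fixed-duration chunks so each chunk
    can be handed to a streaming ATL call as soon as it is available."""
    step = int(sample_rate * chunk_seconds) * sample_width
    return [pcm[i:i + step] for i in range(0, len(pcm), step)]

# 3 seconds of silent 16-bit mono PCM at 16 kHz, standing in for a TTS result
audio = b"\x00\x00" * 16000 * 3
chunks = split_pcm(audio, chunk_seconds=1.0)
```

The last chunk may be shorter than the rest; a streaming consumer should accept that rather than pad it.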
@devpatel8276 · a year ago
@@metahumansdk And that can't be done by combo, right?
@metahumansdk · a year ago
You can add the same pipeline but connect it to another head, so you can use several MetaHumans at the same time.
@hardikadoshi3568 · 4 months ago
I wonder if there is anything similar for the Unity platform? It would be great if support were available, as the avatars look great.
@metahumansdk · 4 months ago
Hi! At the moment we only work with Unreal Engine. We may consider other platforms in the future, but there are no specifics yet.
@arianakis3784 · 5 months ago
I say go to the moon for a walk, and as soon as I spoke, I called to return, hahhahaaaa
@SaadSohail-ug9fl · 2 months ago
Really good tutorial! Can you also tell me how to achieve body and head motion with facial expressions while the MetaHuman is talking? Just like the talking MetaHumans in your video.
@metahumansdk · a month ago
Hi! You can generate animation with emotions from our plugin, or use additive blending to add your own emotions directly to selected blend shapes.
@realskylgh · 11 months ago
I have a question. When using ATL Stream, the moment the sound wave comes in, the digital human pauses for 3 or 4 seconds; it seems to be preparing the animation. How can I avoid this strange pause?
@metahumansdk · 11 months ago
Hi! We are working on the delays, but in the current version 3-4 seconds for the first chunk is a normal situation.
@guilloisvincent2286 · a year ago
Would it be possible to put a TTS (like MaryTTS) or an LLM (like Llama) in the C++ code, to avoid network calls and keep it free?
@metahumansdk · a year ago
You can find detailed instructions on how to use them on the official websites of MaryTTS and Llama. It would be great if you could share your final project with us. As for avoiding the internet: currently our SDK works only with an internet connection, but you can generate a pool of facial animations for your project and then use those animations offline.
@dyter07 · a year ago
Well, that "2000 years later" joke was good. I have been waiting three hours now for the MetaHuman to load, LOL
@v-risetech1451 · a year ago
Hi, when I try to do the same things from the last tutorial, I can't see mh_ds_mapping in my project. Do you know how to solve this?
@metahumansdk · a year ago
Hi V-Risetech! Please select Show Engine Content in the Content Browser settings; that should help. We also sent a screenshot in reply to the same request in our Discord: discord.com/channels/1010548957258186792/1067744026469601280/1068066997675495504
@corvetteee1 · 9 months ago
Quick question: how can I add an idle animation to the body? When I've tried it so far, the head comes off the model. Thanks for any help!
@metahumansdk · 9 months ago
Hi! You need to add a Slot 'DefaultSlot' node between the ARKit input and the Blend Per Bone node, and blend through the Root bone. Here is one discussion about it on our Discord server discord.com/channels/1010548957258186792/1155594088020705410/1155844761056460800 We also showed another, more involved way with State Machines kzbin.info/www/bejne/pYrCkIKQdsZjf5Y&lc=UgzNwmwaQIB3hOhKE7F4AaABAg
@juanmacode · a year ago
Hi, I have a project and I'm trying to do lip sync in real time, but I get this error; does anyone know why? "Can't prepare ATL streaming request with provided sound wave!"
@metahumansdk · a year ago
Hi! Could you please specify how you are generating the sound wave and provide logs if possible?
@SKDyiyi · a year ago
Hello, your plugin is very useful. I am using a self-designed model with ARKit. However, I have encountered a problem: I can generate facial movements smoothly, but I lack neck movements. Is there a solution for this? My model's head is not split from the body.
@metahumansdk · a year ago
Hi! If your avatar is not a separated model, you can blend an animation for the body and neck with our facial animation.
@SKDyiyi · a year ago
@@metahumansdk Yes, I do that now. Meaning that if I don't separate my head from my body, I won't be able to generate neck motion automatically through the plugin?
@metahumansdk · a year ago
You can check Neck Movement in the ATL node to add it to the animation in the MetahumanSDK plugin.
@NiksCro96 · 4 months ago
Hi, is there a way to do audio input as well as text input? Also, is there a way for the answer to be written as text in a widget blueprint?
@metahumansdk · 4 months ago
Hi! You can send a 16-bit PCM wave to the ATL/Combo nodes on the Lite, Standard and Pro tariffs; if you are on the Chatbot tariff plan, you can use the ATL Stream or Combo Stream nodes. I also recommend the Talk Component because it makes working with the plugin much easier. We have a tutorial about the Talk Component here kzbin.info/www/bejne/oKPTcn16fs12fKc
@ai_and_chill · a year ago
How do we get our animations to look as good as the one in this video of the woman in front of the blue background? The generated animations are good, but not as expressive as hers. It looks like you're still using the lip sync animation code, but you're having her eyes stay focused on the viewer. How are you doing that?
@metahumansdk · a year ago
We use a post-process blueprint for the eye focus locations. You can find an example here: discord.com/channels/1010548957258186792/1089932778981818428/1089940889192898681 For the animation we use the EPositive emotion, so it looks more expressive in our opinion.
@charleneteets8227 · 11 months ago
When I try to add an idle animation, the head breaks off to respond and won't idle with the body! Not sure how to proceed. It would be great if you made a video on adding an idle animation next.
@metahumansdk · 11 months ago
Hi! You can try this video to fix the head kzbin.info/www/bejne/pYrCkIKQdsZjf5Y&lc=Ugz9BC
@anveegsinha4120 · 4 months ago
2:12 Hi, I don't see "Create Speech from Text". I have added the API key as well.
@metahumansdk · 4 months ago
Hi! Did you try it on a WAV file?
@user-zp6jb5dw1l · a year ago
How to synchronize facial expressions with mouth movements? Could you provide a tutorial on this? Thank you
@metahumansdk · a year ago
Hi! You can select facial expressions when generating the lip sync from audio (the speech-to-animation stage), and they will be synchronized automatically.
@user-zp6jb5dw1l · a year ago
Hi! Is the 'Explicit Emotion' option selected in the 'Create MetaHumanSDKATLInput' tab?
@user-zp6jb5dw1l · a year ago
I selected 'Ehappy' and it works, but selecting 'Eangry' doesn't have any effect. Do you have any solutions or tutorials for this issue? Thank you!
@metahumansdk · a year ago
Hi! Can you please clarify: is the avatar not displaying the desired emotion, or does the avatar's expression not match the chosen emotion?
@enriquemontero74 · a year ago
Hello, one question: is this compatible with the ElevenLabs API, or with voice notes? Thanks
@metahumansdk · a year ago
Hi! If they produce 16-bit WAV files, you can easily use them with our MetahumanSDK plugin.
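If you are unsure whether a third-party TTS outputs the 16-bit WAV the reply mentions, the header can be inspected with Python's standard `wave` module. A small sketch (`is_16bit_pcm_wav` is an illustrative helper, not part of the SDK):

```python
import wave

def is_16bit_pcm_wav(path: str) -> bool:
    """Return True if the file is an uncompressed WAV with 16-bit
    samples, i.e. the format the reply says the plugin accepts."""
    try:
        with wave.open(path, "rb") as w:
            return w.getsampwidth() == 2 and w.getcomptype() == "NONE"
    except (wave.Error, OSError):
        return False

# Demo: write a tiny 16-bit WAV and check it.
with wave.open("check_me.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 100)
```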
@CanCan-gy5hh · a year ago
Hi, I want the MetaHuman to voice the text I entered in the field below, but only the sound works; there is no face animation. Can you help me solve this?
@metahumansdk · a year ago
Hi! You can try our demo scenes, which are included in the plugin content, and compare the level blueprints. You can also join our Discord community and share more details about your issue: discord.gg/MJmAaqtdN8
@kreamonz · a month ago
Hello! I generated a face animation and an audio file (the time in the video is 5:08). When I open it, the file is only 125 frames, although the audio lasts much longer. In the Sequencer I add the audio and the generated animation, but the animation is much shorter, and when I stretch the track the animation repeats from the beginning. Please tell me how to adjust the number of frames per second.
@kreamonz · a month ago
I mean, how do I edit the number of sampled keys/frames?
@unrealvizzee · a year ago
Hi, I have a non-MetaHuman character with ARKit expressions (from Daz Studio). How can I use this plugin with my character?
@metahumansdk · a year ago
You need to use your avatar's skeleton in the ATL node together with the ARKit mapping mode. You can find example level blueprints in the plugin files included with every plugin version; most of them use a custom head.
@blommer26 · 7 months ago
Hi, great tutorial. At minute 05:07, when I tried to create a lipsync animation from my audio, UE 5.1.1 created the file (with the .uasset extension) but it did not show up in my assets. Any idea?
@metahumansdk · 7 months ago
Hi! Can you please share more details? It would be great if you could attach the log file of your project (the directory looks like ProjectName\Saved\Logs\ProjectName.log) and send it to us for analysis on our Discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io
@Ali_k11 · 5 months ago
I have the same problem
@metahumansdk · 5 months ago
Hi @Ali_k11, can you give some details about your issue?
@Relentless_Games · 2 months ago
I get the error "fill api token via project settings". This is my first time using this SDK; how can I fix it?
@metahumansdk · 2 months ago
Please contact us by e-mail at support@metahumansdk.io and we will help you with the token.
@asdfasdfsd · a year ago
Why doesn't it show the 'Plugins' and 'Engine' folders like yours after I create a new blank project? If I need to add them manually, how and where do I get them?
@metahumansdk · a year ago
You need to enable them in the settings of the Content Browser window.
@jaykunwar3312 · 10 months ago
Can we make a build (exe) using MetahumanSDK in which we can upload audio and the MetaHuman starts speaking with a body idle animation? Please help.
@metahumansdk · 10 months ago
Hi! Sure; we released a demo project with all those functions yesterday and shared it in our Discord: discord.com/channels/1010548957258186792/1068067265506967553/1143934803197034637
@skyknightb · a year ago
Looks like the server is off or out of reach for some reason: the API URL shows different errors when trying to access it, whether generating the audio file or using an already generated one to create the lipsync animation. Or is the API URL wrong?
@metahumansdk · a year ago
Hi Skyknight! Can you tell our support a little more about the errors at support@metahumansdk.io?
@skyknightb · a year ago
@@metahumansdk I'm already getting support on your Discord, thanks :D
@bruninhohenrri · 3 months ago
Hello, how can I use the ATL Stream animation with an Animation Blueprint? MetaHumans have a post-processing AnimBP, so if I run the raw animation it basically messes up the body animations.
@metahumansdk · 2 months ago
Hi! Please try starting with the Talk Component; this is the easiest way to use the streaming options. Here is a tutorial about it kzbin.info/www/bejne/oKPTcn16fs12fKc If you still have issues, please visit our Discord discord.gg/MJmAaqtdN8
@Bruh-we9mv · 4 months ago
Nice tutorial! However, if I input a somewhat large text, it stops midway. What could be the issue? I've tested things, and it seems the "TTS Text to Speech" node has a time limit on the sound. Can I somehow remove that?
@Bruh-we9mv · 4 months ago
@@domagojmajetic9820 Sadly no; if I find anything I will write here
@metahumansdk · 4 months ago
At the moment the limit on the free tariff is 5 seconds per generated animation. You can use it for two days for free, but the limit is 5 seconds of generated animation.
@gavrielcohen7606 · 3 months ago
@@metahumansdk Hi, great tutorial. I was wondering, is there a paid version where we can exceed the 5-second limit?
@metahumansdk · 3 months ago
@gavrielcohen7606 Hi! Sure! At the moment registration on our website is temporarily unavailable, so please let us know if you need one at support@metahumansdk.io 😉
@rafaeltavares6162 · a year ago
Hello, I followed all the steps, but my MetaHuman has a problem with voice playback: when I enter the game my character starts talking, and after a few seconds the audio starts again, as if there were two audio tracks one on top of the other. I don't know if this has happened to anyone else. Can you give me some advice to solve this problem?
@metahumansdk · a year ago
Hi! Is it possible to share the blueprint on our Discord server? You can also try to use a state machine and synchronize the face animation with the audio file as shown in this video: kzbin.info/www/bejne/pYrCkIKQdsZjf5Y
@skeras1171 · a year ago
Hi, when I try to choose mh_dhs_mapping_anim_poseasset in the Struct ATLMappingsInfo, I can't see this pose asset. How can I create or find this asset? Can you help me with that? Thanks in advance, and keep up the good work. Best regards.
@metahumansdk · a year ago
Hi @skeras! You need to enable showing Engine Content and Plugins Content in the Content Browser.
@skeras1171 · a year ago
@@metahumansdk Done, thanks.
@luchobo7455 · a year ago
Hi, I really need your help: at 6:29 I drag and drop my BP_metahuman, but it is not showing up in the blueprint. I don't know why.
@metahumansdk · a year ago
Hi! You need to use the MetaHuman from the Outliner of your scene, not directly from the Content Browser.
@syedhannaan2974 · 9 days ago
I am trying to create a virtual voice assistant that is integrated with ChatGPT and talks to me with GPT-based responses. I have created the voice assistant and it works perfectly, generating voice and text output. Could you please tell me how to use this response output and convert it to lip-synced voice and animation on MetaHumans? I want to send the text/voice outputs generated by my Python code and use them to drive the lip sync. What are the communication methods, or is there a tutorial for this?
@metahumansdk · 7 days ago
You can use Talk Component > Talk Text for your task; you only need to provide the text to generate the voice and animation. kzbin.info/www/bejne/oKPTcn16fs12fKc
@borrowedtruths6955 · 11 months ago
When I add the voice animation to the face, the head detaches and the audio begins immediately. I have a walk cycle from Mixamo in the Sequencer and would like it to start at a certain point on the timeline. Can you help with these two issues? Thank you.
@metahumansdk · 11 months ago
Hi! We recommend this tutorial kzbin.info/www/bejne/pYrCkIKQdsZjf5Y Please be careful at the 3:28 timestamp; many people skip this moment and the fix doesn't work for them 😉 If you need more advice, please contact us on Discord discord.gg/MJmAaqtdN8
@borrowedtruths6955 · 11 months ago
@@metahumansdk Thanks for the reply. I do have another question, though: how do I add facial animations without a Live Link interface, i.e. a cell phone or head camera? Unless I'm mistaken, I have to delete the face widget to add the speaking animation to the sequencer. In either case, I appreciate the help.
@metahumansdk · 11 months ago
@borrowedtruths6955, our plugin generates facial animation from the sound (16-bit PCM WAV or OGG), so you don't need any mocap device: just generate the animation and add it to your character, or use blueprints to do it automatically. We also showed this in our documentation docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/v1.6.0#in-editor-usage-1
@borrowedtruths6955 · 11 months ago
@@metahumansdk Thanks, I appreciate your time.
@ayrtonnasee3284 · 5 months ago
I have the same problem
@phantomebo6537 · 7 months ago
I generated the LipSync animation just like at 19:00, and the animation preview seems fine, but when I drag and drop it onto the MetaHuman face the animation doesn't work. Can someone tell me what I am missing here?
@metahumansdk · 7 months ago
Hi! Please make sure that you selected the animation mode Animation Asset and that your animation was generated for the Face Archetype skeleton with the MetaHuman mapping mode. You can find more details in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/audio-to-lipsync You can also ask for help in our Discord discord.gg/MJmAaqtdN8
@boyce-wei · a year ago
Hello, why is it that when I follow your steps, at 12:03 the sound ends but the mouth keeps moving and doesn't stop?
@metahumansdk · a year ago
Hi! Could you please clarify if you are experiencing any performance issues?
@damncpp5518 · a month ago
I'm on UE 5.3.2 and the Play Animation node is not found. I only get "Play Animation with Finished Event" and "Play Animation Time Range with Finished Event"; they don't fit with the Get Face node and the MetahumanSDK combo output animation.
@metahumansdk · a month ago
Hi! If I understand it right, you have a delay between the start of the animation and the sound. You can try the Talk Component, which is much easier to use and includes prepared blueprints for all runtime requests kzbin.info/www/bejne/oKPTcn16fs12fKc If you need more advice, please visit our Discord discord.com/invite/kubCAZh37D or send an e-mail to support@metahumansdk.io
@leion44 · a year ago
When will it be available for UE 5.2?
@metahumansdk · a year ago
We plan to release the MetahumanSDK plugin for Unreal Engine 5.2 this month. Our release candidate for UE 5.2 is available from this link drive.google.com/uc?export=download&id=1dR30LXOwS1eEuUQ9LdQk9441zBTODzCL You can try it right now 😉
@mwa8385 · 15 days ago
Can we have step-by-step screenshots of it, please? It's very hard to follow the steps.
@metahumansdk · 9 days ago
Please visit our Discord server discord.com/invite/kubCAZh37D or ask for advice at support@metahumansdk.io
@qinjason1199 · a year ago
A wave that the editor can play gives an error after being used as ATL input: LogMetahumanSDKAPIManager: Error: ATL request error: {"error":{"status":408,"source":"","title":"Audio processing failed","detail":"Audio processing failed"}} Where should I check?
@metahumansdk · a year ago
Hi, Qin Jason! It looks like you are trying to use TTS and ATL in the same blueprint. This is a known issue and we are working on it. Currently you can try the combo node, or generate the animation manually in the project. Feel free to share more details on our Discord server discord.com/invite/MJmAaqtdN8
@qinjason1199 · a year ago
The TTS is accessed from another cloud service, but it is indeed in the same blueprint. Would splitting it into multiple blueprints avoid this problem?
@theforcexyz · 9 months ago
Hi, I'm having a problem at 2:32: when I generate my text-to-speech it does not appear in my folders :/
@metahumansdk · 9 months ago
Hi! Can you please check that your API token is correct in the project settings? If your API token is correct, please send us your log file on Discord discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io
@krishnakukade · 11 months ago
I'm a beginner in Unreal Engine and don't know how to render the animation video. I tried multiple ways, but nothing seems to work. Can anyone tell me how to do this, or point me to some resources please?
@metahumansdk · 11 months ago
Hi! You can use the official documentation from the UE developers docs.unrealengine.com/5.2/en-US/rendering-out-cinematic-movies-in-unreal-engine/
@funkyjeans8667 · 4 months ago
It only seems able to generate a 5-second lipsync animation. Am I doing something wrong, or is longer animation a paid option?
@metahumansdk · 4 months ago
If you use the trial tariff plan, you can generate only 5 seconds of ATL per animation.
@Matagirl001 · 1 year ago
I can't find the ceil
@user-or1ky6zh2p · 1 year ago
Hi, I want to add some other facial movements when talking, like blinking etc. How can I do that?
@metahumansdk · 1 year ago
Hi! You can blend different facial animations in an animation blueprint. Also, at the Speech To Animation stage you can choose to generate eye and neck animations.
@user-or1ky6zh2p · 1 year ago
@metahumansdk Hello, I want to read a WAV audio file from a certain path on the local computer while the game is running, and then use the plugin to drive the MetaHuman to play the audio with synchronized lip sync. I found a blueprint API, Load Sound from File. Can it read a file from a local path? Does the File Name in this API refer to the name of the file being read? Where is the path of the file taken from? Can you set the path of the file you want to read?
@metahumansdk · 1 year ago
Hi! Yes, this function can read from a local file path. In this parameter you must specify the path to your audio file.
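Before pointing Load Sound from File at a runtime path, it can save debugging time to validate the path outside the engine first. A minimal Python sketch; the RIFF/WAVE header check is our own sanity test, not part of the SDK:

```python
import os

def is_wav_file(path):
    """Cheap sanity check: the file exists and starts with a RIFF/WAVE header."""
    if not os.path.isfile(path):
        return False
    with open(path, "rb") as f:
        header = f.read(12)
    # A canonical WAV file begins with b"RIFF" + 4-byte size + b"WAVE".
    return len(header) == 12 and header[:4] == b"RIFF" and header[8:12] == b"WAVE"
```

If this returns False for the absolute path you plan to pass to the node, the problem is the file or the path, not the plugin.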
@user-or1ky6zh2p · 1 year ago
Hello, I would like to ask a question: the animation generated from text only has the mouth animation. How can I combine this generated mouth animation with my other facial animations to make the expression more vivid? I want to fuse them at run time, and what I don't understand is how to do this while the program is running.
@metahumansdk · 1 year ago
You can try to use blend for the animations that you want to combine. You can get more details about blend modes in the official Unreal documentation docs.unrealengine.com/5.2/en-US/animation-blueprint-blend-nodes-in-unreal-engine/
@Ali_k11 · 5 months ago
When I try the SDK on UE 5.3 I get a "no TTS permission" error. What's the matter?
@metahumansdk · 5 months ago
Hi! TTS is available on the Chatbot tariff plan only. You can find more details about tariffs in your personal account at space.metahumansdk.io/#/workspace or in our Discord in this message discord.com/channels/1010548957258186792/1068067265506967553/1176956610422243458
@boyce-wei · 1 year ago
At 10:11 in the video, when I scroll over it shows that the type of "CurrentChunk" is not compatible with Index. I don't know what's wrong.
@boyce-wei · 1 year ago
10:10
@boyce-wei · 1 year ago
Hello, can you help me with this problem?
@ffabiang · 1 year ago
Hi, make sure CurrentChunk is of type integer, as well as Index.
@boyce-wei · 1 year ago
@ffabiang thank you
@umernaveed6936 · 1 year ago
Hi, guys. I have been trying to figure this out for a week now. The problem is: how can we attach dynamic facial expressions and body gestures to ChatGPT responses? E.g. if the returned text is happy, the character should make a happy face, and if it is angry, an angry face. Can someone help me with this?
@metahumansdk · 1 year ago
Hi! Emotions are selected in a special drop-down menu when you create audio tracks from text. Please try it.
@umernaveed6936 · 1 year ago
@metahumansdk Can you elaborate a little on this, as I am still stuck?
@umernaveed6936 · 1 year ago
@metahumansdk Hi, can you guide me on how I can create the emotions? I am still stuck on the facial expression part and the explicit emotions when setting up the MetaHuman character.
@metahumansdk · 11 months ago
Hi! Sorry for the late answer. We shared a blueprint that can help focus the eyes on something here: discord.com/channels/1010548957258186792/1131528670247407626/1131993457133625354
@AlejandroRamirez-ep3wo · 1 year ago
Hi, does this support Spanish or Italian?
@metahumansdk · 1 year ago
Hi Alejandro Ramírez! You can use any language you want, because the animation is created from sound.
@aihumans.official · 1 year ago
Where can I connect my Dialogflow chatbot? API key?
@metahumansdk · 1 year ago
Hi! At the moment our plugin uses ChatGPT, but you can try to connect any chat bot yourself using our integration as an example. It would be great if you share the result with us.
@kirkr · 1 year ago
Is this still working? It says "unavailable" on the Unreal Marketplace.
@metahumansdk · 1 year ago
Hi! That was marketplace server maintenance; the plugin is now available to download.
@abhishekakodiya2206 · 1 year ago
Not working for me — the plugin doesn't generate any lip sync animation.
@metahumansdk · 1 year ago
Please send us more details on our Discord server or by mail to support@metahumansdk.io and we will try to help with your issue.
@mistert2962 · 1 year ago
Do not use audio files that are too long. 5 minutes of audio will make the SDK fail, but 3 minutes will work. So the solution is: split your audio into 3-minute parts.
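The splitting itself can be scripted outside the engine. A hedged sketch using Python's standard `wave` module, assuming uncompressed PCM WAV input; the 3-minute chunk length follows the comment above, and the function name is ours:

```python
import wave

def split_wav(path, chunk_seconds=180, prefix="part"):
    """Split a PCM WAV file into chunks of at most chunk_seconds each.

    Writes part_000.wav, part_001.wav, ... and returns the list of names.
    """
    with wave.open(path, "rb") as src:
        params = src.getparams()  # channels, sample width, rate, etc.
        frames_per_chunk = int(src.getframerate() * chunk_seconds)
        names = []
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            name = f"{prefix}_{index:03d}.wav"
            with wave.open(name, "wb") as dst:
                dst.setparams(params)  # nframes is rewritten on close
                dst.writeframes(frames)
            names.append(name)
            index += 1
    return names
```

Each chunk can then be sent as a separate ATL request and the resulting animations played back to back.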
@immortal3164 · 8 months ago
I want the MetaHuman to start talking only when I'm close to him. How can I achieve that?
@metahumansdk · 8 months ago
Hi! You can use trigger events that start an action when the trigger is activated. You can find more information about them in the Unreal documentation docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/
@benshen9600 · 1 year ago
When will the combo request support Chinese?
@metahumansdk · 1 year ago
Hi! Currently we use Google Assistant only for answers in the combo requests, so it depends on the languages Google supports developers.google.com/assistant/sdk/reference/rpc/languages I can't promise that we will add a new language soon, but we have plans to make our solution friendlier to all countries.
@dreamyprod591 · 2 months ago
Is there any way to integrate this on a website?
@metahumansdk · 2 months ago
Sure, you can try to make a pixel streaming project, for example.
@rachmadagungpambudi7820 · 11 months ago
How do I add flashing mocap?
@metahumansdk · 11 months ago
We don't use mocap; our plugin generates the animation from the sound.
@rachmadagungpambudi7820 · 11 months ago
I like your plugin 🫡🫡🫡👍 Thank you
@anveegsinha4120 · 4 months ago
I am getting error 401: no ATL permission.
@metahumansdk · 4 months ago
Hi! It depends on the tariff plan. If you are using the trial version, you are limited to a maximum of 5 seconds per animation. If you are on the Chatbot tariff plan, you need to use ATL Stream rather than regular ATL. Regular ATL is available on the Lite, Standard and Pro tariffs.
@BluethunderMUSIC · 3 months ago
@metahumansdk That's not really true, because I am getting the SAME error and I tried with sounds ranging from 0.5 seconds to 8 seconds. How do we fix this? It's impossible to do anything now.
@metahumansdk · 3 months ago
Can you please send us your logs on our Discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io? We will try to help you with this issue, but we need more details about your case.
@BAYqg · 1 year ago
Unavailable to buy in Kyrgyzstan =(
@metahumansdk · 1 year ago
Hi! Please check that: 1. Other plugins are available. 2. If you are using our site, the EGS launcher is running. 3. The EGS launcher is updated.
@sumitranjan7005 · 1 year ago
Can we get a sample code git repo?
@metahumansdk · 1 year ago
Hi! You can find the plugin files in the engine folder \Engine\Plugins\Marketplace\DigitalHumanAnimation
@sumitranjan7005 · 1 year ago
@metahumansdk Sample code of a project to get started, not the plugin.
@metahumansdk · 1 year ago
We also ship demo level blueprints covering several use cases in every plugin version, so you can use them as a project. You can find them in the demo folder of the plugin.
@user-nn7mg3bp4u · 1 year ago
My head is detached now
@metahumansdk · 1 year ago
Hi Популярно в България! You need to use the Blend Per Bone node in the Face AnimBP to glue the head to the body when both parts are animated.
@Enver7able · 1 year ago
@metahumansdk How to do this?
@Fedexmaster91 · 1 year ago
@metahumansdk Great plugin, everything works fine for me, but I'm also having this issue: when playing the generated face animation, the head detaches from the body.
@Fedexmaster91 · 1 year ago
@Enver7able I found this video on their Discord channel: kzbin.info/www/bejne/pYrCkIKQdsZjf5Y&ab_channel=MetaHumanSDK
@user-nn7mg3bp4u · 1 year ago
@metahumansdk thanks!
@commanderskullySHepherdson · 10 months ago
I was pulling my hair out wondering why I couldn't get the plugin to work, then realised I hadn't generated a token! 🙃
@metahumansdk · 10 months ago
Hi! Thank you for the feedback! A new version of the MetahumanSDK plugin is in moderation now, and it has more useful messages about the token. We hope these changes will make the plugin's behavior more predictable.
@mahdibazei7020 · 1 month ago
Can I use this on Android?
@metahumansdk · 1 month ago
Hi! We don't support mobile platforms, but you can try to rebuild our plugin with kubazip for Android. It might work, but I can't guarantee it.
@mohdafiqtajulnizam9421 · 9 months ago
Please update this to 5.3... please!?
@metahumansdk · 9 months ago
Hi! Work in progress 👨‍🔧
@EnricoGolfettoMasella · 1 year ago
The girls need some love, dude. They look so sad and depressed :P :P...
@inteligenciafutura · 2 months ago
You have to pay to use it; it doesn't work.
@metahumansdk · 1 month ago
Hi! Can you please share more details about your issue? Perhaps this tutorial can help you kzbin.info/www/bejne/mXSVfqWJirGabNU
@inteligenciafutura · 2 months ago
Spanish?
@metahumansdk · 1 month ago
MetahumanSDK is language independent. We generate the animation from the sound, not from visemes.