Currently the plugin is in review, I will publish it on Fab soon hopefully ☺
@smash9778 · 2 days ago
There's no zen folder for me, what to do now?
@xlipdev · 2 days ago
@@smash9778 which engine version do you use?
@smash9778 · 2 days ago
@xlipdev 5.1.1
@tvgestaltung · 5 days ago
Hi, your work is very impressive. I'm having trouble importing the file Calibration.mhaical into Unreal Engine 5.4. Error: unknown extension. Do you have a solution for this issue? Thank you very much!
@xlipdev · 5 days ago
Thanks ^^ You need to enable the Epic Games MetaHuman plugin to import it.
@tvgestaltung · 5 days ago
@@xlipdev Thank you very much for the super quick and correct answer. I was apparently too tired yesterday to realize it, as I thought I had already turned it on.
@satyaa999 · 6 days ago
Unable to drag and drop the mhaical calibration file into Unreal Engine, it gives an unknown file extension error. Does anyone have a solution for this?
@xlipdev · 6 days ago
@@satyaa999 you must enable the MetaHuman plugin
@Navhkrin · 6 days ago
Thanks a lot boss, we'd be in bad shape without you :(
@xlipdev · 6 days ago
You're welcome, I'm glad if it helped, but the most sensible option is to use a "layered control rig" instead of applying what I showed
@Ateruber · 8 days ago
Amazing! Does this only work on RTX video cards or can it also work on GTX?
@xlipdev · 8 days ago
@@Ateruber I believe there is no GPU restriction for MetaHuman performances overall, it should work
@cettura · 10 days ago
What will be the price?
@xlipdev · 10 days ago
I'm not certain yet 🤔 but for the initial release, I'll likely set the price between $15 and $25 (personal usage)
@cettura · 10 days ago
@xlipdev nice, can't wait for the release and the full tutorial 🫶🔥
@mukeshbabu7092 · 11 days ago
I'm having this problem, please assist me in resolving it. Error: OpenCV(4.10.0) ... error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'. This means that the cv2.cvtColor() function is being called on an empty image (_src.empty() is true), which indicates that OpenCV couldn't read the image properly.
@xlipdev · 11 days ago
This seems like an easy one, you probably didn't set a correct image path for the script. Please double-check "input_image_path" (if you are trying to display) or "input_folder" (if you are trying to convert all images in that folder) and make sure you have '.png', '.jpg', '.jpeg' files inside that folder
@mukeshbabu7092 · 11 days ago
Hello, I tried using a different computer twice as well, but the problem persisted.
@xlipdev · 11 days ago
Could you create an issue in the repo with the scripts you are trying to run and the details? Let me check and help
@xlipdev · 11 days ago
Btw can you try the path like this, e.g. input_image_path = r".\images\some_frame.jpg", and adjust the path to your file or folder? A quick sanity check like the one below can also help
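For example, something like this (not from the repo scripts, just plain OpenCV; the path is a placeholder you'd replace with your own file) will tell you whether the image can be read at all:

import os
import cv2

# Hypothetical path, adjust it to wherever your frame actually is.
input_image_path = r".\images\some_frame.jpg"

# cv2.imread does not raise on a bad path, it silently returns None,
# which is what later makes cv2.cvtColor fail with "!_src.empty()".
print("exists on disk:", os.path.exists(input_image_path))
image = cv2.imread(input_image_path)
if image is None:
    raise FileNotFoundError(f"OpenCV could not read: {os.path.abspath(input_image_path)}")

image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
print("loaded image with shape:", image_rgb.shape)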
@mukeshbabu7092 · 11 days ago
@@xlipdev NameError: name 'image' is not defined
PS C:\Users\Mukesh Babu\Documents\GitHub\New One\faceDepthAI-master> & "c:/Users/Mukesh Babu/Documents/GitHub/New One/faceDepthAI-master/.venv/Scripts/python.exe" "c:/Users/Mukesh Babu/Documents/GitHub/New One/faceDepthAI-master/face_mesh/create_single_sample_and_display.py"
[ WARN:[email protected]] global loadsave.cpp:241 cv::findDecoder imread_('images/0001.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "c:\Users\Mukesh Babu\Documents\GitHub\New One\faceDepthAI-master\face_mesh\create_single_sample_and_display.py", line 37, in <module>
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
cv2.error: OpenCV(4.10.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:196: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
@xlipdev · 13 days ago
Here's a quick demo of the plugin I'm working on! kzbin.info/www/bejne/hmKVZWiiZcmSmdE If you have a moment, I'd love to hear your thoughts, any feedback would be super helpful before I send it for approval. 😊 Thank you!
@incrediblesarath · 13 days ago
Cool!
@schiphuynh · 13 days ago
yes!
@Dongtian-n2n · 13 days ago
Is this a plugin that you made?
@xlipdev · 13 days ago
Yes
@animian · 13 days ago
First! Do I get a reward? ;)
@xlipdev · 13 days ago
😅 What's your wish?
@animian · 13 days ago
@@xlipdev hmm, would love to win the plugin or at least get to try it out 😃, I've been following your work for a bit and got intrigued. I've tried your approach through your earlier tutorial, but it's a bit finicky for mass use I feel, and this plugin solves that :) good job man!
@xlipdev · 13 days ago
@@animian Done! Thanks for the kind words man ^^ It'd be awesome if you could try it out and share any feedback before I wrap it up, so I can ensure the quality ^^ Can you add me on Discord?
@animian · 13 days ago
@@xlipdev nice nice, added you, tried sending a message :)
@bsasikff4464 · 19 days ago
Bro, when are you releasing the plugin????
@xlipdev · 19 days ago
@@bsasikff4464 working on it ^^ but it seems like it will take some time 🥲
@jesusforever444 · 19 days ago
This is too cute 😂😂😂💕
@manraytrace · 19 days ago
Thank you! The IK situation is obviously not great, but at least baking for arms now works. Are you getting baking working correctly for head and neck? I am getting NO baking for neck or head joints.
@xlipdev · 19 days ago
@@manraytrace yeah, head/neck bones/controllers don't have correct backwards solve methods in the modules, so they are not baked into the modular rig correctly. The best solution is actually using a "layered control rig", please try that, you don't even need to apply what I did in this video
@JantzenProduktions · 20 days ago
You changed my life! Thank you sooooooooooooooooooo much! I wanted to do this for so long!
@JantzenProduktions · 20 days ago
If you make a plugin for 5 bucks I would drive to Mark Zuckerberg and prove he is an alien
@xlipdev · 19 days ago
You are welcome 😅 Seems like the plugin thing will take time 🥲 I'm still discovering how to make it work with Unreal's integrated Python, and I haven't written many plugins for Unreal 😆
@CG-yf4qi · 23 days ago
This is great! Is this an asset I can buy in the marketplace?
@xlipdev · 23 days ago
@@CG-yf4qi I didn't put it in the marketplace since I shared the full tutorial/process ^^ but yeah, good idea, I will try to put it there
@CG-yf4qi · 20 days ago
@@xlipdev yeah, there are only 2 hand-holding assets in the marketplace as of now. Based on what I see here, yours would probably be the best
@xlipdev · 17 days ago
I published it with some improvements as well ^^ www.fab.com/listings/abc24af2-7e72-4ae2-98f9-4c4464835929
@황호준-c6u · 24 days ago
Hi, ModuleNotFoundError: No module named 'cv2'??? What is that???
@xlipdev · 24 days ago
@@황호준-c6u have you installed the requirements from the README?
@Dongtian-n2n · 26 days ago
What software do you use to convert the images into depth maps?
@xlipdev · 26 days ago
I use Python scripts ^^ I shared the source code in the description, you can take a look
@juanmaliceras · 27 days ago
Wow! Really useful feature!
@xlipdev · 27 days ago
Yea, Unreal is getting stronger ^^
@berriebuilds5310 · 27 days ago
Actually I don't understand the code/compiler part. Do I also need a compiler to make this work, because I don't know code? How do I create the depth maps? Please help me mate.. love this video
@xlipdev · 27 days ago
Yea, the process is a bit manual for now, I will take time to convert it into a plugin later on. Basically you need Python to run these scripts, and that shouldn't be that hard, you can follow the instructions in the repo README. If you have issues I can help
@ivonmorales2654 · a month ago
From the moment I saw the first second of the video, I was hooked. I will apply what you have shown. And since it's proven that you have the talent, do you think this could be done for full-body animations? Microsoft published something similar. I'd leave you the links in case you are interested. Thanks for your contribution... YouTube won't let me put the links, but I'll give you the title: "Look Ma, no markers: Holistic performance capture without the hassle", ACM Transactions on Graphics
@xlipdev · a month ago
@@ivonmorales2654 Many thanks for the info and kind words ^^ I will definitely check those. There are many apps and AIs for body motion capture, even for free, and I think Nvidia is also doing something about it. If you have an iPhone, life is easier for facial capture for now, so yeah, capturing the full body simultaneously is not a big deal. Here is an example shared by @paperino0 in the comments which looks nice kzbin.info/www/bejne/fKGWgWOqiNOMY7s , but no iPhone, no problem, you can still use this video's pipeline for facial capture ^^
@ivonmorales2654 · a month ago
And of course you should create the plugin, many of us would buy it!
@xlipdev · a month ago
@@ivonmorales2654 I will try then ^^
@josiahgil · a month ago
Can this also work with neck movements? Thank you for this informative tutorial.
@xlipdev · a month ago
@@josiahgil You are welcome ^^ Yes, head movement is also tracked by default during the performance. Here is an official tutorial about how to blend neck/head movement into the body if you are looking for that kzbin.info/www/bejne/hJzFZXd7pL-ShLssi=d8KGYS_x5-vRq1g_&t=1731
@josiahgil · a month ago
@@xlipdev thanks, I should've been clearer about what I meant: neck flexing, like when speaking, the neck stretch, neck flex, throat inhale, etc.
@xlipdev · a month ago
@@josiahgil oh I see, in the MetaHuman skeleton there are not many bones in the neck area (I think 2 or 3), and during a facial performance I believe only the head bone is tracked, so capturing precise neck movements doesn't seem possible initially. But you can add additional bones in that area and animate them yourself to match your facial animation ^^
@josiahgil · a month ago
@@xlipdev thank you🙏
@garryock2530 · a month ago
Nice mechanic for 2p games
@xlipdev · a month ago
@@garryock2530 Thanks for the comment ^^ At the end of the series I touched a bit on how to support multiple players and make different improvements depending on what you want, but initially, yeah, it is a cool mechanic for 2 players. I'm also preparing a bonus video about an AI companion which follows you to hold the current player's hand ^^
@ShinjiKeiVR10BetaUSA-s2t · a month ago
You are amazing! I am making a 3D animated movie using my MetaHuman. Someday I will need your help.
@xlipdev · a month ago
@@ShinjiKeiVR10BetaUSA-s2t Very cool! I hope my video helps ^^ You can always reach out to me through the repo, social media, etc. I can help ^^
@HongPong · a month ago
finally the default characters settle down
@xlipdev · a month ago
@@HongPong 😆
@DavidGFalzarano · a month ago
1 step closer to having an AI gf
@xlipdev · a month ago
@@DavidGFalzarano 🤣
@789alizaidi · a month ago
this is.... nice and unique
@xlipdev · a month ago
@@789alizaidi Thanks a lot for sharing your kind thoughts ^^ I'm happy you liked it
@screenapple1660 · a month ago
Holding hands. How do you make an AI character go ragdoll after running?
@xlipdev · a month ago
There is a playlist that covers almost everything about AI logic in Unreal Engine here kzbin.info/aero/PLNwKK6OwH7eW1n49TW6-FmiZhqRn97cRy and you can apply ragdoll in the same way, basically by triggering events
@jesusforever444 · a month ago
This is so cute 😅💕
@xlipdev · a month ago
😆 development is fun
@crazyguy7585 · a month ago
Nice tutorial, you got a new subscriber my friend 😊
@xlipdev · a month ago
@@crazyguy7585 Thank you so much, I'm happy you liked it ^^
@ThunderMan805 · a month ago
You are amazing ❤❤❤
@xlipdev · a month ago
@@ThunderMan805 Thank youu for the support ❤️
@incrediblesarath · a month ago
Thank you!
@xlipdev · a month ago
@@incrediblesarath You are welcome, I hope I helped ^^
@ThunderMan805 · a month ago
❤❤❤❤ Make more on how to have the character walk with us and stop with us 😊 + love you
@xlipdev · a month ago
@@ThunderMan805 Thank youu ❤️ actually you will see a "how to send a character to a place in the level" example in this series, but that's a good point, I will try to add a bonus video to show how to make the character follow you ^^
@eightrice · a month ago
What about including body animation from a separate camera so that we could have a full-body performance?
@xlipdev · a month ago
Good idea! There are many apps/AIs that can capture body animation, even for free, so yeah, that is also possible ^^
@paperino0 · a month ago
Check out "The Darkest Age"'s mocap tutorial. He uses 2 cameras, with an iPhone head mount for facial mocap and regular video with moveAI's video2mocap app, and records simultaneously. You could combine both tutorials to get full mocap with Android (or any video source)
@MashaBear-g8f · a month ago
Sexy voice and brilliant mind! ❤ Thank you so much for taking my tutorial request, you're the only one on YouTube who has this tutorial. I hope your channel gets the recognition and views it deserves. Much love!
@xlipdev · a month ago
Thank you sooo much for the kind words ❤ I really liked the request, thought it was a nice challenge and tried my best, I hope it helps ❤ I'm always open to requests, if you think you have a cool idea, let's do it ^^
@eightrice · a month ago
Can we make this work in real time from a camera feed?
@xlipdev · a month ago
Very good question! Technically yes, but I didn't care about optimization initially in the scripts, so it will probably require adjustments to create depth maps faster. I also have never tried to create a performance from a camera feed before; it can be achieved, but I'd also need to check how the current pipeline works with a camera feed
@caiyinke3404 · a month ago
This is a very, very, very good tutorial!! By the way, I have one question: how is the head rotation animation captured in the performance?
@caiyinke3404 · a month ago
Hi, after I drag and drop the calibration file this error pops up:
Warning: Failed to import 'D:\Projects\YQ\ABCD\B_CHAOMO\faceDepthAI-master\faceDepthAI-master\iphone_lens_calibration\Calibration.mhaical'. Unknown extension 'mhaical'.
I also ran pip install -r requirements.txt and got this:
Successfully installed CFFI-1.17.1 absl-py-2.1.0 attrs-24.2.0 contourpy-1.3.0 cycler-0.12.1 flatbuffers-24.3.25 fonttools-4.54.1 jax-0.4.34 jaxlib-0.4.34 kiwisolver-1.4.7 matplotlib-3.9.2 mediapipe-0.10.14 ml-dtypes-0.5.0 numpy-2.1.2 opencv-contrib-python-4.10.0.84 opencv-python-4.10.0.84 openexr-3.3.1 opt-einsum-3.4.0 packaging-24.1 pillow-10.4.0 protobuf-4.25.5 pycparser-2.22 pyparsing-3.1.4 python-dateutil-2.9.0.post0 scipy-1.14.1 six-1.16.0 sounddevice-0.5.0 trimesh-4.4.9
@xlipdev · a month ago
@@caiyinke3404 Thanks ^^ The head is also tracked during the performance by default, but you can disable it I believe; there are many videos around on how to adjust the body with the head, you can check those. Just drag one MetaHuman into your level first and Unreal Engine will enable the required plugins; probably some MetaHuman-related plugins are not enabled in the engine, and that's why you can't import the 'mhaical' file
@madzorojuro · a month ago
Good video, you have earned yourself a subscriber
@xlipdev · a month ago
@@madzorojuro thank youu ^^ you are the 100th one 🥳
@MarchantFilms-ef1dq · a month ago
Amazing, thanks for sharing this process! I was wondering, can we use a depth map generated from other sources? DaVinci Resolve can generate a depth map, and there are AIs, even free ones, that can generate depth maps from an image or video. Do we need to convert these depth maps, or can we use them directly with this process?
@xlipdev · a month ago
@MarchantFilms-ef1dq For the Unreal editor, the depth data should be in .exr format, the depth values should be written in the "Y" channel, and the values have to be within a certain range depending on the device class you choose in calibration (iPhone 14 or later expects somewhere between 15 and 40 for the face area). You also need to double-check 0 and infinite values and arrange them. So unfortunately, taking the output directly from an app is most likely not going to work, but you can still edit/modify your depth map to match these things and it should work; there is a rough sketch below of what writing such an .exr could look like. I initially started with the MiDaS depth generator AI model to create some depth maps, but it didn't go well, so I decided to create them myself 😅
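For reference, here is a minimal sketch of writing one depth frame into the "Y" channel of an .exr using the OpenEXR/Imath bindings from requirements.txt. This is not the exact script from my repo; the file name, resolution, the placeholder value of 30, and the naive handling of zero/infinite values are just illustrative assumptions you'd adapt to your own data.

import numpy as np
import OpenEXR
import Imath

def write_depth_exr(depth: np.ndarray, path: str) -> None:
    # depth: 2D float array already scaled to the range the importer expects
    # (roughly 15-40 for the face area with the iPhone 14+ device class).
    height, width = depth.shape
    depth = depth.astype(np.float32)

    # Naive placeholder handling of zero / infinite values, which the importer dislikes.
    depth[~np.isfinite(depth)] = 30.0
    depth[depth <= 0.0] = 30.0

    header = OpenEXR.Header(width, height)
    # A single channel named "Y" holding 32-bit float depth.
    header["channels"] = {"Y": Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))}

    out = OpenEXR.OutputFile(path, header)
    out.writePixels({"Y": depth.tobytes()})
    out.close()

# Example: a fake frame where the whole face plane sits at a depth of 30.
fake_depth = np.full((480, 640), 30.0, dtype=np.float32)
write_depth_exr(fake_depth, "frame_0001.exr")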
@nevfelemrecicek · a month ago
Sir, we've got everything set up, we're waiting.
@xlipdev · a month ago
😂 Probably I'll put it up as a video series, step by step, since that will be better and make it easier to cover the development of the different parts. I'll upload it today or tomorrow 😅
@adomyip · a month ago
Thanks for sharing this great method and the scripts!
@xlipdev · a month ago
You're very welcome!
@pandaavintage2698 · a month ago
That's really cool! Thank you heaps. I've been trying to recreate the camera from Tiny Glade :)
@xlipdev · a month ago
I'm glad if I helped ^^ you can check the CameraDepthFade node as well, maybe it gives you different ideas ^^
@pandaavintage2698 · a month ago
@@xlipdev thank you for the idea! I'll let you know how it goes :)
@nevfelemrecicek · a month ago
You explained it very well, nicely done. As an addition, it could also have been done using "FInterpTo" in the Tick event. Of course, you can control that with a boolean, and it could even be written in C++.
@xlipdev · a month ago
Thank you so much, yes, that's an important point: the "FInterpTo" method can also be used if we don't want to deal with curve objects, thanks a lot for mentioning it ^^ Also, you're right that writing it in C++ is of course more effective and provides broader control
@PaulGriswold1 · a month ago
Is there anything different/special/unusual about the depth maps? Could I literally use DaVinci Resolve's depth map extractor with video to do the same thing?
@xlipdev · a month ago
@@PaulGriswold1 For the Unreal editor, the depth data should be in .exr format, the depth values should be written in the "Y" channel, and the values have to be within a certain range depending on the device class you choose in calibration (iPhone 14 or later expects somewhere between 15 and 40 for the face area). So unfortunately, taking the output directly from an app is most likely not going to work, but you can still edit your depth map to match these things and it should work.
@MashaBear-g8f · a month ago
Perfect approach! Will you do two characters holding each other's hands via their bones?
@xlipdev · a month ago
Yes, I will look into the holding-hands-in-real-time topic and try to create a tutorial for it ♥
@original9 · a month ago
@xlipdev I got this error whilst trying to install, any ideas?
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for openexr
Failed to build openexr
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (openexr)
@xlipdev · a month ago
Seems like you have issues with CMake. Make sure that CMake is installed and available in your system's PATH; maybe you need to update it. Or maybe the issue is related to one of the below: you don't have a compatible C++ compiler (such as gcc for Linux or MinGW for Windows), or openexr may not be compatible with the version of Python you're using, so update Python