Here’s a quick demo of the plugin I’m working on! kzbin.info/www/bejne/hmKVZWiiZcmSmdE If you have a moment, I’d love to hear your thoughts; any feedback would be super helpful before I send it for approval. 😊 Thank you!
@ivonmorales2654 · 1 month ago
And of course you should create the plugin, many of us would buy it!
@xlipdev · 1 month ago
@@ivonmorales2654 I will try then ^^
@rklehm · 1 month ago
I use a similar pipeline to yours... I never really understood why Unreal doesn't do this by default; it is so simple and just works.
@xlipdev · 1 month ago
Great! Nice to see I reached people who are playing with it as well ^^ And exactly, that was my point 😅
@amirhasani6401 · 1 month ago
Bro, I usually don't comment or like on KZbin, but you are really a genius. If I could, I would personally find your location and thank you. Good luck!
@xlipdev · 1 month ago
Thanks a lot man for the kind words ^^ I'm glad if I helped in any way ^^
@squeezypixels · 1 month ago
Thanks for the video. As far as I can see, this gives the best results for non-iPhone videos.
@JantzenProduktions · 19 days ago
You changed my life! Thank you sooooooooooooooooooo much! I've wanted to do this for so long!
@JantzenProduktions · 19 days ago
If you make a plugin for 5 bucks, I would drive to Mark Zuckerberg and prove he is an alien
@xlipdev · 19 days ago
You are welcome 😅 Seems like the plugin will take some time 🥲 I'm still figuring out how to make it work with Unreal's integrated Python, and I haven't written many plugins for Unreal 😆
@Ateruber · 8 days ago
Amazing! Does this only work on RTX video cards or can it also work on GTX?
@xlipdev · 8 days ago
@@Ateruber I believe there is no GPU restriction overall for MetaHuman performances, so it should work.
@caiyinke3404 · 1 month ago
This is a very, very, very good tutorial!! By the way, I have one question: how well does the head rotation capture perform?
@caiyinke3404 · 1 month ago
Hi, after I drag and drop the calibration file, this error pops up:
Warning: Failed to import 'D:\Projects\YQ\ABCD\B_CHAOMO\faceDepthAI-master\faceDepthAI-master\iphone_lens_calibration\Calibration.mhaical'. Unknown extension 'mhaical'.
I also ran pip install -r requirements.txt and got this:
Successfully installed CFFI-1.17.1 absl-py-2.1.0 attrs-24.2.0 contourpy-1.3.0 cycler-0.12.1 flatbuffers-24.3.25 fonttools-4.54.1 jax-0.4.34 jaxlib-0.4.34 kiwisolver-1.4.7 matplotlib-3.9.2 mediapipe-0.10.14 ml-dtypes-0.5.0 numpy-2.1.2 opencv-contrib-python-4.10.0.84 opencv-python-4.10.0.84 openexr-3.3.1 opt-einsum-3.4.0 packaging-24.1 pillow-10.4.0 protobuf-4.25.5 pycparser-2.22 pyparsing-3.1.4 python-dateutil-2.9.0.post0 scipy-1.14.1 six-1.16.0 sounddevice-0.5.0 trimesh-4.4.9
@xlipdev · 1 month ago
@@caiyinke3404 Thanks ^^ The head is also tracked during performance by default, but I believe you can disable it; there are many videos around on how to adjust the body to follow the head, you can check those. For the import error: drag one MetaHuman into your level first and Unreal Engine will enable the required plugins. Probably some MetaHuman-related plugins are not enabled in the engine, and that's why you can't import the '.mhaical' file.
@adomyip · 1 month ago
Thanks for sharing this great method and the scripts!
@xlipdev · 1 month ago
You're very welcome!
@MarchantFilms-ef1dq · 1 month ago
Amazing, thanks for sharing this process! I was wondering, can we use a depth map generated from other sources? DaVinci Resolve can generate a depth map, and there are AIs, even free ones, that can generate depth maps from an image or video. Do we need to convert these depth maps, or can we use them directly with this process?
@xlipdev · 1 month ago
@MarchantFilms-ef1dq For the Unreal editor, the depth data should be in .exr format, the depth should be written in the "Y" channel, and the depth values have to fall within certain ranges depending on the device class you choose in calibration (iPhone 14 or later expects somewhere between 15 and 40 for the face area). You also need to double-check 0 and infinite values and fix them up. So unfortunately, taking a depth map directly from another app is most likely not going to work. But you can still edit/modify your depth map to match these requirements and it should work. I initially started with the MiDaS depth-estimation AI model to create some depth maps, but it didn't go well, so I decided to create them myself 😅
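Roughly, writing a depth EXR like that with the openexr package from requirements.txt could look like the sketch below. This is just an illustration, not the actual script from the repo; the clamp range applied to the whole frame, pushing invalid pixels to the far value, and the example file name are all assumptions:

```python
import numpy as np
import OpenEXR
import Imath

def write_depth_exr(depth: np.ndarray, path: str) -> None:
    """Write a float32 depth map into a single 'Y' channel of an EXR file."""
    height, width = depth.shape
    depth = depth.astype(np.float32)

    # Clean up 0 / inf / NaN values and keep depths inside the 15-40 range
    # mentioned above (assumed here to apply to the whole frame, not just the face).
    depth = np.nan_to_num(depth, nan=40.0, posinf=40.0, neginf=40.0)
    depth[depth <= 0.0] = 40.0
    depth = np.clip(depth, 15.0, 40.0)

    header = OpenEXR.Header(width, height)
    header["channels"] = {"Y": Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))}

    out = OpenEXR.OutputFile(path, header)
    out.writePixels({"Y": depth.tobytes()})
    out.close()

# Example with a dummy flat frame; replace with real per-frame depth data.
write_depth_exr(np.full((1080, 1920), 30.0), "frame_0001.exr")
```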
@HatipAksünger · 1 month ago
Great idea, thanks for the video!
@juanmaliceras · 26 days ago
Wow! Really useful feature!
@xlipdev · 26 days ago
Yea, Unreal is getting stronger ^^
@madzorojuro · 1 month ago
Good video, you have earned yourself a subscriber.
@xlipdev · 1 month ago
@@madzorojuro thank youu ^^ you are the 100th one 🥳
@ivonmorales2654 · 1 month ago
From the moment I saw the first second of the video, I was hooked. I will apply what you have shown. And since it's proven that you have the talent, do you think this could be done for full-body animations? Microsoft published something similar; I would leave you the links in case you are interested. Thanks for your contribution... KZbin won't let me put the links, but I'll give you the title: "Look Ma, no markers: Holistic performance capture without the hassle", ACM Transactions on Graphics.
@xlipdev · 1 month ago
@@ivonmorales2654 Many thanks for the info and the kind words ^^ I will definitely check those. There are many apps and AIs for body motion capture, even for free; I think Nvidia is also doing something about it. If you have an iPhone, life is easier for facial capture for now, so yeah, capturing the full body simultaneously is not a big deal. Here is an example shared by @paperino0 here in the comments that looks nice: kzbin.info/www/bejne/fKGWgWOqiNOMY7s . And no iPhone, no problem, you can still use this video's pipeline for facial capture ^^
@josiahgil · 1 month ago
Can this also work with neck movements? Thank you for this informative tutorial.
@xlipdev · 1 month ago
@@josiahgil You are welcome ^^ Yes, head movement is also tracked by default during performance. Here is an official tutorial about how to blend neck/head movement into the body, if you are looking for that: kzbin.info/www/bejne/hJzFZXd7pL-ShLssi=d8KGYS_x5-vRq1g_&t=1731
@josiahgil · 1 month ago
@@xlipdev Thanks, I should've been clearer about what I meant: neck flexing, like when speaking, the neck stretching and flexing, throat inhaling, etc.
@xlipdev · 1 month ago
@@josiahgil Oh I see. In the MetaHuman skeleton there are not many bones in the neck area (I think 2 or 3), and during facial performance I believe only the head bone is tracked, so capturing precise neck movements doesn't seem possible out of the box. But you can add extra bones in that area and animate them yourself to match your facial animation ^^
@josiahgil · 1 month ago
@@xlipdev thank you🙏
@bsasikff4464 · 18 days ago
Bro, when are you releasing the plugin ????
@xlipdev · 18 days ago
@@bsasikff4464 working on it ^^ but seems like it will take some time 🥲
@tvgestaltung · 5 days ago
Hi, your work is very impressive. I’m having trouble importing the file Calibration.mhaical into Unreal Engine 5.4. Error: unknown extension. Do you have a solution for this issue? Thank you very much!
@xlipdev · 5 days ago
Thanks ^^ You need to enable the Epic Games MetaHuman plugin to import it.
@tvgestaltung · 5 days ago
@@xlipdev Thank you very much for the super quick and correct answer. I was apparently too tired yesterday to realize it, as I thought I had already turned it on.
@ShinjiKeiVR10BetaUSA-s2t · 1 month ago
You are amazing! I am making a 3D animation movie using my MetaHuman. Someday I might need your help.
@xlipdev · 1 month ago
@@ShinjiKeiVR10BetaUSA-s2t Very cool! I hope my video helps ^^ You can always reach me through the repo, social media, etc. I can help ^^
@eightrice · 1 month ago
what about including body animation from a separate camera so that we could have a full-body performance?
@xlipdev · 1 month ago
Good idea! There are many apps/AIs you can use to capture body animation, even for free, so yeah, that is also possible ^^
@paperino0 · 1 month ago
Check out "The Darkest Age"'s mocap tutorial. he uses 2 cameras with a iphone head mount for facial mocap and regular video with moveAI's video2mocap app and records simultanously. you could combine both tutorials to get full mocap with android (or any video source)
@berriebuilds5310 · 27 days ago
Actually I don't understand the code/compiler part. Do I also need a compiler for this to work? Because I don't know code... How do I create the depth maps? Please help me mate.. love this video
@xlipdev · 27 days ago
Yeah, the process is a bit manual for now; it will take time to convert it into a plugin later on. Basically you need Python to run these scripts, and that shouldn't be too hard, you can follow the instructions in the repo README. If you have issues I can help.
@incrediblesarath · 1 month ago
Thank you!
@xlipdev · 1 month ago
@@incrediblesarath You are welcome, I hope I helped ^^
@PaulGriswold1 · 1 month ago
Is there anything different/special/unusual about the depth maps? Could I literally use DaVinci Resolve's depth map extractor on a video to do the same thing?
@xlipdev · 1 month ago
@@PaulGriswold1 For the Unreal editor, the depth data should be in .exr format, the depth should be written in the "Y" channel, and the depth values have to fall within certain ranges depending on the device class you choose in calibration (iPhone 14 or later expects somewhere between 15 and 40 for the face area). So unfortunately, taking a depth map directly from another app is most likely not going to work. But you can still edit your depth map to match these requirements and it should work.
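If you want to experiment with adapting an external depth map (for example one exported from DaVinci Resolve) into that range before writing it to EXR, a rough, untested sketch of the remapping could look like this; the 15 to 40 target values follow what I wrote above, and treating invalid pixels as "far" is just an assumption:

```python
import numpy as np

def remap_external_depth(depth: np.ndarray,
                         near: float = 15.0,
                         far: float = 40.0) -> np.ndarray:
    """Rescale an arbitrary depth map into [near, far] and clean invalid pixels."""
    depth = depth.astype(np.float32)

    # Ignore NaN/inf/zero pixels when computing the input range.
    valid = np.isfinite(depth) & (depth > 0)
    if not valid.any():
        raise ValueError("depth map has no valid pixels")

    lo, hi = depth[valid].min(), depth[valid].max()
    scaled = (depth - lo) / max(hi - lo, 1e-6)   # normalize to 0..1
    remapped = near + scaled * (far - near)      # stretch into near..far
    remapped[~valid] = far                       # invalid pixels become "far"
    return np.clip(remapped, near, far)
```

Note that many AI depth tools output inverse depth (disparity) rather than depth; in that case the map has to be inverted first, otherwise near and far end up swapped.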
@Dongtian-n2n · 26 days ago
What software did you use to convert the images into depth maps?
@xlipdev · 26 days ago
I use Python scripts ^^ I shared the source code in the description, you can take a look.
@original9 · 1 month ago
@xlipdev I got this error whilst trying to install, any ideas?
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for openexr
Failed to build openexr
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (openexr)
@xlipdev · 1 month ago
Seems like you have issues with CMake. Make sure CMake is installed and available in your system's PATH; maybe you need to update it. Otherwise the issue may be related to one of the following: you don't have a compatible C++ compiler (such as gcc on Linux or MinGW on Windows), or openexr may not be compatible with the version of Python you're using, so update Python.
@satyaa999 · 6 days ago
Unable to drag and drop the .mhaical calibration file into Unreal Engine; it is giving an unknown file extension error. Does anyone have a solution for this?
@xlipdev · 5 days ago
@@satyaa999 You must enable the MetaHuman plugin.
@mukeshbabu7092 · 11 days ago
I'm having this problem, please assist me in resolving it:
error: OpenCV(4.10.0) ... error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
This means that the cv2.cvtColor() function is being called on an empty image (_src.empty() is true). It indicates that OpenCV couldn't read the image properly.
@xlipdev · 11 days ago
This seems like an easy one; you probably didn't set a correct image path for the script. Please double-check "input_image_path" (if you are trying to display a single frame) or "input_folder" (if you are trying to convert all images in that folder), and make sure you have '.png', '.jpg', or '.jpeg' files inside that folder.
@mukeshbabu7092 · 10 days ago
Hello, I tried using a different computer twice as well, but the problem persisted.
@xlipdev · 10 days ago
Could you create an issue in the repo with the scripts you are trying to run and the details? Lemme check and help.
@xlipdev · 10 days ago
Btw, can you try the path like this, e.g. input_image_path = r".\images\some_frame.jpg", and adjust the path to your file or folder?
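For anyone hitting the same cv::cvtColor assertion, a small check like the sketch below (not part of the repo; the file name is just an example) surfaces the real problem before cvtColor ever runs:

```python
import os
import cv2

# Hypothetical example path; point it at wherever your frames actually are.
input_image_path = r".\images\some_frame.jpg"

if not os.path.isfile(input_image_path):
    raise FileNotFoundError(f"No such file: {os.path.abspath(input_image_path)}")

image = cv2.imread(input_image_path)  # returns None instead of raising on failure
if image is None:
    raise RuntimeError(f"OpenCV could not decode {input_image_path} (unsupported or corrupt file)")

image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # safe now: image is not empty
print("Loaded frame with shape:", image_rgb.shape)
```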
@mukeshbabu7092 · 10 days ago
@@xlipdev NameError: name 'image' is not defined
PS C:\Users\Mukesh Babu\Documents\GitHub\New One\faceDepthAI-master> & "c:/Users/Mukesh Babu/Documents/GitHub/New One/faceDepthAI-master/.venv/Scripts/python.exe" "c:/Users/Mukesh Babu/Documents/GitHub/New One/faceDepthAI-master/face_mesh/create_single_sample_and_display.py"
[ WARN:0@1.867] global loadsave.cpp:241 cv::findDecoder imread_('images/0001.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "c:\Users\Mukesh Babu\Documents\GitHub\New One\faceDepthAI-master\face_mesh\create_single_sample_and_display.py", line 37, in <module>
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
cv2.error: OpenCV(4.10.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:196: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
@eightrice · 1 month ago
Can we make this work in real time from a camera feed?
@xlipdev · 1 month ago
Very good question! Technically yes, but I didn't worry about optimization in the scripts initially, so it will probably require adjustments to create depth maps faster. I also have never tried to create a performance from a camera feed before; it can be achieved, but I would also need to check how the current pipeline works with a camera feed.
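As a rough starting point for experimenting, grabbing webcam frames and running MediaPipe Face Mesh on them in real time could look like the sketch below (untested against this pipeline; turning the landmarks into MetaHuman-ready depth EXRs fast enough is the part that would still need work):

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

cap = cv2.VideoCapture(0)  # default webcam
with mp_face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # MediaPipe expects RGB, OpenCV delivers BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        if results.multi_face_landmarks:
            # Each landmark carries normalized x, y and a relative z value that a
            # per-frame depth-map generator could rasterize; index 1 is commonly
            # used as the nose tip.
            landmarks = results.multi_face_landmarks[0].landmark
            print(f"nose tip relative z: {landmarks[1].z:.4f}")

        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

cap.release()
cv2.destroyAllWindows()
```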
@salmanbasir7213 · 1 month ago
Hi, the terminal part gives this error and does not run: ModuleNotFoundError: No module named 'cv2'
@xlipdev · 1 month ago
@@salmanbasir7213 Have you installed the requirements mentioned in the README? opencv-python provides the cv2 module; it should work once you get that package.
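A quick sanity check after pip install -r requirements.txt (or just pip install opencv-python) is to run:

```python
# If this prints a version string, the cv2 module is importable in the
# Python environment you are running the scripts with.
import cv2
print(cv2.__version__)
```

If the import still fails after installing, the scripts are probably being run with a different Python interpreter than the one pip installed into.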
@bluedott_vfx · 2 months ago
Thank you bro....I really needed this for my next project, and I don't have an iPhone. Thank you again. Can you tell me how to open the Python file? I don't have any app to run it.
@xlipdev · 2 months ago
You need a Python interpreter; just go to python.org (www.python.org) and install it. You can use pip as the package manager to install the required packages for that repo (not many) and then run the scripts (from the command line or an IDE). I will probably update the repo with a README file soon 👍
@bluedott_vfx · 2 months ago
@@xlipdev thank you for the help
@xlipdev · 2 months ago
I created the README installation/usage guide in the repo. If you have questions you can always reach out to me 👍
@bluedott_vfx · 2 months ago
@@xlipdev thank you so much
@soumyakushwaha7406 · 2 months ago
Hello, nice video, you explained it really well. I'm getting this error: "Not enough background threads available: required 10, available 8. The MetaHuman pipeline is going to run on a single thread". Do I have to clean it up in Blender? On the internet some people say that you have to remove everything in Blender other than the face. Is this the issue?
@xlipdev · 2 months ago
@@soumyakushwaha7406 When are you getting this? From the MetaHuman pipeline during 'prepare for performance', or during the mocap 'process' step? It shouldn't be required to clear everything; Unreal is smart enough to find face landmarks from the image if the image is clear. It seems like you need to set some property that tells the Unreal MetaHuman pipeline how many threads it should use.
@soumyakushwaha7406 · 2 months ago
@@xlipdev in mocap process
@xlipdev · 2 months ago
@@soumyakushwaha7406 Did you check the minimum device requirements for MetaHumans from Unreal Engine? Otherwise you might try closing some services that use significant CPU during processing; maybe that helps free up a big enough thread pool 🤔
@soumyakushwaha7406 · 2 months ago
@@xlipdev Ya, it is showing something like an 8-core CPU, 32 GB RAM and 8 GB VRAM; I have a 6-core CPU, 16 GB RAM and 8 GB VRAM.
@soumyakushwaha7406 · 2 months ago
But on KZbin, people with low-spec machines are creating that MetaHuman stuff.
@bluedott_vfx · 2 months ago
You just hacked UE5 bro!!!
@xlipdev · 2 months ago
Someone has to show the Epic guys that this is possible, I think; they are so stubborn about not supporting Android 😆
@bluedott_vfx · 2 months ago
@@xlipdev yes exactly. you discovered a new way bro. Hats off
@황호준-c6u · 24 days ago
Hii, ModuleNotFoundError: No module named 'cv2'??? What is that???
@xlipdev · 24 days ago
@@황호준-c6u Have you installed the requirements in the README?