A fresh, updated video guide on using the repository with new features is coming soon... stay tuned! If you're interested, you can check out the editor plugin video here: kzbin.info/www/bejne/r2TbiGxpnbmJorc
@rklehm (2 months ago)
I use a similar pipeline to yours... I never really understood why Unreal doesn't do this by default; it is so simple and just works.
@xlipdev (2 months ago)
Great! Nice to see I reached people who are playing with it as well ^^ And exactly, that was my point 😅
@amirhasani6401 (3 months ago)
Bro, I usually don't comment or like on KZbin, but you are really a genius. If I could, I would personally find your location and thank you. Good luck!
@xlipdev (3 months ago)
Thanks a lot for the kind words, man ^^ I'm glad I helped in any way ^^
@JantzenProduktions (a month ago)
You changed my life! Thank you soooo much! I've wanted to do this for so long!
@JantzenProduktions (a month ago)
If you make a plugin for 5 bucks, I would drive to Mark Zuckerberg and prove he is an alien.
@xlipdev (a month ago)
You are welcome 😅 Seems like the plugin will take some time 🥲 I'm still figuring out how to make it work with Unreal's integrated Python, and I haven't written many plugins for Unreal 😆
@hoseynheydari2901 (a month ago)
@@xlipdev DUDE, $20 IS TOO MUCH. LOWER THE PRICE OR MAKE IT FREE, DON'T BE AN ASSHOLE. YOU'LL GET A LOT OF MONEY WHEN YOUR CHANNEL GOES VIRAL FOR THIS PLUGIN YOU RELEASED LAST NIGHT. STOP SELLING IT AT UNREASONABLE PRICES, MAKE IT FREE.
@squeezypixels (3 months ago)
Thanks for the video. As far as I can see, this gives the best results for non-iPhone videos.
@PaulGriswold1 (2 months ago)
Is there anything different/special/unusual about the depth maps? Could I literally use DaVinci Resolve's depth map extractor on video to do the same thing?
@xlipdev (2 months ago)
@@PaulGriswold1 For the Unreal editor, depth data should be in .exr format, the depth values should be written in the "Y" channel, and the values have to fall within a certain range depending on the device class you choose in calibration (iPhone 14 or later expects somewhere between 15 and 40 for the face area). So unfortunately, using the output of an app directly is most likely not going to work. But you can still edit your depth map to match these requirements, and it should work.
@lemn1165 (19 days ago)
Hi, at 6:22 I can't find the MetaHuman Animator capture data asset, please help.
@xlipdev (19 days ago)
@@lemn1165 You need to enable the official MetaHuman plugin from Epic Games to create these assets 👍
@lemn1165 (19 days ago)
@@xlipdev Thanks a lot, you're a good human being
@mukeshbabu7092 (11 days ago)
Hi, I'm having some trouble. The MetaHuman Identity panel is blank when I open it, even though I followed all the steps. Is there a specific issue? If so, please let me know how to fix it.
@xlipdev (11 days ago)
You're not giving much detail, my friend 😅 There could be many things that went wrong. Start by checking in a file explorer that the RGB and depth frames exist in their folders. Then review your image sequences to ensure they open correctly, have the right dimensions, and show the expected FPS values. Finally, inspect your footage thoroughly.
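A minimal sketch of that first check (the folder names and frame extensions here are assumptions; adjust them to wherever your frames were written):
    import glob, os
    import cv2  # provided by the opencv-python package

    rgb_dir = "./footage/rgb"      # assumed folder of extracted RGB frames
    depth_dir = "./footage/depth"  # assumed folder of generated depth .exr frames

    rgb_frames = sorted(glob.glob(os.path.join(rgb_dir, "*.jpg")))
    depth_frames = sorted(glob.glob(os.path.join(depth_dir, "*.exr")))
    print(len(rgb_frames), "RGB frames,", len(depth_frames), "depth frames")

    first = cv2.imread(rgb_frames[0])
    # For this pipeline the RGB frames are expected to be 720x1280 portrait
    print("first frame (height, width):", first.shape[:2])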
@cjytber9857 (25 days ago)
Bro, after following your instructions, I finally made it! You're a genius! Thank you very much! I'd like to ask: do you plan to develop a feature that allows Android phones to use Live Link as well?
@xlipdev (25 days ago)
Ooh, that's great to hear! Enjoy your depth frames 🥳🥳 Thanks for the kind words ^^ Actually, I never thought about making an Android app, but it sounds like a really good idea tbh! Phones may not be strong enough to generate all the depth frames on the device, but I'll definitely think about it, maybe processing on a backend with some AI to perfect them. I'll definitely consider it, thanks for the idea!
@cjytber9857 (24 days ago)
@@xlipdev That's awesome! I sincerely hope you succeed and release it on Fab! I'd be happy to pay to support your plugin! I've already subscribed to your channel and will be waiting for you to publish your plugin on Fab!
@xlipdev (24 days ago)
@@cjytber9857 Great, thanks for the support, man! I'm hoping to get approval as well 😅 The process is still in review, and I'll share updates on how it goes ^^
@MarchantFilms-ef1dq (2 months ago)
Amazing, thanks for sharing this process! I was wondering, can we use a depth map generated from other sources? DaVinci Resolve can generate a depth map, and there are even free AIs that can generate depth maps from an image or video. Do we need to convert these depth maps, or can we use them directly with this process?
@xlipdev (2 months ago)
@MarchantFilms-ef1dq For the Unreal editor, depth data should be in .exr format, the depth values should be written in the "Y" channel, and the values have to fall within a certain range depending on the device class you choose in calibration (iPhone 14 or later expects somewhere between 15 and 40 for the face area). You also need to double-check 0 and infinite values and fix them. So unfortunately, using the output of an app directly is most likely not going to work, but you can still edit/modify your depth map to match these requirements and it should work. I initially started with the MiDaS depth-estimation AI model to create some depth maps, but it didn't go well, so I decided to create them myself 😅
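For reference, a minimal sketch of writing one depth frame as a single-channel "Y" EXR using the OpenEXR/Imath Python bindings from the requirements list; the file name, resolution, and the placeholder values in the 15-40 face range are assumptions, not the repo's actual code:
    import numpy as np
    import OpenEXR, Imath

    # Placeholder depth values already scaled into the expected 15-40 face range,
    # sized to match the 360x640 depth frames this pipeline produces.
    depth = np.random.uniform(15.0, 40.0, (640, 360)).astype(np.float32)

    header = OpenEXR.Header(depth.shape[1], depth.shape[0])  # width, height
    header["channels"] = {"Y": Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))}

    exr = OpenEXR.OutputFile("depth_0001.exr", header)
    exr.writePixels({"Y": depth.tobytes()})
    exr.close()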
@caiyinke3404 (2 months ago)
This is a very, very, very good tutorial!! By the way, I have one question: how well is head rotation captured during the performance?
@caiyinke3404 (2 months ago)
Hi, after I drag and drop the calibration file, this error pops up: Warning: Failed to import 'D:\Projects\YQ\ABCD\B_CHAOMO\faceDepthAI-master\faceDepthAI-master\iphone_lens_calibration\Calibration.mhaical'. Unknown extension 'mhaical'. I also ran pip install -r requirements.txt and got this: Successfully installed CFFI-1.17.1 absl-py-2.1.0 attrs-24.2.0 contourpy-1.3.0 cycler-0.12.1 flatbuffers-24.3.25 fonttools-4.54.1 jax-0.4.34 jaxlib-0.4.34 kiwisolver-1.4.7 matplotlib-3.9.2 mediapipe-0.10.14 ml-dtypes-0.5.0 numpy-2.1.2 opencv-contrib-python-4.10.0.84 opencv-python-4.10.0.84 openexr-3.3.1 opt-einsum-3.4.0 packaging-24.1 pillow-10.4.0 protobuf-4.25.5 pycparser-2.22 pyparsing-3.1.4 python-dateutil-2.9.0.post0 scipy-1.14.1 six-1.16.0 sounddevice-0.5.0 trimesh-4.4.9
@xlipdev (2 months ago)
@@caiyinke3404 Thanks ^^ The head is also tracked by default during the performance, but I believe you can disable it; there are many videos around on how to blend the head with the body that you can check. As for the import error: drag one MetaHuman into your level first and Unreal Engine will enable the required plugins. Probably some MetaHuman-related plugins are not enabled in the engine, and that's why you can't import the '.mhaical' file.
@juanmaliceras (2 months ago)
Wow! Really useful feature!
@xlipdev (2 months ago)
Yea, Unreal is getting stronger ^^
@adomyip (2 months ago)
Thanks for sharing this great method and the scripts!
@xlipdev (2 months ago)
You're very welcome!
@eightrice (2 months ago)
What about including body animation from a separate camera so that we could have a full-body performance?
@xlipdev (2 months ago)
Good idea! There are many apps/AIs you can use to capture body animation, even for free, so yeah, that is also possible ^^
@paperino0 (2 months ago)
Check out "The Darkest Age"'s mocap tutorial. He uses 2 cameras, an iPhone on a head mount for facial mocap plus regular video with moveAI's video2mocap app, and records simultaneously. You could combine both tutorials to get full mocap with Android (or any video source).
@ivonmorales2654 (2 months ago)
And of course you should create the plugin, many of us would buy it!
@xlipdev (2 months ago)
@@ivonmorales2654 I will try then ^^
@hoseynheydari2901 (a month ago)
IT SHOULD BE FREE TO USE, NOT FOR SALE
@matttgaminghd377 (a month ago)
Is there a way that, if it crashes while processing the depth maps, I can simply restart where it left off instead of restarting from the beginning? I'm processing something with a lot of frames. Also, how do I stop it crashing mid-conversion? It seems to gradually use more memory as it goes until it crashes, even if I have nothing else open; it happens after about 2,500 frames.
@xlipdev (a month ago)
You can break a large performance down into smaller chunks, process them individually, and then merge them in the sequencer. Additionally, in the capture data there's an option, something like an "ignored frames" array, which allows you to skip any problematic frames.
@matttgaminghd377 (a month ago)
@@xlipdev That is what I'm thinking, but I'm wondering if there's a fix to prevent it running out of memory, seeing as these videos I'm processing have around 25,000 frames. And is there a way I can make it skip the frames that are already done?
@matttgaminghd377 (a month ago)
@@xlipdev The error I got was "Unable to allocate 7.03MiB for an array with shape (1280, 720) and data type float64"
@xlipdev (a month ago)
@@matttgaminghd377 That's a really good point, actually. You can do the separation manually, converting to depth frames chunk by chunk and then creating a footage asset for each chunk, but I will add this feature to the repo and the plugin 👍
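A rough sketch of that manual split, assuming the depth frames are written with the same base name as their source frame; the folder names and the per-frame call are placeholders for the repo's actual scripts. Keeping buffers in float32 rather than float64 also roughly halves the memory per frame:
    import glob, os

    frame_paths = sorted(glob.glob("./frames/*.jpg"))           # assumed RGB frame folder
    done = {os.path.splitext(os.path.basename(p))[0]
            for p in glob.glob("./depth/*.exr")}                 # depth frames already written

    todo = [p for p in frame_paths
            if os.path.splitext(os.path.basename(p))[0] not in done]
    print(len(todo), "frames left to process")

    CHUNK = 1000
    for start in range(0, len(todo), CHUNK):
        for frame_path in todo[start:start + CHUNK]:
            # call the repo's per-frame depth generation here, e.g. generate_depth(frame_path)
            pass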
@matttgaminghd377 (a month ago)
@@xlipdev Is there a way to prevent the out-of-memory issue? The weird thing is it doesn't seem to be using 100% CPU. I actually tried running 4 processes on different chunks and they didn't take up much memory, but they all crashed at the same time.
@One1ye19 (7 days ago)
Thanks for the video. I am getting an error at 12:26: ValueError: string is not a file: face_model_with_iris.obj
@xlipdev (7 days ago)
Thanks ^^ You need to be in the face_mesh directory before running that script. Simply run 'cd ./face_mesh' first and then try again.
@One1ye19 (7 days ago)
@@xlipdev I'm sorry if this is obvious, but I'm really new to this stuff. I did cd C:\Users\Any\OneDrive\Desktop\faceDepthAI-master\faceDepthAI-master\face_mesh and then ran the command python create_single_sample_and_display.py, but I still get the same issue.
@xlipdev (6 days ago)
@@One1ye19 No worries, but yeah, this repo requires some knowledge about coding and development. You can jump in here: discord.gg/r4xj4hsk and I will try to help ^^
@josiahgil (2 months ago)
Can this also work with neck movements? Thank you for this informative tutorial.
@xlipdev (2 months ago)
@@josiahgil You are welcome ^^ Yes, head movement is also tracked by default during the performance. Here is an official tutorial about how to blend neck/head movement into the body, if you are looking for that: kzbin.info/www/bejne/hJzFZXd7pL-ShLssi=d8KGYS_x5-vRq1g_&t=1731
@josiahgil (2 months ago)
@@xlipdev Thanks, I should've been clearer about what I meant: I meant neck flexing, like the neck stretching and flexing when speaking, the throat moving when inhaling, etc.
@xlipdev (2 months ago)
@@josiahgil Oh, I see. In the MetaHuman skeleton there are not many bones in the neck area (I think 2 or 3), and during a facial performance I believe only the head bone is tracked, so capturing precise neck movements doesn't seem possible out of the box. But you can add extra bones in that area and animate them yourself to match your facial animation ^^
@josiahgil (2 months ago)
@@xlipdev thank you🙏
@Ateruber (a month ago)
Amazing! Does this only work on RTX video cards or can it also work on GTX?
@xlipdev (a month ago)
@@Ateruber I believe there is no GPU restriction overall for MetaHuman performances; it should work.
@salmanbasir7213 (2 months ago)
Hi, the terminal part gives this error and does not run: ModuleNotFoundError: No module named 'cv2'
@xlipdev (2 months ago)
@@salmanbasir7213 Have you installed the requirements mentioned in the README? opencv-python provides the cv2 module; it should work once you get that package.
@original9 (2 months ago)
@xlipdev I got this error whilst trying to install, any ideas? -- Configuring incomplete, errors occurred! *** CMake configuration failed [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for openexr Failed to build openexr ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (openexr)
@xlipdev (2 months ago)
Seems like you have issues with CMake. Make sure CMake is installed and available in your system's PATH (maybe you need to update it). Otherwise, the issue may be one of these: you don't have a compatible C++ compiler (such as gcc for Linux or MinGW for Windows), or openexr may not be compatible with the version of Python you're using, in which case update Python.
@ivonmorales2654 (2 months ago)
From the moment I saw the first second of the video, I was hooked. I will apply what you have shown. And since it's proven that you have the talent, do you think this could be done for full-body animations? Microsoft published something similar; I'd leave you the links in case you are interested. Thanks for your contribution... KZbin won't let me post the links, but I'll give you the title: "Look Ma, no markers: Holistic performance capture without the hassle", ACM Transactions on Graphics.
@xlipdev (2 months ago)
@@ivonmorales2654 Many thanks for the info and the kind words ^^ I will definitely check those out. There are many apps and AIs for body motion capture, even for free (I think Nvidia is also doing something about it). If you have an iPhone, life is easier for facial capture for now, so capturing the full body simultaneously is not a big deal; here is an example shared by @paperino0 in the comments that looks nice: kzbin.info/www/bejne/fKGWgWOqiNOMY7s. But no iPhone, no problem: you can still use this video's pipeline for facial capture ^^
@matttgaminghd377 (a month ago)
I guess if the video is not 720x1280, I have to change the dimensions in the calibration.
@xlipdev (a month ago)
I don't recommend altering the calibration data, as you'd then need to adjust your depth map generation parameters accordingly, which could be tricky. Instead, resize your video frames to 720x1280 during extraction; the generated depth frames will then match at 360x640.
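As an illustration, a minimal extraction sketch with OpenCV that resizes while extracting (the file names are placeholders; note that cv2.resize takes (width, height), so (720, 1280) here means portrait frames):
    import os
    import cv2

    os.makedirs("./frames", exist_ok=True)
    cap = cv2.VideoCapture("performance.mp4")   # assumed source video
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # width 720, height 1280 to match the calibration's expected dimensions
        frame = cv2.resize(frame, (720, 1280), interpolation=cv2.INTER_AREA)
        cv2.imwrite(f"./frames/{i:04d}.jpg", frame)
        i += 1
    cap.release()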
@madzorojuro (2 months ago)
Good video, you have earned yourself a subscriber.
@xlipdev (2 months ago)
@@madzorojuro thank youu ^^ you are the 100th one 🥳
@tvgestaltung (a month ago)
Hi, your work is very impressive. I’m having trouble importing the file Calibration.mhaical into Unreal Engine 5.4. Error: unknown extension. Do you have a solution for this issue? Thank you very much!
@xlipdev (a month ago)
Thanks ^^ You need to enable the Epic Games MetaHuman plugin to import it.
@tvgestaltung (a month ago)
@@xlipdev Thank you very much for the super quick and correct answer. I was apparently too tired yesterday to realize it, as I thought I had already turned it on.
@mukeshbabu7092 (a month ago)
I'm having this problem, please assist me in resolving it: error: OpenCV(4.10.0) ... error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'. This means that the cv2.cvtColor() function is being called on an empty image (_src.empty() is true). It indicates that OpenCV couldn't read the image properly.
@xlipdev (a month ago)
This seems like an easy one: probably you didn't set a correct image path for the script. Please double-check "input_image_path" (if you are trying to display) or "input_folder" (if you are trying to convert all images in that folder), and make sure you have '.png', '.jpg' or '.jpeg' files inside that folder.
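A small sketch of that check (the path is a placeholder; the variable name follows the script's input_image_path):
    import os
    import cv2

    input_image_path = r".\images\some_frame.jpg"   # adjust to an image that actually exists

    if not os.path.isfile(input_image_path):
        raise FileNotFoundError(f"Image not found: {input_image_path}")

    image = cv2.imread(input_image_path)
    if image is None:
        raise ValueError(f"OpenCV could not decode: {input_image_path}")

    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)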
@mukeshbabu7092 (a month ago)
Hello, I tried using a different computer twice as well, but the problem persisted.
@xlipdev (a month ago)
Could you create an issue in the repo with the scripts you're trying to run and the details? Let me check and help.
@xlipdev (a month ago)
Btw, can you try the path like this: input_image_path = r".\images\some_frame.jpg" and adjust the path for your file or folder?
@mukeshbabu7092 (a month ago)
@@xlipdev NameError: name 'image' is not defined
PS C:\Users\Mukesh Babu\Documents\GitHub\New One\faceDepthAI-master> & "c:/Users/Mukesh Babu/Documents/GitHub/New One/faceDepthAI-master/.venv/Scripts/python.exe" "c:/Users/Mukesh Babu/Documents/GitHub/New One/faceDepthAI-master/face_mesh/create_single_sample_and_display.py"
[ WARN:0@1.867] global loadsave.cpp:241 cv::findDecoder imread_('images/0001.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "c:\Users\Mukesh Babu\Documents\GitHub\New One\faceDepthAI-master\face_mesh\create_single_sample_and_display.py", line 37, in
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
cv2.error: OpenCV(4.10.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:196: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
@berriebuilds5310 (2 months ago)
Actually, I don't understand the code/compiler part. Do I also need a compiler for this to work? I don't know code... how do I create the depth maps? Please help me, mate. Love this video.
@xlipdev (2 months ago)
Yeah, the process is a bit manual for now; it will take time to convert it into a plugin later on. Basically, you need Python to run these scripts, and that shouldn't be too hard; you can follow the instructions in the repo README. If you have issues, I can help.
@satyaa999 (a month ago)
Unable to drag and drop the .mhaical calibration file into Unreal Engine; it gives an unknown file extension error. Does anyone have a solution for this?
@xlipdev (a month ago)
@@satyaa999 You must enable the MetaHuman plugin.
@soumyakushwaha7406 (3 months ago)
Hello, nice video, you explained it really well. I'm getting this error: "Not enough background threads available: required 10, available 8. The MetaHuman pipeline is going to run on a single thread". Do I have to clean things up in Blender? Some people on the internet are saying that you have to clear everything in Blender other than the face. Is this the issue?
@xlipdev (3 months ago)
@@soumyakushwaha7406 When are you getting this: from the MetaHuman pipeline during 'prepare for performance', or during the mocap 'process'? It shouldn't be required to clear everything; Unreal is smart enough to find face landmarks from the image if the image is clear. It seems like you may need to set some property on the Unreal MetaHuman pipeline for how many threads it should use.
@soumyakushwaha7406 (3 months ago)
@@xlipdev In the mocap process.
@xlipdev (3 months ago)
@@soumyakushwaha7406 Did you check the minimum device requirements for MetaHumans from Unreal Engine? Otherwise, you might try closing some services that use significant CPU during the process; maybe that helps to free up a large enough thread pool 🤔
@soumyakushwaha7406 (3 months ago)
@@xlipdev Yeah, it is showing something like an 8-core CPU, 32 GB RAM and 8 GB VRAM; I have a 6-core CPU, 16 GB RAM and 8 GB VRAM.
@soumyakushwaha7406 (3 months ago)
But on KZbin, people with low-spec machines are creating that MetaHuman stuff.
@ShinjiKeiVR10BetaUSA-s2t (2 months ago)
You are amazing! I am making a 3D animation movie using my MetaHuman. Someday I'll need your help.
@xlipdev (2 months ago)
@@ShinjiKeiVR10BetaUSA-s2t Very cool! I hope my video helps ^^ You can always reach out to me through the repo, social media, etc. I can help ^^
@HatipAksünger (2 months ago)
Great idea, thanks for the video!
@eightrice (2 months ago)
Can we make this work in real time from a camera feed?
@xlipdev (2 months ago)
Very good question! Technically yes, but I didn't care about optimization in the scripts initially, so it will probably require adjustments to create depth maps faster. I've also never tried to create a performance from a camera feed before; it can be achieved, but I'd also need to check how the current pipeline works with a camera feed.
@bluedott_vfx (3 months ago)
Thank you bro... I really needed this for my next project, and I don't have an iPhone. Thank you again. Can you tell me how to open the Python file? I don't have any app to run it.
@xlipdev (3 months ago)
You need a Python interpreter: just go to www.python.org and install it. You can use pip as the package manager to install the required packages for that repo (not many) and then run the scripts (from the command line or an IDE). I will probably update the repo with a README file soon 👍
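A typical setup from the command line looks roughly like this (the folder and script names follow the repo layout mentioned elsewhere in this thread; the README has the exact steps):
    cd faceDepthAI-master
    pip install -r requirements.txt
    cd face_mesh
    python create_single_sample_and_display.py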
@bluedott_vfx (3 months ago)
@@xlipdev thank you for the help
@xlipdev (3 months ago)
I created the README installation/usage guide in the repo. If you have questions, you can always reach out to me 👍
@bluedott_vfx (3 months ago)
@@xlipdev thank you so much
@Dongtian-n2n (2 months ago)
What software do you use to convert the images into depth maps?
@xlipdev (a month ago)
I use Python scripts ^^ I shared the source code in the description, you can take a look.
@avmk7186 (21 days ago)
I am having errors with the code, can anyone help me?
@xlipdev (21 days ago)
You can try the suggestions I mentioned in the comments related to yours. If it still doesn't work, feel free to join our Discord; I'll be happy to help.
@avmk7186 (21 days ago)
Sure, lemme join.
@stalkershano (19 days ago)
🎉🎉 Great vid
@xlipdev (18 days ago)
Thanks a lot, I'm happy you liked it ^^
@incrediblesarath (2 months ago)
Thank you!
@xlipdev (2 months ago)
@@incrediblesarath You are welcome, I hope I helped ^^
@bsasikff4464 (a month ago)
Bro, when are you releasing the plugin????
@xlipdev (a month ago)
@@bsasikff4464 Working on it ^^ but it seems like it will take some time 🥲
@bluedott_vfx (3 months ago)
You just hacked UE5 bro!!!
@xlipdev (3 months ago)
Someone has to show the Epic guys that this is possible, I think; they are so stubborn about not supporting Android 😆
@bluedott_vfx (3 months ago)
@@xlipdev Yes, exactly. You discovered a new way, bro. Hats off!
@황호준-c6u (a month ago)
Hii, I'm getting ModuleNotFoundError: No module named 'cv2'??? What is that???
@xlipdev (a month ago)
@@황호준-c6u Have you installed the requirements in the README?