Comments
@Sarahmcwilliams1982 8 days ago
💐 She was a beautiful woman, one of the best. So sad what happened to her. From Sarah McWilliams
@emz-x2n 21 days ago
It's so cute! Nice work! I love it!
@Alberto-r8c a month ago
I love women who love life. Congratulations, don't give up 😊😊
@shiyongchen9856 a month ago
Hello, could you expand the node graph so we can see the details inside the nodes? Thanks.
@halimgarciagendis248 a month ago
Awesome!!!
@prasaduchil007 2 months ago
Bro, it's cool and wonderful, but I can't find the way to export the depth Y map from Nuke. Can you explain it clearly? It's hard to understand how to recreate it when not all the nodes are visible.
@MotuDaaduBhai 2 months ago
He is not going to explain that easily. You have to do trial and error to figure it out 😂
@lakmal1st 2 months ago
Love it...
@marc1137 2 months ago
...thanks 🙂
@GiantVFX 2 months ago
This is super cool, but I see a lot of ethical problems with this.
@uknowverse 2 months ago
Love from Korea.
@marc1137 2 months ago
Sometimes I like to go to Jeju Island to relax.
@uknowverse 2 months ago
It is almost torture that there aren't any courses made by you.
@amineroula 2 months ago
Thank you ❤❤❤
@WesleySales1 2 months ago
Thank you very much for this!
@elartistadelosfamosos 2 months ago
Amazing and brilliant workflow. Thanks a lot.
@ryuteam 2 months ago
Thank you, Marc! You know you are the man and the GOAT!
@incrediblesarath 2 months ago
Thank you!
@MotuDaaduBhai 2 months ago
I have figured out a way to do all of this without using KeenTools or Nuke. KeenTools helps with videos where too much head rotation is involved, but once the output video from KT is fed into my tools, they break the video down into JPG and depth files for processing. Your hints helped me figure out some things along the way. Thanks for this video.
@marc1137 2 months ago
You can use whatever system; you just need an RGB and a depth Y... Maybe soon AI will do this, and Epic Games will include this custom video option by default.
@MotuDaaduBhai 2 months ago
@@marc1137 What version of Nuke are you using?
@marc1137 2 months ago
@@MotuDaaduBhai You can use whatever version: 13, 14, 15. There's nothing special in the workflow.
@ernestorodriguez4742 2 months ago
Hello Marc @marc1137, great job as always! I also have a question, if you don't mind. I have already found a practical way to generate consistent depth maps and use them in MetaHuman Animator. My main problem is that after loading them, the normalized values of my depth map are in a range that MHA is not expecting, which makes the face scaled 3 or 4 times along the depth (Z) axis. I am using a MySlate_# folder created by my iPhone and replacing the Depth_Frames EXR files, but I can't figure out how to make the depth map fit the exact white-to-black range it is expecting. Would you mind helping me find a solution? I am guessing you have come across all of this already. Thank you for posting your inspiring work on this channel; you should have more subscribers.
@marc1137 2 months ago
Normal map? Anyway, the whites should be greater than 1. I think I will soon do a video covering more details about the last steps.
@ernestorodriguez4742 2 months ago
@@marc1137 Thank you, Marc, for your reply. I meant depth map (sorry). Let me explain better. I get a black-and-white 32-bit depth/height map from AI, temporally consistent and very close to the original 3D shape. These depth map sequences are 32-bit, and I have to convert them to EXR format using a Python script. When I do this, I normalize the range of values into the 0-to-1 space. (I am not referring to normal maps; sorry for the confusion.) This means the lowest point is represented by black (0) and the highest point, closest to the camera, by white (1) in the 0-to-1 color space of 32-bit float EXR. I also invert the range, because the iPhone uses inverted depth maps (1 farthest from the camera, 0 closest). The problem is that MHA reads these depth maps in a way that stretches them in the MHA viewport and makes them unusable inside the MHA plugin in UE5. If you could share how to do this final step, I would appreciate it. You could also contact me at register.online.100 at gmail. It seems this step works for you, but I am doing something wrong with the format that I can't figure out. How do you handle this particular range of values in the depth map and the 32-bit EXR format? Thank you again for reading this and for your kind reply.
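The normalize-then-invert step this comment describes can be sketched in a few lines of numpy. This is only an illustration of the commenter's own description, not Marc's pipeline; the EXR write step and the `Depth_Frames` file naming are deliberately omitted, and whether MHA additionally expects whites above 1.0 is a separate question.

```python
import numpy as np

def normalize_and_invert(depth: np.ndarray) -> np.ndarray:
    """Map a raw 32-bit depth/height map into the 0-1 range, then
    invert it to match the iPhone convention described above
    (0 = closest to the camera, 1 = farthest)."""
    d = depth.astype(np.float32)
    d_min, d_max = float(d.min()), float(d.max())
    if d_max == d_min:
        return np.zeros_like(d)            # flat map: nothing to normalize
    d = (d - d_min) / (d_max - d_min)      # 0 = lowest point, 1 = closest/highest
    return 1.0 - d                         # invert: closest point becomes 0

# Toy 2x2 height map: 10.0 is the point closest to the camera
raw = np.array([[0.0, 5.0], [10.0, 2.5]], dtype=np.float32)
print(normalize_and_invert(raw))  # closest point maps to 0.0, farthest to 1.0
```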
@carsonburnett1033 2 months ago
Wow
@silkevennegeerts7018 3 months ago
Silke Vennegerts. Dieterwergel ❤😅
@mariagraziacarrara1311 3 months ago
😂😂😂😂
@vicold6264 3 months ago
Marry me
@휘그루트 3 months ago
Perfect! I have a question. Is it possible to transform the target model only if it is identical to the MetaHuman topology?
@marc1137 3 months ago
You always need a MetaHuman topology matching your custom model, but then you can keep the custom-topology model for export. The MetaHuman topology is needed to transfer skin weights and some other things.
@POKATILO_3D 3 months ago
Hello, can you please help me understand how to convert an EXR image with only a Y channel so Unreal can read it? I've searched everywhere but can't work out how you do this.
@marc1137 3 months ago
Congratulations. For the channel thing, I think it's just a matter of converting your new EXR to Y with a Shuffle node or something similar in whatever app, like Nuke or AE.
@POKATILO_3D 3 months ago
@@marc1137 Thank you so much! It really does work with Shuffle in Nuke. And thank you for showing that MetaHuman can be used without an iPhone :)
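Outside Nuke, the effect of that Shuffle (routing one plane of an image into the single channel Unreal reads as Y) can be approximated in numpy. A rough sketch follows; the assumption that the depth lives in the first (R) channel is mine, and actually writing the plane out as an EXR channel literally named `Y` is left to whatever EXR writer you use.

```python
import numpy as np

def shuffle_to_y(img: np.ndarray, src_channel: int = 0) -> np.ndarray:
    """Copy one channel of an (H, W, C) image into a single plane,
    roughly what a Nuke Shuffle node does when routing a depth
    channel into Y. src_channel=0 assumes the depth lives in R."""
    if img.ndim == 2:
        return img.astype(np.float32)  # already a single plane
    return np.ascontiguousarray(img[..., src_channel], dtype=np.float32)

# Example: an RGB frame whose red channel carries the depth values
rgb = np.zeros((4, 4, 3), dtype=np.float32)
rgb[..., 0] = 0.75
y = shuffle_to_y(rgb)  # (4, 4) float32 plane holding 0.75 everywhere
```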
@TEM1 3 months ago
No to this channel
@albertpxl 4 months ago
That's really impressive, well done!
@kazioo2 4 months ago
Or go even crazier: do the cloth simulation in Unreal. BTW, Epic apparently has a full 4D facial scan ML training workflow. They mentioned several levels of complexity for advanced MetaHumans: 1 scan, 50 scans, and 50,000 scans (ML rig, MH4D) in "Pushing Next-Gen Real-Time Technology in Marvel 1943: Rise of Hydra | GDC 2024".
@marc1137 4 months ago
For now I want to try building a fake muscle model under the skin to get more collision and sliding going on. My point is to do things as simply as possible, so the process can be scripted and automated for any head, and then see how it could be exported to UE.
@error-user-not-found 4 months ago
Really impressive work. Did you use Maya dynamics to simulate sticky lips as well?
@marc1137 4 months ago
Yes, using the normal nCloth stuff. Old tech, but very good. Since I just need some subtle movements for now, I'm using 3 substeps, very low, for faster simulation. The next step is to try adding some real geometry under the face to act as muscle colliders.
@error-user-not-found 4 months ago
Amazing. I'll look forward to it. Keep up the good work!
@wolfyrig1583 4 months ago
Is it similar to what Unreal did with the Matrix project, using the ML Deformer?
@marc1137 4 months ago
This is Maya dynamics, not ML, but that's the thing to try soon: exporting all of this to UE using that tech.
@wolfyrig1583 4 months ago
@@marc1137 Cool, let us know if it exports well! Maya 2025 announced an ML Deformer; maybe it can ease the export to Unreal's ML side.
@bad_uncle 4 months ago
@@wolfyrig1583 Autodesk has said the ML Deformer should only be used for animation purposes. They suggest going back to the original dynamics for the final render. This implies the ML Deformer isn't high enough quality for display purposes.
@NostraDavid2 4 months ago
Wild that we are able to extract information from an old video and apply it to a 3D model. Amazing!
@marc1137 4 months ago
There are lots of previous tests showing parts of the process.
@misticalagesdennix 4 months ago
1:53
@Vnik_Ai 4 months ago
This is great! This is what I need! Is it possible to discuss cooperation on Telegram?
@marc1137 4 months ago
Hi, thanks, but I don't have Telegram. You can send me an email, but to be honest I don't have much free time for freelance work.
@Vnik_Ai 2 months ago
@@marc1137 Is it possible to buy a guide from you?
@Vnik_Ai 2 months ago
Hello?
@Silentiumfilms007 4 months ago
Tutorial?
@tacotronbrito 5 months ago
Did you modify the MetaHuman expressions, or leave them exactly as they are after processing in Unreal Engine (MetaHuman Animator)?
@marc1137 5 months ago
I have my own tool to modify the original MetaHuman in Maya. I use double resolution and custom blendshapes. One of the things I want to try is making the MetaHuman look as much like a 4D scan as possible.
@ernestorodriguez4742 2 months ago
@@marc1137 If you make it look like a 4D scan, then you could use it to train a neural network without having to use real 4D data, and it will generalize well when the network sees actual 4D.
@amineroula 5 months ago
Amazing, man, really clever :)
@marc1137 5 months ago
Thank you! Even if probably not many people will take a look; I'm a bit of an anti-social-media type... 😅
@amineroula 5 months ago
@@marc1137 I am curious whether you have ever tried Face Tools from Reallusion?
@andreybocharov5939 5 months ago
Cool, and it's very interesting how you got the depth map from the video. Can I get a hint? :)
@marc1137 5 months ago
I just replied with the same answer in another comment, and in some older videos I show the main process of tracking the face in Nuke.
@wolfyrig1583 5 months ago
Nice, which ML model do you use for depth estimation? I've been using MiDaS, but it's really slow.
@marc1137 5 months ago
I track the footage in Nuke with KeenTools, so since there's a 3D face there moving in step with the video, I can export the depth.
@wolfyrig1583 5 months ago
@@marc1137 Oh, super smart! I'll try it out.
@TinNguyen-dg4zv 5 months ago
Nice work!
@marc1137 5 months ago
Thanks!
@bad_uncle 5 months ago
My brotha!! Your WeChat, please. We need to talk!
@pokerluffy9839 5 months ago
I am trying to do the same thing you did, but with a cat. I have the 3D cat done, but I can't ZWrap it to transfer its topology; it doesn't work... Did you sculpt the lion head from the MetaHuman head?
@marc1137 5 months ago
ZWrap, or the Russian software Faceform, can do that, but even though they are like magic, at some point you need some manual work to fit more extreme shapes.
@pokerluffy9839 5 months ago
How did you attach the hair to the head?
@marc1137 5 months ago
The hair was done in Maya. Export an .abc, and in UE just attach it the normal way by creating a binding to the skeleton.
@violainemannequin-pr2yz 5 months ago
I love the lyrics. I listened to it in French; it's sad, but the words are well chosen. She had no luck, and on top of that her happiness never came back. I'm sending several ❤❤❤❤❤❤ to show my esteem. Peace be with her. Alain
@Rhizomatous 6 months ago
Pretty realistic; however, when you look at the lips more closely, there's an element of touch missing. Imagine you clap your hands: you can tell the sound came from your hands because you saw them touch, felt the momentum and impact on your skin, and saw that small reaction of the skin moving, a tiny bit of pink and then pale. When you move your lips, they change color with force, and there is a slight change in shape when they touch, which we notice.
@marc1137 6 months ago
One effect I'm trying to achieve is the famous sticky lips, which in Maya can be done dynamically, but in UE the MetaHuman controls aren't enough, so I'm thinking of trying some blendshape tricks, though I'm not sure yet. I also need to test the shader more, with all those masks controlling the 3 main textures, but there isn't much time. I just hope the next one will look better.
@nickb2953 6 months ago
Dope
@PhilipAnderson2339 6 months ago
I question why YouTube brought me here
@marc1137 6 months ago
Men of culture, we meet again.
@tobiasdockal2725 6 months ago
So can we expect more videos? Are you back from holidays? :P
@marc1137 6 months ago
I never know which one is the last. I don't post just to keep the channel alive... These videos are made in hurried bits of free time between company projects... The future is always unpredictable...
@cameronlee9327 6 months ago
Great! Go on; the eyes need some blinks.
@marc1137 6 months ago
You can say that to Jack Nicholson... it's his tracked performance 🤦🏻
@simsima9111 6 months ago
Hi Marc, excellent work! When do you think the product will be available?
@marc1137 6 months ago
For now, never, since the support is zero, and as I've said many times, I won't share what I have in the "dirty" state it's in... So it's a kind of personal learning project that never finishes; there's always something to update, to fix, to test...
@jaimedominguez3701 6 months ago
How scary 😮
@MindTitanAcademy 6 months ago
Hello! Nice work. I have one question: how did you transfer the horse to MetaHuman?
@alireza_salimian 6 months ago
🤩 wow