💐 She was a beautiful woman, one of the best. So sad what happened to her. From Sarah McWilliams
@emz-x2n · 21 days ago
It's so cute! Nice work! I love it!
@Alberto-r8c · a month ago
I love women who love life. Congratulations, don't give up 😊😊
@shiyongchen9856 · a month ago
Hello, could you expand the nodes to show the details inside? Thanks.
@halimgarciagendis248 · a month ago
Awesome!!!
@prasaduchil007 · 2 months ago
Bro, it's so cool and wonderful, but I can't find the way to export the depth Y map from Nuke. Can you explain it clearly? It's hard to understand when all the nodes aren't visible, to recreate it on my own.
@MotuDaaduBhai · 2 months ago
He is not going to explain that easily. You have to do trial and error to figure it out 😂
@lakmal1st · 2 months ago
love it ...
@marc1137 · 2 months ago
..thanks 🙂
@GiantVFX · 2 months ago
This is super cool, but I see a lot of ethical problems with it.
@uknowverse · 2 months ago
Love from Korea.
@marc1137 · 2 months ago
sometimes i like to go to Jeju island to relax
@uknowverse · 2 months ago
It's almost torture that there aren't any courses made by you.
@amineroula · 2 months ago
Thank you ❤❤❤
@WesleySales1 · 2 months ago
Thank you very much for this!
@elartistadelosfamosos · 2 months ago
Amazing and brilliant workflow. Thanks a lot
@ryuteam · 2 months ago
Thank you Marc! You know you are the man and the GOAT!
@incrediblesarath · 2 months ago
Thank you!
@MotuDaaduBhai · 2 months ago
I have figured out a way to do all of this without using KeenTools or Nuke. KeenTools helps with videos where too much head rotation is involved, but once the output video from KT is inserted into my tools, it breaks the video down into JPGs and a depth file for processing. Your hints helped along the way to figure out some stuff. Thanks for this video.
@marc1137 · 2 months ago
can use whatever system, just need an RGB and a depth Y ... maybe soon AI will do this and Epic Games will include this custom video option by default
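For anyone following along, a minimal Python sketch of what a "depth Y" EXR can look like, using the python-openexr package. This is a hedged illustration, not Marc's actual pipeline; the array source and file names are placeholders.

```python
# Write a 32-bit float depth map as an EXR containing only a Y channel.
# Assumes `depth` is an HxW float32 numpy array already in the expected range.
import numpy as np
import OpenEXR
import Imath

depth = np.load("depth_frame.npy")  # placeholder source for the depth data
height, width = depth.shape

header = OpenEXR.Header(width, height)
float_chan = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
header["channels"] = {"Y": float_chan}  # single luminance channel, no RGB

exr = OpenEXR.OutputFile("depth_frame.exr", header)
exr.writePixels({"Y": depth.astype(np.float32).tobytes()})
exr.close()
```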
@MotuDaaduBhai · 2 months ago
@@marc1137 What version of Nuke are you using?
@marc1137 · 2 months ago
@@MotuDaaduBhai can use whatever version, 13, 14, 15, nothing special in the workflow
@ernestorodriguez4742 · 2 months ago
Hello Marc @marc1137 - great job as always! I also have a question, if you don't mind. I've already found a practical way to generate consistent depth maps and use them in MetaHuman Animator. My main problem is that after loading them, the normalized values of my depth map are in a range that MHA is not expecting, which makes the face scaled 3 or 4 times in the depth axis (Z). I am using a MySlate_# folder created by my iPhone and replacing the Depth_Frames EXR files, but I can't find how to make the depth map fit the exact white-to-black range it is expecting. Would you mind helping me find a solution? I am guessing you went through all this already. Thank you for posting your inspiring work on this channel; you should have more subscribers.
@marc1137 · 2 months ago
normal map? anyway, the whites should be greater than 1. i think soon i will do some video covering more details about the last steps
@ernestorodriguez4742 · 2 months ago
@@marc1137 Thank you Marc for your reply. I meant depth map (sorry). Let me explain better. I get a depth/height map in black and white from AI, temporally consistent and very close to the original 3D shape. These sequences of depth maps are in 32 bits, and I have to convert them to EXR format using a Python script. When I do this I "normalize" the range of values to the 0-to-1 space. (In this case I am not referring to normal maps, sorry for the confusion.) This means the lowest point is represented by black (0) and the highest point, closest to the camera, by white (1), in the 0-to-1 space of 32-bit float EXR. Also, I invert the range because the iPhone uses inverted depth maps (1 farthest from the camera and 0 closest). The problem is that MHA reads these depth maps in a way that stretches them in the MHA viewport and makes them unusable inside the MHA plugin in UE5. If you could share how to do this final step, I would appreciate it. You could also contact me at register.online.100 at gmail. It seems this step works for you, but I am doing something wrong with the format that I cannot figure out. How do you handle this particular range of values in the depth map and the 32-bit EXR format? Thank you again for reading this and for your kind reply.
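A hedged sketch of the remap step being discussed, based only on Marc's "whites should be greater than 1" hint. The lo/hi values are placeholder assumptions, not confirmed MetaHuman Animator constants.

```python
import numpy as np

def remap_depth(d01: np.ndarray, lo: float = 0.2, hi: float = 1.5) -> np.ndarray:
    """d01: float32 depth normalized to 0..1 with 1 = closest (typical AI output).

    Flips to the iPhone convention described above (larger = farther) and
    rescales out of the 0..1 band so the far end exceeds 1.0. lo/hi are guesses.
    """
    d01 = np.clip(d01.astype(np.float32), 0.0, 1.0)
    inverted = 1.0 - d01             # 0 = closest, 1 = farthest from camera
    return lo + (hi - lo) * inverted # linear remap so far values exceed 1.0
```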
@carsonburnett1033 · 2 months ago
Wow
@silkevennegeerts7018 · 3 months ago
Silke Vennegerts. Dieterwergel ❤😅
@mariagraziacarrara1311 · 3 months ago
😂😂😂😂
@vicold6264 · 3 months ago
marry me
@휘그루트 · 3 months ago
Perfect! I have a question: is it only possible to transform the target model if it is identical to the metahuman topology?
@marc1137 · 3 months ago
always need a metahuman-topology mesh matching your custom model, but then you could keep the custom-topology model to export; metahuman topology is needed to transfer skin weights and some stuff
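As a rough illustration of that weight-transfer step, a maya.cmds sketch under the assumption that both heads are already bound to the skeleton; the mesh names are placeholders.

```python
from maya import cmds

# Copy skin weights from the metahuman-topology head onto the custom model,
# matching by closest point on the surface and closest joint.
cmds.select(["metahuman_head", "custom_head"], replace=True)
cmds.copySkinWeights(
    surfaceAssociation="closestPoint",
    influenceAssociation=["closestJoint", "oneToOne"],
    noMirror=True,
)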
@POKATILO_3D · 3 months ago
hello, can you please help me understand how to convert an EXR image to only a Y channel so Unreal can read it.. i searched everywhere.. but can't understand how you do this..
@marc1137 · 3 months ago
congratulations, but the channel thing i think is just converting ur new exr to Y with a shuffle node or something similar in whatever app, like nuke or AE
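A hedged Nuke Python sketch of that Shuffle idea. The layer/channel naming here is an assumption (in practice you would set this up in the Shuffle node's UI), and the file paths are placeholders.

```python
import nuke

# Declare a one-channel layer to hold the depth data as a Y-style channel.
nuke.Layer("depthY", ["depthY.Y"])

read = nuke.nodes.Read(file="depth_in.%04d.exr")  # placeholder input sequence
shuffle = nuke.nodes.Shuffle(inputs=[read])
shuffle["out"].setValue("depthY")                 # route output into the new layer
shuffle["red"].setValue("red")                    # feed it from the red channel
write = nuke.nodes.Write(inputs=[shuffle],
                         file="depth_Y.%04d.exr",
                         channels="depthY")       # write only that channel
```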
@POKATILO_3D · 3 months ago
@@marc1137 thank you so much, it really works with Shuffle in Nuke, and thank you for the idea that metahuman can be used without an iPhone)
@TEM1 · 3 months ago
No to this channel
@albertpxl · 4 months ago
That's really impressive, well done!
@kazioo2 · 4 months ago
Or go even crazier: do the cloth simulation in Unreal. BTW, Epic apparently has a full 4D facial-scan ML training workflow. They mentioned several levels of complexity for advanced metahumans - 1 scan, 50 scans, and 50,000 scans (ML rig, MH4D) - "Pushing Next-Gen Real-Time Technology in Marvel 1943: Rise of Hydra | GDC 2024".
@marc1137 · 4 months ago
for now i want to try building some fake muscle model under the skin to have more collision/sliding going on, and my point is to do things as simply as possible so the process could be scripted and automated for whatever head, then see how it could be exported to UE
@error-user-not-found · 4 months ago
Really impressive work. Did you use Maya dynamics to simulate sticky lips as well?
@marc1137 · 4 months ago
yes, using the normal nCloth stuff, old tech but very good, and since i just need some subtle movements for now, i'm using 3 substeps, very low, for faster simulation. next step is to try adding some real geometry under the face to act as muscle colliders
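For reference, a hedged maya.cmds sketch of that low-substep setup. The node names (nucleus1, nClothShape1) are placeholders for whatever the scene actually contains, and the stickiness and iteration values are illustrative guesses, not Marc's settings.

```python
from maya import cmds

# Keep the solver cheap: 3 substeps, as described above, for subtle motion.
cmds.setAttr("nucleus1.subSteps", 3)

# Modest collision iterations keep the simulation fast (assumed value).
cmds.setAttr("nucleus1.maxCollisionIterations", 4)

# nCloth stickiness makes surfaces cling on contact, the sticky-lips feel
# (illustrative value only).
cmds.setAttr("nClothShape1.stickiness", 0.5)
```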
@error-user-not-found · 4 months ago
Amazing. I'll look forward to it. Keep up the good work!
@wolfyrig1583 · 4 months ago
Is it similar to what Unreal did with the Matrix project? Using the ML deformer?
@marc1137 · 4 months ago
this is Maya dynamics, not ML, but that's the thing to try soon: exporting all this to UE using that tech
@wolfyrig1583 · 4 months ago
@@marc1137 Cool, let us know if it exports well! Maya 2025 announced an ML deformer; maybe it can ease the export to Unreal's ML.
@bad_uncle · 4 months ago
@@wolfyrig1583 Autodesk has said the ML deformer should only be used for animation purposes. They suggest going back to the original dynamics for the final render. This implies the ML deformer isn't high enough quality for display purposes.
@NostraDavid2 · 4 months ago
Wild that we are able to extract information from an old video and apply it to a 3D model. Amazing!
@marc1137 · 4 months ago
lots of previous tests show parts of the process
@misticalagesdennix · 4 months ago
1:53
@Vnik_Ai · 4 months ago
this is great! this is what I need! Is it possible to discuss cooperation on Telegram?
@marc1137 · 4 months ago
hi, thanks, but i don't have Telegram. you can send me an email, but to be honest i don't have much free time for freelance work
@Vnik_Ai · 2 months ago
@@marc1137 Is it possible to buy a guide from you?
@Vnik_Ai · 2 months ago
Hello?
@Silentiumfilms007 · 4 months ago
Tutorial?
@tacotronbrito · 5 months ago
Did you modify the MetaHuman expressions, or leave them exactly as they are after processing in Unreal Engine (MetaHuman Animator)?
@marc1137 · 5 months ago
i have my own tool to modify the original metahuman in maya; i use double resolution and custom blendshapes. one of the things i want to try is making the metahuman look as much like a 4D scan as possible
@ernestorodriguez4742 · 2 months ago
@@marc1137 If you make it look like a 4D scan, then you could use it to train a neural network without having to use real 4D data, and it will generalize well when the network sees actual 4D.
@amineroula · 5 months ago
amazing man, really clever :)
@marc1137 · 5 months ago
thank you! even if probably not many people will take a look; i'm a bit of the anti-social-media style... 😅
@amineroula · 5 months ago
@@marc1137 i am curious, have you ever tried the face tools from Reallusion?
@andreybocharov5939 · 5 months ago
Cool, and very interesting how you got the depth map from the video. Can I get a hint?)
@marc1137 · 5 months ago
i just replied the same in another comment, and in some older videos i show the main process of tracking the face in nuke
@wolfyrig1583 · 5 months ago
Nice, which ML do you use for depth estimation? I've been using MiDaS, but it's really slow
@marc1137 · 5 months ago
i track the footage in nuke with KeenTools, so since there's a 3D face there moving like the video, i can export the depth
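A hedged sketch of that depth-export idea in Nuke Python: once the tracked head geometry matches the footage, render it through a ScanlineRender and keep only the depth output. The node names are placeholders, and the tracked geo itself comes from the KeenTools FaceTracker setup, not from this script.

```python
import nuke

geo = nuke.toNode("FaceTracker1")  # placeholder: tracked head geo from KeenTools
cam = nuke.toNode("Camera1")       # placeholder: the matching solved camera

# ScanlineRender inputs: 0 = background, 1 = object/scene, 2 = camera.
render = nuke.nodes.ScanlineRender(inputs=[None, geo, cam])

write = nuke.nodes.Write(inputs=[render],
                         file="head_depth.%04d.exr",
                         channels="depth")  # keep only the rendered depth layer
```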
@wolfyrig1583 · 5 months ago
@@marc1137 oh, super smart! I'll try it out
@TinNguyen-dg4zv · 5 months ago
Nice work!
@marc1137 · 5 months ago
thanks!
@bad_uncle · 5 months ago
My brotha!! Your WeChat, please. We need to talk!
@pokerluffy9839 · 5 months ago
i am trying to do the same thing you did but with a cat. I have the 3D cat done and I cannot ZWrap it to transfer its topology; it does not work.. Did you sculpt the lion head from the metahuman head?
@marc1137 · 5 months ago
ZWrap or FaceForm (the Russian software) can do that, but even though they're like magic, at some point you need manual work to fit more extreme shapes
@pokerluffy9839 · 5 months ago
how did you attach hair to the head?
@marc1137 · 5 months ago
hair done in maya; export .abc, and in UE just attach it the normal way, creating a binding to the skeleton
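As a rough illustration of the .abc export half of that, a sketch using Maya's AbcExport plugin; the group name, frame range, and output path are placeholders, and the UE side is done in the editor by creating a groom binding to the skeleton.

```python
from maya import cmds

# Make sure the Alembic exporter is available, then export the hair geometry.
cmds.loadPlugin("AbcExport", quiet=True)
cmds.AbcExport(j="-frameRange 1 1 -uvWrite -worldSpace "
                 "-root |hair_grp -file C:/exports/hair_groom.abc")
```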
@violainemannequin-pr2yz · 5 months ago
I love the lyrics. I listened to it in French; it is sad, but the words are well chosen. She has no luck, and on top of that her happiness is not coming back. I'm sending several ❤❤❤❤❤❤ to show my esteem. Peace to her. Alain
@Rhizomatous · 6 months ago
Pretty realistic; however, when you look at the lips more closely, there's an element of touch missing. Imagine you clap your hands: you can tell the sound came from your hands because you saw your hands touch, the momentum and impact of your skin, that small reaction of the skin moving, a tiny bit of pink and then pale. When you move your lips, the lips change color with force, and there is a slight change in shape when they touch, which we notice.
@marc1137 · 6 months ago
one effect i'm trying to achieve is the famous sticky lips, which in maya i could do in a dynamic way, but in UE the metahuman controls are not enough, so i'm thinking of trying some blendshape tricks, not sure yet. also need to work more on the shader, with all those masks controlling the 3 main textures, but not much time; just hope the next one will look better
@nickb2953 · 6 months ago
Dope
@PhilipAnderson2339 · 6 months ago
I question why youtube brought me here
@marc1137 · 6 months ago
Men of culture, we meet again
@tobiasdockal2725 · 6 months ago
So we can expect more videos? Are you back from holidays? :P
@marc1137 · 6 months ago
i never know which one is the last; i don't post just to keep the channel alive... and these videos are done in a kind of hurried free time between company projects....... the future is always unpredictable.........
@cameronlee9327 · 6 months ago
Great! Go on, but the eyes need some blinks
@marc1137 · 6 months ago
You can say that to Jack Nicholson.. it's his tracked performance 🤦🏻
@simsima9111 · 6 months ago
Hi Marc, excellent work! When do you think the product will be available?
@marc1137 · 6 months ago
for now, never, since the support is 0, and as i said many times, i won't share what i have in the "dirty" state it's in..... so it's a kind of personal learning development that never finishes; always something to update, to fix, to test........
@jaimedominguez3701 · 6 months ago
How scary 😮
@MindTitanAcademy · 6 months ago
Hello! Nice work. I have one question: how did you transfer the horse to a metahuman?