@Ripple Training - what I would love to see is a video that concentrates on a complete workflow, from start to finish, suitable for film work, with a node graph and best practices.
@GameSack — 2 years ago
Crazy! Does this work with only horizontally-oriented shots, or would it also work if the camera was set to a "dutch angle" or any other angle? I mean I guess I could test it myself, but I'm pretty busy being lazy right now.
@markspencer3104 — 2 years ago
Shouldn't matter
@Hazardteam — a year ago
Subscribed
@JimRobinson-colors — 2 years ago
Nice explanation, Mark. I have always liked your tutorials over the years; your experience and pacing are fantastic. I wonder why, in this tutorial, you would copy the node to a new node and then reset the Depth Mask, instead of just editing it without the reset. I understand that you wanted the exposure changes, but in a really detailed image, I would think it would be easier to do the second adjustment starting from the same perspective. Maybe it makes no difference one way or the other, I was just wondering why.

I also wonder what the AI uses to determine the depth? Just wondering if it analyses the lens depth of field. Knowing how something works under the hood matters here: if depth of field is a factor, then how a shot is captured in camera might help, or on the other hand hinder, the detail of this effect. Anyway, enjoyed this - hope to be able to see the whole training some day. Cheers.
@markspencer3104 — 2 years ago
Thanks, Jim - I just reset it to start from scratch since I was isolating a new area. Honestly, in reality I would use a Layer Mixer node, but that concept doesn't get introduced until a later lesson in the tutorial, so I needed to stick with concepts already introduced.
@sergentboucherie — 2 years ago
I'd buy that tutorial, but I just bought a new GPU, so it will have to wait. Also, is that Stevus Martinus at 0:28?
@rippleguys — 2 years ago
Why yes, it is.
@TimetoTalkwithYevgen — 2 years ago
Is this feature only in Studio?
@markspencer3104 — 2 years ago
Yes, as explained explicitly in the video. It uses DaVinci Resolve's Neural Engine.
@TimetoTalkwithYevgen — 2 years ago
@@markspencer3104 thank you!
@EvanFotis — 2 years ago
The depth matte is awesome, and how it computes is mind-blowing… but I feel it still needs more development to become usable. For starters, it's far too processor intensive; my MBP M1 Max fans blast out when I use it. It also needs the ability for finer isolation between planes. Recently I tried to key out a subject from the background, but because both were farther back in z space, Resolve could not fully isolate the person.
@markspencer3104 — 2 years ago
Agreed.
@beargeistdesign5174 — 2 years ago
You're going to want to see the matte play at full speed, and you're going to want to render it separately under a lot of NR (I think Temporal is what's used in the Deflicker OFX).

Secondly, if you have a static camera before the subject(s) step into the scene, it's best to render that depth map frame as a still, then see if you can use the Magic Mask for the moving subjects. The depth attribute is shockingly accurate… but like most things AI, it struggles to reproduce the same image across frames.

The map adjustment and depth target settings are generally more destructive (and resource intensive) than using classic grading tools or blending modes for a pseudo dodge and burn on the mask. There is also a good tutorial on VFX Study that shows you how to create a simple version of those parameters outside of the OFX. Usually, subtracting one of the RGB channels (as a mask), a blending mode (on the depth mask), and a regular window will let you further isolate the depth map.