give me time offset node! i neeeeeed it also, brilliant.org/CGMatter/
@pikzerz · 3 months ago
Auto exposure next
@hoodwinkedfool · 3 months ago
For reals I’ve been trying this for a while now. You can do a nice setup in Resolve but the dream is to do it all in Blender
@minecraftgenix9719 · 3 months ago
Was about to comment this, i guess I'm not the only one
@pikzerz · 3 months ago
@@hoodwinkedfool All so that my lil doodoo animations can look slightly realistic
@longshot9058 · 3 months ago
I remember he made a compositor tutorial for it a while back. Now that we have real time compositing, it works in the viewport in real time.
@FireAngelOfLondon · 3 months ago
This is gold, a practical tutorial that solves a real animation problem; thank you!
@haydenhoes · 3 months ago
very smart thinking of using the motion tracking to get it to update in real time. when i built my autofocus, it used simulation nodes, so only updated when the timeline was playing.
@TheDaSilvaMatheus · 3 months ago
The fastest focus puller in town
@pauliusmscichauskas558 · 3 months ago
This is how I imagine I'd do it: make an empty a child of the camera and use the Shrinkwrap constraint to project it onto the scene's surface. At this point, you are most of the way there. Now you need another empty that smoothly moves towards the position of the shrink-wrapped empty. Use that new empty to calculate the distance. I guess this can be done with simulation nodes on a mesh, moving a single point towards the position of the first empty, and then hooking another empty to that point... Something like that
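The "smoothly moves towards" empty described above is essentially per-frame exponential smoothing. A minimal plain-Python sketch of that damping, outside Blender (the function name and factor are my assumptions, not a Blender API):

```python
# Each frame, move the follower a fixed fraction of the remaining
# distance toward the target. This damps sudden focus jumps the same
# way a smoothly-following empty would.
def smooth_follow(follower, target, factor=0.2):
    """Move `follower` (x, y, z) a fraction `factor` toward `target`."""
    return tuple(f + (t - f) * factor for f, t in zip(follower, target))

follower = (0.0, 0.0, 0.0)
target = (0.0, 0.0, 10.0)    # the shrink-wrapped hit point, 10 units away
for _ in range(24):          # one second of frames at 24 fps
    follower = smooth_follow(follower, target)
# After 24 frames the follower has closed most, but not all, of the gap.
```

In Blender itself the same per-frame update would live in a simulation zone or a frame-change handler; the point here is only the damping math.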
@vinfinityremakerguy · 3 months ago
empties don't have modifiers =(
@pauliusmscichauskas558 · 3 months ago
@@vinfinityremakerguy But they have constraints.
@vinfinityremakerguy · 3 months ago
@@pauliusmscichauskas558 Ohhh, I didn't see the shrinkwrap constraint before
@ShoryYTP · 3 months ago
One important thing to note for realism: a 16mm lens would have almost no depth of field in the real world. Any f-stop below 2.0 is unrealistic, as even an f/1.8 lens at that focal length would cost a fortune
@air8536 · 3 months ago
I cast Google! Looks like you can actually get an f/1.4 16mm lens from SIGMA for $350. I think it's the long focal length lenses that get exponentially more expensive with wider apertures, just due to the fact that the f-number is focal length divided by aperture diameter. So if you have a 400mm lens, for f/2 you would theoretically need a 200mm-wide aperture
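The aperture arithmetic in the comment above is easy to verify; a quick sketch (the helper function is mine, not from the thread):

```python
# The f-number is focal length divided by entrance-pupil diameter,
# so a long, fast lens needs a physically huge front element.
def pupil_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

wide = pupil_diameter_mm(16, 1.4)    # ~11 mm: cheap to manufacture
tele = pupil_diameter_mm(400, 2.0)   # 200 mm: enormous glass, enormous price
```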
@tomcattermole1844 · 3 months ago
@@air8536 ackshully, the perceived depth of field is larger as you go wider, and even the lens you mentioned would have a barely noticeable amount of bokeh. The original comment is right that quality wide lenses with low apertures cost a fortune, but the amount of depth of field has less to do with the aperture than it has to do with the crop factor. If you take two lenses (example: 50mm + 16mm) with the same T-stop/f-stop, the depth of field on the 50mm will look much smaller, even though technically it is the exact same. The only way to get any real noticeable depth of field on a wide lens is to have the subject very close to the camera and the background far away, but those shots are very hard to justify in most storytelling. Side note: cheap wide lenses suffer more from chromatic aberration, distortion and other unwanted artifacts than cheap narrow lenses do, so generally we expect (good) wider lenses to be more expensive.
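To put rough numbers on the wide-versus-long debate above, here is a sketch using the standard thin-lens depth-of-field approximation; the circle-of-confusion value, subject distance, and helper function are my assumptions, not from the thread:

```python
# Approximate total depth of field: DoF ≈ 2·N·c·s²/f², valid when the
# subject distance s is much smaller than the hyperfocal distance.
# N = f-number, c = circle of confusion, f = focal length.
def total_dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """coc_mm=0.03 is a common full-frame circle of confusion."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

subject = 2000                            # subject 2 m from the camera
dof_16 = total_dof_mm(16, 2.0, subject)   # roughly 1.9 m in focus
dof_50 = total_dof_mm(50, 2.0, subject)   # roughly 0.19 m in focus
```

At the same f-stop and distance, the 16mm keeps about (50/16)² ≈ 10× more of the scene in focus, which is why the bokeh is barely noticeable on wide lenses.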
@gordonbrinkmann · 3 months ago
Averaging the trackers can be done even more conveniently... there is no need for a Sample Index node. After separating the children, plug the tracker collection's Geometry output into an Attribute Statistic node, set it to Vector and Instance, and plug a Position node into the Vector input. Then you get the Median output you want.
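For intuition, the Median output mentioned above corresponds to a per-axis median of the tracker positions, which resists a single stray tracker better than the mean. A plain-Python sketch (not Blender node code; the data is made up):

```python
# Per-axis median of a set of 3D positions, as the Attribute Statistic
# node's Median output would compute over instance positions.
from statistics import median

def median_position(positions):
    xs, ys, zs = zip(*positions)
    return (median(xs), median(ys), median(zs))

# Three trackers agree, one flew off to the background.
trackers = [(0, 0, 5), (0.1, 0, 5.2), (0, 0.1, 4.9), (3, 3, 20)]
focus_target = median_position(trackers)  # stays near z ≈ 5, not dragged to 20
```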
@shmuelisrl · 3 months ago
if you are using geometry nodes anyway, you could use simulation nodes to transition between focus distances.
@alkalys1 · 3 months ago
Even the 9 rays that project the points could be done in geometry nodes (without the "tracks" empties), right?
@shmuelisrl · 3 months ago
@@alkalys1 Probably, but I only mentioned this because otherwise you would have to go back to keyframes...
@Benn25 · 3 months ago
This is very clever! I did it only with geometry nodes: a grid of points that raycast onto the geometry, and a sim zone to dampen and delay the focus changes. I also exposed an option to change the method of averaging the position of the focus point (mean, average, closest and farthest), which was very nice! It worked, and pretty well I would say... but only on a very simple scene. On a real-life scene the performance was horrible (the raycast was super slow if I recall correctly, even when I remeshed the objects very dirtily), so... I just gave up! :D Your method seems very good and totally usable, even if I am not a fan of baking keyframes (my method was 100% procedural).
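The "method of averaging" option described above can be sketched in plain Python; the method names follow the comment, but the function itself is hypothetical, not a Blender API:

```python
# Given the hit distances of a grid of raycasts from the camera,
# pick the focus distance by one of several strategies.
def focus_distance(hits, method="mean"):
    if method == "mean":
        return sum(hits) / len(hits)
    if method == "closest":
        return min(hits)      # lock onto the nearest surface
    if method == "farthest":
        return max(hits)      # ignore foreground clutter
    raise ValueError(f"unknown method: {method}")

hits = [2.0, 2.5, 3.0, 12.0]  # one of the rays shoots past the subject
```

"Closest" behaves like a phone camera snapping to the nearest object, while "mean" gets skewed by rays that miss the subject, which is exactly why exposing the choice is useful.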
@kpasta6112 · 3 months ago
Now that there's white balance in Blender, it would be cool if you could completely recreate a shitty mobile camera and make everything auto: auto exposure (making sure it also increases noise in dark scenes), auto focus, auto white balance
@modusmogu · 2 months ago
I love the idea
@WanerRodrigues · 3 months ago
Hey CG guy, your Brilliant ad has a mic bug: low volume and only on the left channel. Cool video btw, thank you!
@HAWXLEADER · 3 months ago
It's on purpose. He loves doing this low budget look and using webcams and stuff.
@amirudinification · 3 months ago
Love blender bro..
@XXXMakabaka1880 · 3 months ago
One hour ago! Fresh and tasty tutorial =)
@thalles3442 · 3 months ago
Man, you really look like T.Folse Nuclear, is he your twin brother?? Great video btw
@a3haus · 3 months ago
Octane has this built in
@whynotanyting · 3 months ago
Epic bathroom
@oliverdive9759 · 3 months ago
For me it's 5:45 in the morning, so yeah 👍 I will watch this again later with 100% focus
@hermano8160 · 3 months ago
Dude, the setup is so damn time consuming (and will not make my viewport any faster) that I'd rather pull the focus manually, on the fly - will get me more controlled results anyway. Still, kudos for the nice Blender exploit!
@SidewaysCinema · 3 months ago
You just cooked
@markuszeller_official · 3 months ago
Every time I watch one of your videos I feel dumber.
@skeleton_craftGaming · 3 months ago
actually only half of me has heard of square space...
@macksnotcool · 3 months ago
doodoo
@aka12176 · 3 months ago
First
@Grefins999 · 3 months ago
So tried my hand at a little script:

import bpy
from mathutils import Vector

def update_empty(scene):
    camera = scene.camera
    camera_location = camera.matrix_world.to_translation()
    # The camera looks down its local -Z axis
    direction = camera.matrix_world.to_quaternion() @ Vector((0.0, 0.0, -1.0))
    ray_origin = camera_location
    result, location, normal, index, obj, matrix = scene.ray_cast(
        bpy.context.view_layer.depsgraph, ray_origin, direction)
    # If the ray hits an object, update the empty's location
    if result:
        print("Ray hit object at location:", location)
        empty = bpy.data.objects.get("Empty")
        # If the empty doesn't exist, create it
        if empty is None:
            empty = bpy.data.objects.new("Empty", None)
            scene.collection.objects.link(empty)
        # Set the empty's location to the hit location
        empty.location = location
    else:
        print("Ray did not hit any object")

bpy.app.handlers.frame_change_post.append(update_empty)

Basically this creates an empty (or uses an existing one) and projects it onto the surface of whatever object is dead center in front of the camera. Each time the frame updates, it also updates the location of the empty. I guess you could then use the empty as a target for your depth of field, or something else.
@Grefins999 · 3 months ago
Oh, also, make sure you move to another frame in the timeline after you've run the script. The created empty won't show up before that. If there's nothing in front of the camera's center point, the empty won't move either. Check the console. Kinda hacky, but it works in 4.2. Or at least it does on my machine :D