Note from Lewis: Just wanted to clarify a few things. I realise in hindsight that the way I explained the path tracing process is a bit misleading. In the video I described shooting out a new set of rays for the indirect light, and then averaging these to get the final indirect light colour for each ray. That's not strictly how path tracing works. What I should have made clear is that you shoot multiple rays into the scene (per pixel) and then average over all of these samples to get the final calculated light for that pixel. You still shoot off rays to calculate the indirect light, but typically only one per bounce (although I've seen it done with more than one, in the same approach as described in the video).

The main difference, essentially, is that you shoot multiple initial rays into the scene to sample both direct and indirect light together, rather than shooting multiple rays out for the indirect light at each bounce. To be clear, the process of averaging rays using the Monte Carlo method and choosing new bounce directions from the hemisphere is the same; it's just that rays are sampled at the pixel level, rather than when calculating the indirect light. This is why it's called path tracing: you trace the path of a ray as it bounces through the scene, then average over a bunch of these paths, which ultimately gives you an estimate of the total light arriving at that pixel. You could argue that what I described in the video is closer to recursive ray tracing than strictly path tracing.

I hope this explains things better, but if it's still confusing, I recommend giving the following article a read: www.techspot.com/article/2485-path-tracing-vs-ray-tracing/ I should have clarified this better in the video, sorry. Cheers to those who rightly called out my mistake
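A minimal, runnable sketch of what Lewis describes above: many single-path samples per pixel, averaged Monte Carlo style. Everything in it is invented for illustration (one grayscale "pixel" looking at one diffuse sphere lit by a uniform sky, a made-up 0.7 albedo), and the proper cosine/BRDF weighting is omitted for brevity, so treat it as a shape-of-the-algorithm sketch rather than a correct renderer:

```python
import math, random

CENTER, RADIUS, ALBEDO = (0.0, 0.0, -3.0), 1.0, 0.7

def sphere_hit(orig, d):
    """Ray-sphere intersection; returns the hit distance t, or None."""
    oc = [orig[i] - CENTER[i] for i in range(3)]
    b = sum(oc[i] * d[i] for i in range(3))
    c = sum(x * x for x in oc) - RADIUS * RADIUS
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None        # small epsilon avoids self-hits

def random_hemisphere(n):
    """A random unit direction in the hemisphere around surface normal n."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        if 0.0 < sum(x * x for x in v) <= 1.0:
            break
    if sum(v[i] * n[i] for i in range(3)) < 0.0:
        v = [-x for x in v]               # flip into the upper hemisphere
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def trace_path(orig, d, depth=3):
    """Follow ONE path: a single random bounce per hit, no branching tree."""
    if depth == 0:
        return 0.0
    t = sphere_hit(orig, d)
    if t is None:
        return 1.0                        # ray escaped: hit the bright sky
    p = [orig[i] + t * d[i] for i in range(3)]
    n = [(p[i] - CENTER[i]) / RADIUS for i in range(3)]
    return ALBEDO * trace_path(p, random_hemisphere(n), depth - 1)

# The pixel's value is the average over many single-path samples:
samples = [trace_path((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)) for _ in range(1000)]
print(sum(samples) / len(samples))
```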
@nbvdkamp · 2 days ago
Respect for acknowledging your mistake, but there's already a lot of confusion around different forms of raytracing and related rendering algorithms, and it seems like this video will make it worse. Rerecording with a correct explanation would be my preferred solution, as most people won't go into the comments to read errata
@WunderWulfe · 2 days ago
@@Computerphile I believe your approach is called "Branched Path Tracing"; however, it becomes too resource-intensive due to the nature of recursion, branching, and unpredictable memory and stack usage, which is partially why sampling several times produces better results in terms of time taken. You can also compute these samples iteratively with no recursion at all, and the stack becomes one color per layer that you can fold into a rolling average, or a single color and weight value for a rolling weighted average. There are also some additional quirks, like using HDR in the renderer so that lights and emissives are considered brighter than white objects
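The rolling average mentioned above is worth spelling out, since it is what lets progressive renderers accumulate samples without keeping any history. A tiny illustrative sketch, where the random number stands in for the result of one traced path:

```python
import random

def add_sample(mean, sample, n):
    """Incremental (rolling) mean after the n-th sample; no history kept."""
    return mean + (sample - mean) / n

pixel = 0.0
for n in range(1, 1001):                  # 1000 progressive samples
    sample = random.random()              # stand-in for one traced path
    pixel = add_sample(pixel, sample, n)
print(pixel)                              # converges towards the mean (~0.5)
```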
@Dayanto · 2 days ago
@@WunderWulfe No, that's something quite different. Branched Path Tracing only applies on the first hit, and is mainly about splitting different material effects (diffuse, reflection, refraction, etc.) into separate passes, with customizable sample counts per type. After the initial bounce, it is just regular path tracing. It doesn't keep recursing. The main advantage of this is that you can crank up the number of samples for a specific effect without affecting everything else in the scene. The disadvantage is that you get worse aliasing, since you're starting a bunch of rays from the same position.
@Dayanto · 2 days ago
7:27 This seems very wrong. Isn't the entire point of *_Path_* Tracing to follow one complete _path_ at a time from beginning to end (i.e. "depth first") instead of recursively fragmenting into many new rays on each bounce ("breadth first")? You basically just keep picking one direction at random and repeat the whole process many times. By averaging over lots of paths, you still end up exploring many different directions, but without the exponential growth of (increasingly useless) samples for deeper levels. There are many tricks to improve path tracing beyond these basics. For example, there's a common tactic called Bidirectional Path Tracing that starts from both ends and connects them in the middle, to minimize the number of wasted rays when either the camera or light source is hard to get to from the other side. There is also Importance Sampling, which is about increasing the density of samples where it really matters.
@manon-gfx · 2 days ago
Yeah, this example in the video is recursive ray tracing, where you get this 'ray tree' problem. The whole point of path tracing is NOT doing that hahaha. The only time you trace an extra ray is with next event estimation, where you trace a shadow ray to a random light to pretend your next bounce went to that light.
@gottagowork · 22 hours ago
And then utilizing probability weights and Russian roulette to determine which ray path to follow. I would say the difference is: recursive raytracing sends one ray to a pixel and takes all recursive paths from there, no matter how little they contribute; pathtracing sends multiple (hundreds of) rays to a pixel, and each takes one random path until the very end. Then, noise treatment. Furthermore, global illumination before pathtracing could come from radiosity, photon mapping, or prebaked probes, so the concept of global illumination existed long before pathtracing became a thing. Also, modern game engines will utilize prebaked probes, sometimes converted into light maps added to the realtime rasterized component. If the mechanism is in there to do materials correctly, a path tracer is a no-brainer to use; a game engine may require a lot more twiddling.
@stevecummins324 · 17 hours ago
There may be a functional equivalent to ray tracing that completely avoids the need to follow the paths of any individual rays... While studying mechanical engineering I was taught about an idea called vector fields; an example would be gas velocities within a given volume. Such fields were usually defined algebraically: if provided with details specifying e.g. a location in space, the velocity vector at that location is returned. I.e. the vector field encodes *all* possible vectors. You can also do things like finding the flux/flow through defined surfaces... That seems like a possible approach allowing pre-computation of all possible bounces of light rays around a scene. Combine such a vector field with an illumination field, then sample it with a "view" port surface/cut; that should be a rendered scene.
@codahighland · 15 hours ago
@@stevecummins324 Theoretically, sure. Assuming your scene is a continuous vector field, you can take an antiderivative of the whole field with respect to the camera and the light source and at least hypothetically get a closed-form solution for the rendering equation. Unfortunately, doing that is absurdly complicated and probably even worse for performance for any nontrivial scene -- even if the final computation is faster, the manipulation of the data to find the field and its antiderivative when things move around is most likely going to cost more than you save.
@stevecummins324 · 10 hours ago
Maybe transfer functions are a better way of explaining my idea? Light has additive properties; transfer functions have composable properties. A single light transfer function for the entire scene could be built up by summing the transfer functions of how light reacts to each and every component of the scene. If a component moves, subtract the component's original transfer function and add the transfer function for its new position. Feeding the illumination through the transfer function would result in a rendering of the entire scene, including parts that can't be seen. Add a transfer function for the viewport to remove the hidden aspects, or use rendering/lazy evaluation to avoid unneeded calculations.
@cbuchner1 · 3 days ago
Next Video: Importance sampling and bidirectional path tracing
@eigentensor · 2 days ago
Too bad he doesn't even understand unidirectional PT :) This video was pretty bad, unfortunately
@peterprokop · 2 days ago
In the late 1980s we used the "radiosity" algorithm to do indirect global illumination. You basically break up the scene into small polygons, then you render the scene from the view of each polygon and distribute the light from that polygon into the entire scene. You repeat this until an equilibrium is achieved, and you end up with a scene that is "prelit", which actually looks pretty amazing. Not sure where this is used today, but in the 1980s it allowed real-time rendering on the Silicon Graphics workstation we used at the time.
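The iterate-to-equilibrium step described above can be sketched as a tiny linear solve. The three patches, form factors, and reflectivities below are invented numbers for illustration, not from any real scene:

```python
import numpy as np

# B = E + rho * (F @ B), iterated until the bounced light settles.
F = np.array([[0.0, 0.3, 0.2],    # F[i][j]: fraction of patch j's light
              [0.3, 0.0, 0.2],    # that arrives at patch i
              [0.2, 0.2, 0.0]])
E = np.array([1.0, 0.0, 0.0])     # patch 0 is the only emitter
rho = np.array([0.8, 0.5, 0.7])   # diffuse reflectivity of each patch

B = E.copy()
for _ in range(50):               # repeat until (near) equilibrium
    B = E + rho * (F @ B)
print(B)                          # the "prelit" radiosity of each patch
```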
@Kitulous · 1 day ago
> Not sure where this is used
I'm pretty sure I've seen this in Cinema 4D. It's like the entire scene was covered in Voronoi-diagram-like patterns, with black (or colorful, I don't remember) dots in the centers of these regions
@SeanTBarrett · 1 day ago
Radiosity was used for Quake 2's lightmaps (Quake 1 used direct light only), and is probably still used to create lightmaps in many videogames.
@shaytal100 · 3 days ago
"this clip rendered with Blender cycles not path tracing" Cycles is a render engine that uses path tracing!
@Computerphile · 3 days ago
I wasn't sure I had the path tracing switched on for the initial renders, hence the clarification - the later render probably did -Sean
@shaytal100 · 2 days ago
@@Computerphile Not an expert, but I don't think you can switch off path tracing in Cycles. It also does not make much sense; if you don't want path tracing, you would use the EEVEE (Blender default) render engine.
@theftking · 2 days ago
@@Computerphile Cycles is a path tracer. There is no way to render in Cycles outside of path tracing - there's no rasterized fallback or anything like that. Blender's rasterization renderer is called EEVEE and has gotten quite excellent recently with the inclusion of raytracing. That said, they're mutually exclusive. A renderer either uses path tracing, rasterization, or something else; path tracing isn't a "feature" that gets activated atop rasterization. That would be raytracing, where the rays are only used for shadows/lighting/reflections. Cycles raycasts to determine the literal color of every pixel.
@Computerphile · 2 days ago
OK - I can't remember exactly how I did this, but the later render took hours (I'd increased the depth), whereas the initial render was quick and had shown a fairly harsh shadow rather than the gradation Lewis talked about, so perhaps the depth was low
@yooyo3d · 22 hours ago
A friend and I wrote our own raytracer almost 30 years ago. We sampled many rays per intersection. It took hours and hours to render one 640x480 image on a 486 or early Pentium CPU. We used math to describe surfaces because it was faster than dealing with hundreds of triangles. I'm glad to see people still research this field at university.
@scaredyfish · 3 days ago
9:41 Blender Cycles is a path tracer. Or so I thought? It actually would be quite interesting to look at all the tricks raster engines use to calculate lighting. Like baked lighting, light probes, shadow maps, environment maps, deferred rendering, etc.
@OneWizard171 · 3 days ago
Cycles is definitely a path tracer. I assume the animation was rendered using Blender's Eevee, which is a rasterizer that supports many of the features you mentioned.
@ErazerPT · 2 days ago
Yes, it would be quite interesting, because the "bag of tricks" expanded as the available power did. Something like Doom or Quake had a very limited "bag of tricks" because no matter how much you could pre-compute, there was only so much you could use in realtime. Also interesting that, much like even older pre-compute "tricks", as technology advances most will prove counterproductive, as you get to the point where "calculate" becomes less expensive than "look up".
@TinyGiraffes · 3 days ago
11:10 I like the idea that, inside my computer, there is a tiny British man calculating the ray-tracing of me shooting every npc.
@davidmurphy563 · 3 days ago
For best results, let the Brit out once a week for some fresh air. Not on a Saturday night though, that can severely affect performance.
@jeromethiel4323 · 3 days ago
Actually, with a modern video card, you have HUNDREDS of tiny British men doing calculations! ^-^
@banderi002 · 2 days ago
Bit of a nitpick: path tracing IS a type of ray tracing. Ray tracing is an umbrella term used to describe all the rendering techniques that follow rays projected onto a scene; "classical" ray tracing that uses a single ray per pixel is essentially path tracing with bounces turned off. If you wanted a better "3 different rendering techniques" you could include others like cone tracing, photon mapping, or raymarching - it would be awesome to see a video on that!
@georgejones5019 · 3 days ago
The only game I'm aware of that uses path tracing is Cyberpunk 2077. It comes at a huge performance cost; most people use it for photo mode.
@CrushaKRool · 3 days ago
Yeah, although games still need to use some trickery to actually get acceptable performance with these newer rendering techniques. For example, by making use of DLSS (Deep Learning Super Sampling) / FSR (FidelityFX Super Resolution): the actual image is rendered at a low resolution and a neural network on the GPU scales it up to proper screen size, generating the pixels in between. And with Frame Generation, they can get away with rendering two consecutive frames and having the AI interpolate an in-between frame instead of actually rendering it - at the risk of introducing some visual artifacts.
@riff_wave · 3 days ago
Alan Wake 2 and Indiana Jones and the Great Circle also use path tracing, though of course with a very limited number of rays and state-of-the-art denoising.
@calvin7330 · 2 days ago
@@CrushaKRool The supersampling AI models can also be trained on high-res, high-quality renders that likely use path tracing
@ethan_ · 2 days ago
It’s totally playable with a 4090
@KillahMate · 2 days ago
Star Wars Outlaws and the new Indiana Jones game have a path tracing option; Alan Wake 2 has a hybrid renderer that uses path tracing when its graphics settings are set to max. There are also some remixes of older games that use path tracing - Quake 2 and Portal already have path-traced versions, and Half-Life 2 will soon get one too.
@WunderWulfe · 3 days ago
What a lot of engines will do to avoid recursive and stack-based solutions is sample the same pixel several times instead of spreading rays at every intersection point - a good example being Blender's progressive renderer. And of course there are denoisers like OptiX and Open Image Denoise, so you can use fewer samples and "correct" the noise in a path tracing render
@Matt-vv3tp · 3 days ago
I'd wager most "real-time" pathtracers do a single path per ray, as it's more GPU-friendly
@jcm2606 · 1 day ago
@@Matt-vv3tp With the introduction of hardware raytracing, modern path tracers can sort of get away with producing a tree of rays where each leaf of the tree corresponds to a unique path. Hardware raytracing implementations tend to procedurally generate new thread groups in hardware for each shader invocation, so the GPU is able to do it somewhat comfortably, as each thread is still executing the same shader. Branch divergence and cache locality are still issues, but that's where shader execution reordering comes into play on NVIDIA hardware: it allows the GPU to reorder threads within a work group based on a sorting key, so that all threads in a thread group ideally take the same branches and access the same memory regions.
@Petch85 · 3 days ago
The Blender example was cool, but I think you should have shown it a bit more, because Blender is a seriously powerful tool that is free for everyone to use. A lot of people have a PC with an RTX GPU (or a modern AMD GPU), and Blender Cycles can use the RT cores in those GPUs (via OptiX). Combined with a denoiser, this means many modern PCs can make relatively good-looking frames in Blender. (There are still limitations, but you can do a lot.) In many cases you can get away with 50-200 samples per pixel and a depth of 3-5, and Blender can render pretty good-looking frames in 10-30 seconds on many modern PCs. Blender is free and everyone can download it; you can even download "splash artwork" (demo files) and have your PC rendering a relatively complex scene in less than 5 minutes. I am such a big fan of the Blender project because you can do so much with it: combining images into a time-lapse video, simple video editing, 3D modeling, 3D rendering, different simulation types - you can even build on top of it or use Python to control Blender, and much, much more. For this case, though, you could easily render the same frame with different settings to show the difference the number of rays and the depth of the path make.
@eday244 · 1 day ago
Needed this, thx!!!!
@Agustinb14 · 3 days ago
There are games now using path tracing (like Portal RTX), but besides requiring lots of advanced new hardware, they are also cheating a bit: they take a small sample of rays, which results in a noisy frame, and then use AI to denoise it. The result is surprisingly good.
@cube2fox · 2 days ago
They are also using the ReSTIR algorithm to be more smart about where to shoot rays.
@majorjohnson8001 · 2 days ago
There are some neural networks out there that can take the grainy images and use that data to estimate the indirect lighting at a real-time frame rate.
@Ace-Brigade · 2 days ago
I can already imagine quite a few optimizations to the path tracing algorithm you explained. I hope (and assume) that these types of optimizations are already present in current path tracing implementations?
@dereklindgren8688 · 1 day ago
This is a fantastic overview of how path tracing works. I admit I don't have much of a graphics background, so bear with me; I have a question on the light reflection. Do the calculations take into account a value or factor for the magnitude of reflection based on the object - for instance, a piece of cloth vs metal? I assume they do, but when trying to compare these kinds of simulations to real life, I'm just curious what else is taken into account. Thank you!!
@mytube001 · 3 days ago
Lewis draws his percentage signs reversed! :)
@rjung_ch · 2 days ago
Thanks! 👍💪✌
@HexerPsy · 2 days ago
15:11 I was waiting for this part, but... it could have used more detail. You talk about normal maps, which influence lighting effects, but there are also other maps, such as roughness maps and specular maps, which inform the kinds of angles to calculate for the material. Also, about the number of samples: not all bounces are equal. For example, a smooth mirror has a very simple reflection, but a rough surface scatters light everywhere and will benefit from more samples. If you have a transparent object, you need more samples for the reflections on the glass, the refractions, and the objects behind the glass. It's noisy, so most ray tracers and path tracers come with denoising techniques. And sometimes pixels happen to land on very bright values, or some very bright color, by chance; these 'fireflies' need to be denoised. And if you want to make an animation, you can accept more noise - but not too much - yet the denoiser might mess with your frame-to-frame differences... It's a hard problem. Honestly, there are so many knobs to tune; it's really interesting!
@nbvdkamp · 3 days ago
What he describes here is not what path tracing does though? This is some form of recursive ray tracing. Path tracing only picks one direction to send the next ray in at once and doesn't create an exponential tree of rays. It creates a nice average by just tracing more paths per pixel and averaging over those. Terms like raytracing are often poorly defined or used to describe different things but this is definitely not path tracing.
@Matt-vv3tp · 3 days ago
Edit: I was wrong. It is pathtracing; the only defining trait of pathtracing is that it traces not just a single ray intersection (and then rays to light sources) but a whole set of bounces, which end up forming the "path" and creating the indirect "global illumination" where light bounces off some surfaces onto others. What you are describing is two separate approaches to how pathtracing can be implemented: you either have a singular ray path, or some sort of tree-like structure that keeps track of all of your bounces.
@Keavon · 3 days ago
This whole video is pretty misleading. It seems to be explaining recursive ray tracing, just like the previous video, while ambiguously claiming that this one is path tracing - which it isn't. In reality, path tracing is just the modern approach to ray tracing: a Monte Carlo algorithm for evaluating photon bounces through the BRDFs of scene geometry. Before BRDFs and the rendering equation, in the real early days, the recursive ray tracing shown in this video was more common, but it's almost a footnote in history compared to today's ray tracing (path tracing) methods.
@OneWizard171 · 3 days ago
The original innovation behind path tracing was that you should *avoid* recursive splitting like this video presents. See Kajiya's "The Rendering Equation" (1986), which introduced path tracing. You can arguably still call the presented technique path tracing, but it is very misleading to do so in an introductory video.
@nbvdkamp · 3 days ago
@@Matt-vv3tp Quoting Kajiya's original paper (titled the rendering equation) that introduced path tracing: "... an alternative algorithm for conventional distributed ray tracing. Rather than shooting a branching tree, just shoot a path with the rays chosen probabilistically. ... called path tracing"
@Matt-vv3tp · 3 days ago
@@nbvdkamp You are right, I've been using the wrong definition of pathtracing, my bad.
@jpalmz1978 · 2 days ago
I remember in the mid '90s using a program called Imagine to render 3D scenes. I could never get objects to cast shadows; it turns out I was using scanline mode instead of path tracing. It also turns out that on a stock Amiga it would take hours to render the simplest of scenes. Fast forward to 2024 and Blender can render in near real time 😊
@Sugondees · 3 days ago
I can imagine how intensive path tracing must be, especially if you need information from objects not rendered on screen in video games because they're occluded from the player's POV.
@Great.Milenko · 2 days ago
To be fair, all raytracing in games will largely ignore any occlusion or frustum culling; that contributes a reasonably significant amount to the demanding nature of raytracing.
@jcnwillemsen · 2 days ago
Awesome - is the source code for your own path tracer on GitHub by chance?
@hsaka08 · 2 days ago
Do 3D rendering programs like V-Ray and Arnold, or any other program, use this method to render objects?
@sarahburns6357 · 2 days ago
The Quake II sound made me look up in alarm
@unknownusername9335 · 2 days ago
2:31 "Quite realistic for the UK. But in reality, that's not the case" I always suspected the UK isn't real
@adamsparks5877 · 3 days ago
Could you path trace and then do a Gaussian blur on the values you get back on your surface?
@jameshughes3014 · 3 days ago
I think games do something similar, except they path trace a low-res version of the scene, then scale up the image and kind of mix it with the original image, as a way of getting smooth light effects with less compute. The issue with that, and with blur, is that light smudges go off the edges of things
@Matt-vv3tp · 3 days ago
You are right - this is a fairly commonplace technique under the term "filtering". If you look online you can find all sorts of fancy blurring techniques that retain edges, specular highlights, and the like. The rendered image will usually also have its lighting values split from the rendered image, so you only blur the lighting information and not the textures on surfaces (otherwise it'd look like your entire screen was covered in Vaseline) and then add it back on top.
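A toy version of that demodulate/blur/remodulate idea, with made-up random buffers standing in for a real render; real denoisers use edge-aware filters rather than the plain Gaussian used here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

albedo = np.random.rand(64, 64, 3) * 0.8 + 0.1   # surface texture colors
lighting = np.random.rand(64, 64, 3) * 2.0       # noisy traced lighting
noisy = albedo * lighting                        # what the renderer outputs

light_only = noisy / np.maximum(albedo, 1e-4)    # demodulate: strip textures
light_only = gaussian_filter(light_only, sigma=(2.0, 2.0, 0.0))  # blur lighting only
denoised = albedo * light_only                   # remodulate: textures stay crisp
```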
@simpletongeek · 3 days ago
I wonder what the difference is between path tracing and radiosity? Is it just terminology?
@BeheadedKamikaze · 2 days ago
Radiosity is a somewhat older technique to estimate global illumination that was sometimes used before so many rays could be shot in a feasible time. It's much coarser because of the triangle resolution, and every triangle has to sample every other triangle. It's also not great at capturing actual global illumination from light sources such as the sun/sky or a high-dynamic-range image, because those would also require triangulation. You'll get a much more accurate result using path tracing. Also, regarding the matrix: I'm not sure how you think that would give you infinitely deep rays. It's not just about scaling the intensity of each contribution; an iterative approach is required because the incoming indirect light from every triangle changes its output, which then needs to be propagated to every other triangle, repeatedly. The other issue with radiosity is that it doesn't capture the direction of light, so every surface acts like a "glowing" diffuse surface, making it unsuitable for handling specular reflections or refractions for materials such as metals or glass.
@saultube44 · 2 days ago
Raytracing should be done on GPUs then, with path tracing as an option, for practicality's sake
@bluegizmo1983 · 3 days ago
Cyberpunk 2077 has a real path tracing mode, and it is playable on a 4090 with path tracing enabled.
@SmellsLikeRacing · 2 days ago
Path tracing is possible in real time with fewer samples; you just have to do noise reduction.
@gemerpyros · 3 days ago
I think they are using some form of path tracing in Cyberpunk 2077. It's a stretch calling it playable, but it is able to render multiple frames per second. Mind-blowing stuff. I wonder how many rays they are using.
@cromefire_ · 3 days ago
But under the hood it's only 720p30 with very few bounces (that's why they have their "ray reconstruction" ML model) and then the AI tries to guess what the image could have been given more time. For sure not close to real VFX software.
@riff_wave · 3 days ago
2 rays 2 bounces each
@gemerpyros · 3 days ago
@@cromefire_ Ahhh, well that's disappointing hahaha. Makes sense though; it would be nuts if they'd actually managed real-time path tracing.
@cromefire_ · 3 days ago
@@gemerpyros I mean, it'll only get better, and a bit of the trickery isn't too bad - I rarely play at native res anymore - but it still has some way to go until it's "usable"
@gemerpyros · 3 days ago
@@cromefire_ Yeah, that's fair. But there is a concerning amount of reliance on upscalers, especially temporal upscalers. I also use DLSS or something similar, but I am noticing more and more ghosting in games.
@Sonofamensch · 2 days ago
For 24-bit color, it would only take 11 iterations of five rays for the denominator to exceed the total number of renderable colors, and just four for the same to happen in each 8-bit color channel. So you can see why even in the toy example a 3-iteration sample seems reasonable: if each endpoint of a chain of sample rays contributes on average less than one unit even when they're all fully saturated in different ways, that's probably too much effort to render one pixel.
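A quick sanity check of those numbers, worked out here rather than taken from the thread:

$$5^{11} = 48{,}828{,}125 > 2^{24} = 16{,}777{,}216, \qquad 5^{3} = 125 < 2^{8} = 256 < 5^{4} = 625.$$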
@bengoodwin2141 · 2 days ago
16:15 Even this example, if you were to blur the surfaces, would look fairly accurate, I think. The green bit on the cube, for example - that was already there
@jortor2932 · 3 days ago
It's interestingly interesting
@j7ndominica051 · 3 days ago
A cloudy sky is like a giant frosted lightbulb; you could consider it to be many point lights. It looks like the cube in the scene is illuminated by a circle above, but its shadow is sharp, as if all the light was projected from one point. Why random rays? Shouldn't you try to cover the space evenly? It will be noisy if the randomness changes between frames.
@scaredyfish · 3 days ago
I think noise turns out looking better. If you sampled evenly, I imagine you would get banding or other artifacts, because the sampling would be too regular and you would miss fine detail between the sampling points. Actually, noise changing between frames is what you want: if your noise doesn't vary, it's very noticeable, but when it changes per frame, it looks more like film grain.
@Matt-vv3tp · 3 days ago
Another thing to consider is how much you would have to cover by distributing the rays evenly. One ray hitting a surface can bounce in infinitely many directions, so let's say we divide the hemisphere into 4 distinct sections so we sample it "evenly". That means that per pixel we need the primary ray + 4 bounced rays (or 1 primary ray and 1 bounced ray over 4 frames/samples). The issue arises when you try to do this with the bounced rays too, and suddenly you need 1 primary ray, 4 bounced rays, and 16 more rays for the bounces of the bounces. Considering modern hardware targets ~2-3 rays per pixel per frame at best (which is 1 path, not the tree of bounces mentioned above), you would need a crazy number of samples to even come close to covering the space evenly. Edit: a related topic is anti-aliasing with raytracing. You might want to jitter your ray's start position on the pixel (technically your limited screen raster is what leads to aliasing, but with raytracing-based methods you can sample anywhere on the pixel "for free"), as sketched below. If you divide each pixel into 4 sections, then to get a fully covered space you again need 4 samples per pixel (+ all the other bounced rays and such).
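A small sketch of the jittered (stratified) pixel sampling mentioned in the edit above; the 2x2 grid is an arbitrary choice for illustration:

```python
import random

def jittered_offsets(n=2):
    """Split a pixel into an n x n grid and take one random point per cell,
    so samples cover the pixel evenly without forming a regular pattern."""
    return [((i + random.random()) / n, (j + random.random()) / n)
            for i in range(n) for j in range(n)]

print(jittered_offsets())   # 4 offsets in [0,1)^2, one per pixel quadrant
```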
@landsgevaer · 3 days ago
How does this guy render his fives and his percentage signs...!?
@pistonsjem · 2 days ago
Alternate title: How Path Tracing Makes Video Games Laggy and Unresponsive
@heniiku · 3 days ago
Thank you Computer-Kulusevski!
@PbPomper · 2 days ago
I think there are also examples of AI models that have "learned" how shadows and lighting work, so instead of calculating them with complex and expensive physics, you use a model. A similar method is being applied to other expensive stuff like fluid simulation. This stuff will have no issue running in real time.
@FrancisFjordCupola · 3 days ago
Ten years? Let's assume a doubling of ray-tracing performance every year. Ambitious. That gives 2^10, or a 1024x improvement - roughly 1000x. The best image (at 20:10 in) "may take an hour to render". That's one frame per 3600 seconds; a thousandfold improvement brings it to 3.6 seconds per frame, which is still a slideshow. I think some games already use some form of path tracing. You'd just want to improve the algorithm. Perhaps notice when added traces fail to add significantly and quit there instead of bouncing around further. Perhaps sample a few points and interpolate. Perhaps keep a bit of a "temporal" indirect light map for stationary light sources. In any case, smarter would be the way to go: taking ever more samples costs ever more render time, and you never want that in a game.
@SMorales851 · 3 days ago
Yeah, modern games use temporal accumulation for raytraced or pathtraced techniques. Each frame traces a few rays, and it takes quite a few frames for the image to stabilize. If you look at Unreal Engine 5 titles using its Lumen global illumination system, you'll notice graininess on some surfaces, and that objects seem to leave a blurry trail on backgrounds as they move (because the temporal cache is being disturbed).
@NotHugs · 3 days ago
You might want to look at what ReSTIR does. Sampling smarter > working harder.
@Matt-vv3tp · 3 days ago
"Perhaps notice when added traces fail to add significantly and quit there instead of keep bouncing around" - This is called the Russian roulette algorithm, commonly used in pathtracers. Depending on the "lightness" of the surface a ray hits (however you choose to define it), the ray might be killed; for example, at 20% lightness it has an 80% chance to die, and if it doesn't die, you make its contribution 5x bigger to account for all the rays killed early and avoid biasing your pathtracer (see the sketch below).
"Perhaps sample a few points and interpolate" - This is already done in games and hybrid renderers, where you are, for example, only path tracing where other methods fail, or only path tracing select surfaces (reflections and such).
"Perhaps keep a bit of a "temporal" indirect light map for stationary light sources." - There are already world-space irradiance caching techniques, and if you have stationary light sources, you might as well bake the lighting instead of doing it dynamically.
This comment is not meant in any negative way, by the way. I fully agree that focusing solely on "just making better hardware" is semi-futile, but I wanted to point out that there are loads of really smart and talented graphics engineers working on this or who have already worked on it.
@EyesOfByes · 2 days ago
Best explanation I've seen (pun not intended) on pathtracing
@kingofgamer1102 · 3 days ago
🔥🔥🔥🔥🔥🔥🔥🔥
@InfiniteQuest86 · 2 days ago
Hmmm, it seems like there's a missing step. The red wall shouldn't contribute so much to the white object. If you put a white object near a red wall in real life, there's maybe some very small contribution, but the object is still red. There should be some dampening step or something.
@Bluelightzero · 2 days ago
I really don't understand the point of this video.
@jesset2550 · 2 days ago
Why?
@DanSnipe-k8o · 3 days ago
The meme is women's dating profiles vs real life.
@MumboJumboZXC · 3 days ago
I still don’t think ray tracing is required, let alone path tracing. Optimize games first ffs.