Sorry for the short break :) I hope this video was worth the wait! Also, thank you so much for 10k subscribers!! EDIT: At around 1:17, I called OpenGL a graphics "library", which isn't the right word. OpenGL is an API, not a library!
@dynastylobster8957 (2 years ago)
I know this isn't entirely realistic, but what if you blurred the noise somehow instead of using accurate information?
@ahmeddawood8847 (2 years ago)
Use Vulkan?
@ssj3mohan (2 years ago)
You could do it for Unity, the [ Doomed Engine ]. Why not, right?
@localareakobold9108 (2 years ago)
If your ray tracing takes fewer resources, I'll buy it.
@teenspider (2 years ago)
*s h o r t e s t* break of all time (joke)
@mgkeeley (2 years ago)
A quick hack for fast antialiasing is to cast the rays through the corners of the pixels instead of the centers. It's basically the same number of rays, but you can average the 4 values for each pixel and get 1 step for free. Adding your offsets will improve the antialiasing over time as you currently have it.
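Sketched in GLSL, the corner trick might look like this. `rayColorThrough` is a hypothetical helper that builds and traces a camera ray through a given screen coordinate; in a real pass the corner colors would be traced once into a (width+1)×(height+1) buffer and shared by the four adjacent pixels, so the code below only shows the averaging:

```glsl
// Shade pixel (x, y) by tracing through its four corners. Corner rays are
// shared between neighbouring pixels, so a full-screen pass still costs
// about one ray per pixel while each pixel averages four values.
vec3 shadePixelCorners(ivec2 pixel) {
    vec3 c00 = rayColorThrough(vec2(pixel));                  // top-left
    vec3 c10 = rayColorThrough(vec2(pixel) + vec2(1.0, 0.0)); // top-right
    vec3 c01 = rayColorThrough(vec2(pixel) + vec2(0.0, 1.0)); // bottom-left
    vec3 c11 = rayColorThrough(vec2(pixel) + vec2(1.0, 1.0)); // bottom-right
    return 0.25 * (c00 + c10 + c01 + c11);
}
```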
@waldolemmer (2 years ago)
MSAA
@felineboy (2 years ago)
Add a fifth ray in the center (with 1/2 the weight, and 1/8 for each corner) and you've got the quincunx algorithm.
@mrburns366 (2 years ago)
I'm not a programmer and I can't wrap my brain around a pixel having a corner.. 🤦♂️ A pixel is a finite point with an X,Y coordinate, right? Say pixel 0,0 was in the upper left.. what would be the coordinates for the corners? 🤷♂️ Lol
@mgkeeley (2 years ago)
@@mrburns366 Good question! When casting a ray through the center of pixel "0,0", you actually cast it through coordinates "0.5, 0.5". The "screen" is virtual inside the raytracer and has floating-point coordinates for where the pixels are. Each pixel is a square with sides of length 1.0. Hope that helps!
@grande1900 (2 years ago)
Basically 4xMSAA
@ProjectPhysX (2 years ago)
Your first raytracing video motivated me to implement fast ray-grid traversal in my CFD software for ultra-realistic fluid rendering. The simple stuff already brought me quite far. I'm amazed by the more complex techniques you show in this video. Thank you for sharing your knowledge!
@uniquelyrics2331 (1 year ago)
that is some quite complex vocabulary
@ctbdjc (6 months ago)
I can only think of that aerodynamics-of-a-cow video.
@pablovega7697 (2 years ago)
Don’t believe it. You just saw a video online. Or used Google Street View. There’s no way you went outside.
@mattiskardell (1 year ago)
lol
@thehollowknerd3858 (1 year ago)
LMAO 🤣
@distraughtcat (1 year ago)
I don’t know how he survived. The light coming out of the window would have burnt his eyes out for sure.
@mattiskardell (1 year ago)
@@distraughtcat lol
@OllAxe (2 years ago)
9:17 One potential solution is to implement motion vectors and move the pixels in the buffer accordingly. That way you can move the camera while keeping old samples for additional data. Note however that newer samples need to be weighted more heavily so that new data is generated for previously invisible parts of the screen, and that specular reflections with low roughness would look inaccurate as you move around, since they are dependent on the camera direction. The latter may help the former a bit, but a proper solution might need to put specular reflections in a separate buffer and handle them differently. This is an important part of ray-tracing in Teardown, the SEUS PTGI Minecraft shader, Quake II RTX and many other RTX-powered games, so it's a well-known technique. There might even be papers or tutorials out there that describe how to do it in more detail. I also know that Dennis Gustavsson, the programmer of Teardown and its custom engine, has written a blog post on using blue noise to decrease perceived noise in ray-tracing, and other things about real-time ray-tracing that could be of help.
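A minimal GLSL sketch of that reproject-and-blend step. The `historyColor` and `motionVec` textures and the fixed blend factor are illustrative assumptions, and the disocclusion weighting and specular caveats above still apply:

```glsl
uniform sampler2D historyColor; // accumulated samples from previous frames
uniform sampler2D motionVec;    // per-pixel screen-space motion since last frame

vec3 accumulate(vec2 uv, vec3 newSample) {
    // Follow the motion vector back to where this surface was last frame.
    vec2 prevUV = uv - texture(motionVec, uv).xy;

    // No usable history if the surface was off-screen last frame.
    bool offscreen = any(lessThan(prevUV, vec2(0.0))) ||
                     any(greaterThan(prevUV, vec2(1.0)));
    float alpha = offscreen ? 1.0 : 0.1; // weight given to the new sample

    vec3 history = texture(historyColor, prevUV).rgb;
    return mix(history, newSample, alpha); // exponential moving average
}
```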
@NamePointer (2 years ago)
Thanks for the interesting insight!
@WilliumBobCole (2 years ago)
I came to the comments to say this, though as expected, others have beaten me to it. You're already doing temporal smoothing of the image, may as well not throw out the entire buffer. Obviously the more the camera moves, the fewer previous frames will be useful, but it's still way better than starting from scratch any time the camera moves
@oskartornevall8265 (1 year ago)
If you don't care about object movement, then simply reprojecting the samples based on the difference in camera movement / rotation and filtering based on projected vs real depth of the pixel works (a spatiotemporal filter, if you care about terminology). This is used in GTAO (Ground Truth Ambient Occlusion) if you want to look at an example of such a filter.
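The depth-based rejection for the camera-only case might look roughly like this; `prevViewProj`, `prevDepth` and the tolerance are assumed names and values for illustration, not taken from GTAO itself:

```glsl
uniform sampler2D prevDepth; // depth buffer saved from the previous frame
uniform mat4 prevViewProj;   // previous frame's view-projection matrix

// True if the reprojected history pixel belongs to the same surface,
// judged by comparing the projected depth against the stored depth.
bool historyValid(vec3 worldPos) {
    vec4 clip = prevViewProj * vec4(worldPos, 1.0);
    vec3 ndc = clip.xyz / clip.w;
    vec2 prevUV = ndc.xy * 0.5 + 0.5;
    if (any(lessThan(prevUV, vec2(0.0))) || any(greaterThan(prevUV, vec2(1.0))))
        return false; // surface was off-screen last frame
    float stored = texture(prevDepth, prevUV).r;
    float projected = ndc.z * 0.5 + 0.5;
    return abs(stored - projected) < 0.01; // tolerance is an arbitrary choice
}
```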
@convenientEstelle (1 year ago)
@@oskartornevall8265 Ground Truth Ambient Occlusion
@oskartornevall8265 (1 year ago)
@@convenientEstelle Yes, thanks. Was tired when I wrote that and misremembered the name :)
@JJIsShort (2 years ago)
When I was implementing a raymarching algorithm, a lot of my stuff looked fake. Thanks for giving me new features to implement. Something I did use was an AABB optimisation: I went from being able to render about 15 objects in almost real time to way more. If you want more frames, it's quite easy. You have also inspired me to implement ray tracing and try to make my own engine. Thanks.
@KingBobXVI (2 years ago)
One simple change to consider: look into different color spaces for image processing, RGB is very intuitive because it's what displays use, but it's not really the best option for things like blending values together - actual color info can get lost and coalesce into muddy grays pretty easily. If you do all the math in HSV color space though, you can do blending the same, and maintain better hue and saturation while you blend before converting back to RGB for display.
@omnificatorg4426 (2 years ago)
The main advantage of RGB over HSV is linearity, so you can easily add and multiply the values. Of course, don't forget about gamma correction, or you will get dark gloomy colours. The mean of #FF0000 and #00FF00 is #BBBB00.
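That #BBBB00 figure falls out of averaging in linear light rather than on the gamma-encoded values; a sketch using a plain 2.2 gamma (the exact sRGB transfer curve differs slightly):

```glsl
// Average two gamma-encoded colors by decoding to linear light first.
// With a = #FF0000 and b = #00FF00 this returns roughly #BABA00, i.e. the
// #BBBB00 mentioned above, instead of the muddy #808000 naive averaging gives.
vec3 meanOfEncoded(vec3 a, vec3 b) {
    vec3 linA = pow(a, vec3(2.2));     // decode gamma
    vec3 linB = pow(b, vec3(2.2));
    vec3 mean = 0.5 * (linA + linB);   // adding is only valid in linear space
    return pow(mean, vec3(1.0 / 2.2)); // re-encode for display
}
```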
@ThrillDaWill (2 years ago)
Great video!! I’m excited to see your new projects! Don’t stress too much over them and try to have fun!
@StormyMainShorts (2 years ago)
Hello
@monstrositylabs (2 years ago)
I only subscribed two hours ago. Looked at the date of your last video and assumed this channel was dead. Then, coincidentally, you posted the first video in a year 10 minutes after I subscribed!
@WhiteDragon103 (2 years ago)
If you separate view-dependent lighting (reflections) from view-independent lighting (lambertian) you can keep the view independent lighting buffer while moving the camera. If you move an object though, you'll have to reset both buffers.
@forasago (2 years ago)
Or you just accept that indirect lighting will lag behind / ghost a little. Only direct light / shadows need to keep up with the full framerate to look okay. Indirect lighting lags behind in basically every game engine, even Unreal 5.
@dazcarrr (2 years ago)
Other channels may do this sort of thing, but none go quite as in-depth on the technical side as you do. The 10k subs are well deserved!
@bovineox1111 (2 years ago)
Super stuff - always wanted to create a raytracer myself. Did a bit of work, but I think the hardest bit to do quickly is sorting the objects and determining the nearest collision.
@evannibbe9375 (2 years ago)
The better solution to avoid rendering from scratch when the camera moves is to save the colors found, not as a buffer based on what appears on the screen, but instead as a buffer of what color should be associated with each piece of the 3D objects those rays hit (color in this case being the total light that part of the shape could be considered to emit, which is averaged with the new calculation for that point). The one downside of this method is that it will require a lot more memory associated with each object in the scene (sort of like a baked light map texture), and that more metallic objects will take a bit longer to converge (since their lighting changes considerably with camera movements).
@spacechannelfiver (2 years ago)
You can do an optimisation by rendering into sparse voxel space instead of screen space. All of those dot products you calculated from the lights are still the same within voxel space; you can just cull the non-visible voxels and recalculate whatever lights are in screen space if they move / change intensity. It just becomes a data management task, which is much faster. Lumen works like this AFAIK.
@oskartornevall8265 (1 year ago)
If you want even more realistic material behaviour, try looking into GGX scattering, it's a microfacet distribution, meaning it models the materials as a ton of microscopic mirrors oriented depending on smoothness etc. Great video btw!
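For reference, the normal distribution term at the core of GGX is compact; this is the standard Trowbridge-Reitz form with the common roughness-squared remapping:

```glsl
// GGX / Trowbridge-Reitz normal distribution function: the fraction of
// microfacets whose normals line up with the half-vector h.
float distributionGGX(vec3 n, vec3 h, float roughness) {
    float a  = roughness * roughness; // common alpha = roughness^2 remapping
    float a2 = a * a;
    float nDotH = max(dot(n, h), 0.0);
    float d = nDotH * nDotH * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265 * d * d);
}
```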
@jorgeromeu (2 years ago)
Hi, a month or so ago I finished my bachelor's thesis, which revolved around path tracing. This video explains it better than anything I've seen anywhere else!
@caiostange2770 (2 years ago)
Hello! A fix for not having accumulation when moving the camera: instead of merging frames directly, take into account a velocity buffer. This should tell you how much each pixel moved each frame. With that, you can combine pixels with previous ones even if they moved. TAA does this as well; you should look into it.
@alex-yk8bh (2 years ago)
Proud to say you're the reason why I disable adblock sometimes! Such a great piece of content. Congrats.
@2002budokan (1 year ago)
Being able to summarize the entire ray-tracing process, its finest details and professional touches in such a short video is a special ability. Thanks.
@marexexe7308 (2 years ago)
The visuals in this video are stunning! Great job! I enjoyed every frame of the video.
@minhlucnguyen7614 (1 year ago)
I'm learning 2D art, and watching your video made me realize that the way an artist decides the hue, saturation, and value of a spot on a painting is exactly like how ray tracing works. The video is very fun and comprehensible to watch!
@GaryMcKinnonUFO (2 years ago)
Very cool indeed. I wrote my first tracer in BASIC, only Phong shading and of course it took hours to render a single polygon but it was a good exercise, makes matrix multiplication actually interesting :)
@monuminmonumin6783 (2 years ago)
I love that you're learning all this, sharing it, and especially that you're putting in the effort. Great work! I'm hoping for more advanced versions, just because I'm curious how far you can get!
@shitshow_1 (2 years ago)
Absolutely amazing. I'm an undergrad. I've been enthusiastically learning 3D computer graphics since 9th grade. You put all my learnings in a nutshell, which gave me a good recap. Thank you so much ❤
@yooyo3d (2 years ago)
You can use the Multi Render Target extension to render stuff into multiple buffers at the same time. Use those additional buffers to store the current state of the "recursion". Be wise and encode only the necessary things in those buffers. Then just iterate multiple times over those buffers and the image will get better and better.
@frankyin8509 (1 day ago)
This is a booster shot for my personal project to develop a tiny physics engine. Thanks a lot. Merry Christmas :D
@novygaming5713 (1 year ago)
One mistake I noticed is that reflective spheres have dark edges. This is caused by dot-product shading still being done for non-diffuse materials. The solution is to interpolate between the shaded brightness and full brightness as the roughness goes down.
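As a sketch, the suggested interpolation is a one-line blend around the existing N·L term (names here are illustrative, not from the video's code):

```glsl
// Blend the N·L term toward full brightness as roughness drops: a perfect
// mirror takes its brightness from the reflected ray, not from N·L shading.
vec3 shadeWithRoughness(vec3 albedo, vec3 normal, vec3 lightDir, float roughness) {
    float nDotL = max(dot(normal, lightDir), 0.0);
    float shade = mix(1.0, nDotL, roughness); // roughness 0 => no darkening
    return albedo * shade;
}
```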
@Supakills101 (1 year ago)
This is a massive improvement well done. Leveraging hardware acceleration would take this to another level.
@kamranki (2 years ago)
Lovely video! I love how you go out of your way to explain everything visually while keeping it simple. I am glad to have found your channel.
@lonewolfsstuck (2 years ago)
Should add a de-noiser post-process effect; it would help significantly.
@dexterman6361 (2 years ago)
To be accurate, the deep learning algo NVIDIA uses is called DLSS (Deep Learning Super Sampling). This can theoretically be used with RTX off. It is technically unrelated to RTX (hardware-accelerated ray tracing, or more accurately, hardware-accelerated bounding box checks for use in RTX).
@jcm2606 (2 years ago)
DLSS actually has nothing to do at all with raytracing, and as of 2.0 is essentially just a variation of TAAU with a machine learning model taking care of when to reject previous frames and how it should blend frames together.
@sjoer (2 years ago)
You could also draw a circle where the ray intersects a surface; this could help you with indirect lighting around objects, as you can use it to average a larger area! I think I saw Unreal implement this; it is called splotch mapping.
@ravenmillieweikel3847 (2 years ago)
A way that the noise-while-moving problem could be fixed is offsetting the memory buffer's pixels by the depth buffer in the direction of movement, rather than completely starting over. Another way to get rid of aliasing is to supersample: render the entire screen at a higher resolution, then scale it down.
@the_wobbly_witch (2 years ago)
The best way to implement bloom, imo, is to give it zero threshold but make the bloom increase exponentially, and to have two bloom levels: one for large screen-area bloom and one for small screen-area bloom.
@martinevans8965 (2 years ago)
Best video on this topic on YouTube. So well explained, and a great result.
@Raftube02 (2 years ago)
I think that one way to reduce the noise resulting from using random numbers independently of each other would be to use Perlin noise, because then the colors of the pixels would be more related to each other.
@Layzy3D (2 years ago)
If you continue this raytracer, you could add PBR materials and Fresnel (at the moment it looks like you blended between metallic and diffuse materials).
@thanzawoo3389 (2 years ago)
After browsing through so many channels, yours is by far the best. The explanation method is so great and detailed that even complex stuff is easy to follow.
@DaveeeOnTop (1 year ago)
I found the Sebastian Lague video also very informative. I think it wasn't out by the time you wrote your comment, but if you're still interested, I'd recommend you watch it.
@michaelleue7594 (2 years ago)
I imagine that color from pixel to adjacent pixels is generally pretty strongly correlated, actually. If you could implement it with a statistical element that could estimate the minimum distance between pixels at which the correlation disappears, you could use the gpu to output pixels that are at least that far apart to start with, and then use a different, more directed method to output pixels close to those pixels.
@zelcion (2 years ago)
Okay, I got this recommended on my YouTube front page, and I have never seen any of your videos. This is it, you're making it big. By the way, had I not looked at the view count and subscriber count, I would have thought this was a big production from a 500K-sub channel. Great work! Got my sub!
@londongaz2 (2 years ago)
Great video! You've inspired me to work on improving my own rt engine which suffers from many of these similar problems.
@djpiercy1235 (2 years ago)
I think a smart way to reduce noise while minimising performance impact would be to vary the number of indirect rays depending on the roughness of the surface. A surface with a roughness of 0 should only need to emit one reflection ray, since the light can only bounce in one direction. A surface with higher roughness would need a lot more samples: since the cone of directions that the light can bounce off in is so much larger, you need more rays to fill it up.
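A sketch of that adaptive budget (the cap of 16 samples is an arbitrary choice, not from the video):

```glsl
// Scale the indirect-ray budget with roughness: a mirror needs one
// reflection ray, a diffuse surface needs many to fill its bounce cone.
int indirectSampleCount(float roughness) {
    const int maxSamples = 16; // arbitrary per-pixel budget
    return 1 + int(roughness * float(maxSamples - 1));
}
```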
@yash1152 (2 years ago)
I am super happy to see so many comments about how the part with camera movements and reusing the data can be done. So many pointers are given in the comments that it seems like it would be enough material for a separate video on its own (:
@cgpoly3419 (2 years ago)
I just finished a path tracer for a university project. I am currently rendering the 30-second animation and hope it finishes rendering by the deadline in two days. While my project is quite different (it doesn't even try to be real time because it wouldn't work with our scene, and we don't use our sky map for lighting since it just contains stars and wouldn't contribute a significant amount of light (it's a space scene)), some of the problems were the same, especially the rewriting of some functions to make them non-recursive. It's reassuring to see that I am not the only one who is annoyed by some aspects of OpenGL.
@jcm2606 (2 years ago)
This isn't an issue with OpenGL, rather it's an issue with GPUs in general. GPUs don't have a stack, every function call is inlined and all automatic variables exist in a shared register file, so it's not possible for a GPU to support recursion, at a fundamental level. You *can* emulate recursion via iteration, by creating your own stack structure and dynamically appending to and iterating over it, but this will cause coherence issues and will significantly worsen register pressure, which can result in performance plummeting.
@youtubehandlesux (2 years ago)
You could improve the realism of the scene easily with some tonemapping algorithms; they basically imitate how eyes or cameras perceive different strengths of light (e.g. color desaturates at high light strength instead of straight up becoming #FFFFFF), as opposed to just a simple gamma function.
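One widely used curve with exactly that desaturating roll-off is Krzysztof Narkowicz's fitted ACES approximation; the constants below are from his published fit:

```glsl
// Narkowicz's fitted ACES curve: compresses unbounded linear radiance into
// [0, 1] with a soft shoulder, so highlights roll off and desaturate
// instead of clipping straight to #FFFFFF.
vec3 acesApprox(vec3 x) {
    return clamp((x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14),
                 0.0, 1.0);
}
```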
@malachyfernandez6285 (2 years ago)
This has inspired me to make a raytracer from scratch, in Scratch.
@chrisfuller1268 (2 years ago)
One of the best descriptions of ray tracing I have ever seen. Most of the other descriptions are full of jargon invented and only used by a very small group of people, making them incredibly hard to understand for someone who needs to work with ray tracing every once in a while, not every day.
@notgartificial8591 (2 years ago)
I recommend adding something called "Fresnel" to the engine, since the ground plane is looking a bit flat near the horizon. The more grazing the angle at which a ray comes in, the more reflective the object gets. This effect gets weaker the rougher the object is. It is also a mandatory feature if you want photorealism, since our brains know something is off. I also recommend adding caustics, because they also affect realism. When computing indirect lighting, you should make rays bounce off reflective surfaces, and if a ray reaches a light or a bright surface, you light the original surface accordingly.
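The usual cheap way to get that angle-dependent reflectivity is Schlick's approximation, where `f0` is the head-on reflectivity (about 0.04 for common dielectrics):

```glsl
// Schlick's Fresnel approximation: reflectivity rises toward 1.0 at grazing
// angles, which keeps the ground plane from looking flat near the horizon.
vec3 fresnelSchlick(float cosTheta, vec3 f0) {
    return f0 + (1.0 - f0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
```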
@szybowiec2140 (2 years ago)
THIS TUTORIAL REALLY WORKS I AM FROM PHILIPPINES! THIS MAN DESERVES A SUBSCRIPTION!
@oscill8ocelot (2 years ago)
So glad I subscribed last year. :3 Excellent stuff!
@nickadams2361 (2 years ago)
Man this looks like a very fun project to think through
@abenezertena6441 (2 years ago)
I thought graphics programming was rocket science. You inspired me a lot, thanks!
@christophercoronaios4732 (2 years ago)
Great job man!! I find your ray tracing videos very helpful and informative. Please make more!
@ruix (2 years ago)
Really cool project! But next time you should raise your volume a bit.
@fghjkcvb2614 (2 years ago)
Great to see you again!
@BossBeneBaby (1 year ago)
Hey, great video. In 2021 Khronos released the ray tracing pipeline for Vulkan. It supports all modern graphics cards (even AMD) and it's incredibly fast. I managed to write a real-time path tracer, and even at 4K resolution it is possible to render in real time.
@blacklistnr1 (4 months ago)
12:20 [no recursion, what to do?]: You can always convert a simple recursive function to an iterative one by manually using a stack in place of the automatic call stack you're used to. So:
1. Make a struct with the args.
2. Make an array of that struct with a MAX_SIZE; push when calling, pop to return.
3. If your function makes the recursive call in the middle, or makes multiple calls: split it into blocks controlled via a switch and a jump_point argument, and move any needed locals to args as well.
4. Abuse the power of this knowledge :))
P.S. For your case (struggling with FPS already), I think that just running computeSceneColor again would have been too expensive anyway.
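A minimal GLSL sketch of steps 1 and 2 applied to a bounce loop. `Hit`, `intersectScene`, `skyColor` and `reflectDir` are hypothetical stand-ins for whatever the renderer already provides:

```glsl
struct RayTask {
    vec3 origin;
    vec3 dir;
    vec3 throughput; // how much this path still contributes to the pixel
};

const int MAX_STACK = 8;
RayTask stack[MAX_STACK];

vec3 traceIterative(vec3 origin, vec3 dir) {
    int top = 0;
    stack[top++] = RayTask(origin, dir, vec3(1.0)); // "call" = push
    vec3 color = vec3(0.0);
    while (top > 0) {
        RayTask t = stack[--top];                   // "return" = pop
        Hit h;
        if (!intersectScene(t.origin, t.dir, h)) {
            color += t.throughput * skyColor(t.dir);
            continue;
        }
        color += t.throughput * h.emission;
        if (top < MAX_STACK)                        // bounce: push follow-up ray
            stack[top++] = RayTask(h.position, reflectDir(h, t.dir),
                                   t.throughput * h.albedo);
    }
    return color;
}
```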
@SCPokSecondaccound (1 year ago)
Consider enhancing the visuals with tone mapping, and adding bloom for a better appearance. Address the noise issue by employing two frame buffers: one for the current frame and another for the previous frame. Displace the second buffer using its motion vector, blend it with the current frame, and display both buffers for improved results. Edit: I know my comment is a bit late, but I hope it still helped, even if you've already fixed it. Edit 2: To enhance the prominence of the bloom, consider amplifying its visibility by expanding the radius.
@phillipotey9736 (2 years ago)
This has given me an idea for a quantum renderer. You have each point on the object "be a camera" and save the color value to that point on the surface. The colors are then constantly streamed to the global camera. Each object would then have a texture that emits light in every direction and updates only when something changes in the object's direct line of sight. It might take a lot of RAM, so the further things are from the global camera, the more dynamically they would be saved in memory. This works off of the current quantum interpretation that light is a wave until it's collapsed by hitting an object or interacting, and the true color/intensity is chosen.
@timothyoh9715 (2 years ago)
Your content is great man. Keep up the good work
@Matlockization (2 years ago)
It was great that you developed something from scratch. I'm sure the software caused no bottlenecks in the hardware.
@rigbyb (11 months ago)
Holy shit, this is amazing. Thanks for making this video
@crestofhonor2349 (2 years ago)
I do love seeing anything ray tracing related
@saricubra2867 (2 years ago)
Wow, I'm liking your path-traced engine. It looks very natural, unlike the overexposed, overblown Unreal Engine 5 implementation, which looks like everything is lit by a giant flashlight extremely close to the ground.
@miguelguerrero3394 (2 years ago)
Very good video. The next implementation could be importance sampling, so that indirect rays are biased towards the light sources, significantly reducing the noise.
@helenvalencia7073 (2 years ago)
For bloom, take your frame buffer, make every pixel under a threshold black, then sample it to a lower resolution and average it with the normal-sized one; repeat a few times, and then add that to the original image.
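Sketched as shader passes (the threshold value and luma weights are conventional choices, and the downsample would run repeatedly on successively half-sized render targets before the results are added back):

```glsl
uniform sampler2D frame;
uniform float bloomThreshold; // e.g. 1.0 in HDR units

// Pass 1: black out everything under the threshold.
vec3 brightPass(vec2 uv) {
    vec3 c = texture(frame, uv).rgb;
    float luma = dot(c, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 weights
    return luma > bloomThreshold ? c : vec3(0.0);
}

// Pass 2, run repeatedly on half-sized targets: 4-tap box downsample.
vec3 downsample(sampler2D src, vec2 uv, vec2 texel) {
    return 0.25 * (texture(src, uv + texel * vec2(-0.5, -0.5)).rgb +
                   texture(src, uv + texel * vec2( 0.5, -0.5)).rgb +
                   texture(src, uv + texel * vec2(-0.5,  0.5)).rgb +
                   texture(src, uv + texel * vec2( 0.5,  0.5)).rgb);
}
```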
@niloytesla (2 years ago)
You are my inspiration! I also tried to make my own 'ray tracing' engine, but I couldn't. It's hard, so I just stopped. But now I think I should start over.
@dragoncosmico (2 years ago)
You can study the bloom shaders of ReShade; they're also written in C.
@nafizjubaer1717 (2 years ago)
Try using motion vectors to offset previous frames when combining them with the current one during denoising, i.e. temporal denoising.
@snk-js (5 months ago)
Wow, this YouTube video is a masterpiece.
@darltrash (2 years ago)
Cool project, man!
@JorgetePanete (2 years ago)
I made one in Java without the GPU too, but projecting photons instead. Much... slower. I used it as a showcase of Java.
@sannfdev (2 years ago)
What about a de-noising algorithm? You could reference surrounding pixels to remove noise.
@anerdwillhackit (2 months ago)
Amazing work!
@qdriela (2 years ago)
0:56 most relatable thing ever
@mastershooter64 (2 years ago)
Oh, that's a nice idea (0:11). I should make videos about the physics engine that I want to write, so that people can give their feedback and I can improve my physics engine.
@jackcomas (2 years ago)
It's really great if your computer is near the window, especially when you're coding a ray tracing engine.
@hannescampidell (2 years ago)
Cool project! I couldn't have made such a perfect result.
@FelixNielsen (2 years ago)
Regarding using a buffer of previously rendered frames to reduce noise not working once the camera has moved, or is moving: it seems to me a reasonably good approximation to simply transform the image in the buffer to fit the new perspective. Of course it isn't a perfect solution, but it shouldn't be all that difficult, or resource-intensive, compared to the other work being done. Furthermore, it might be worth considering weighing the buffered frames so that earlier frames have a lesser impact than later frames, but I don't really know if that is worth the effort, or indeed, has the desired effect.
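The weighting part is essentially an exponential moving average, which a fixed blend rate gives for free (a sketch; the rate value is an arbitrary choice):

```glsl
// Blending at a fixed rate is an exponential moving average: a sample from
// k frames ago ends up with weight rate * (1 - rate)^k, so older frames
// automatically matter less.
vec3 accumulateEMA(vec3 history, vec3 current, float rate) {
    return mix(history, current, rate); // e.g. rate in [0.05, 0.2]
}
```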
@thecoweggs (2 years ago)
This is actually really impressive
@alfred4194 (2 years ago)
Amazing content once again
@NavyCuda (2 years ago)
I don't think I've laughed so hard in a long time... We used to joke about that unnatural light in the outside world that burned our skin. Let's stay under the warm comfort of fluorescent lighting!
@VRchitecture (2 years ago)
Everything outside of virtual space is unnatural ☝🏻
@ThePixelisaThor (2 years ago)
Clean comeback!
@hutchw2947 (2 years ago)
I love you! I'm surprised you don't have more subscribers.
@flameofthephoenix8395 (1 year ago)
You can probably just have it so that each time the camera moves, you average each pixel in the buffer with the current pixel, then add it back to the buffer, effectively making the program still use the old rendered bits but take them with a grain of salt. A better explanation: say you've rendered pixels A, B, and C, then the camera moves, so now you have pixel D, but all the previous ones aren't accurate. So you alter the previous pixels in the buffer entirely: A = (A+D)/2, B = (B+D)/2, C = (C+D)/2, and the next pixel you render becomes (A+B+C+D)/4. This of course will still have a little blur, but less so.
@peacefulexistence_ (2 years ago)
Afaik RTX "raytracing cores" just hardware-accelerate ray-triangle intersections. By machine learning, did you mean DLSS, which uses machine learning to upscale frames so that they don't have to be computed at full resolution?
@NamePointer (2 years ago)
I watched this video from NVIDIA a while back: kzbin.info/www/bejne/bICVc2x4j86NoLM From my understanding, it implies that RTX cards have to deal with quite a bit of noise, and that they use neural networks for denoising.
@peacefulexistence_ (2 years ago)
@@NamePointer Afaik there's no special hardware in the RTX card for denoising. It's just an NN which you can run on a non-RTX card. Same for DLSS. Thus both are post-processing steps done in software. The GPU just accelerates some of the computation in the NN with the tensor cores (I haven't looked into the tensor cores much).
@ABaumstumpf (2 years ago)
@@NamePointer You might want to watch that video again and then follow the links they provide in the description. The short version: no, RTX is just the brand name, and behind it are just Vulkan, DX12 and OptiX APIs that let you ... trace rays. The denoising is an entirely separate library. Or more accurately, not just one but multiple libraries with multiple approaches and algorithms, each designed for specific applications with their own pros and cons. But just "RTX" does not mean anything, as it is a marketing term that encompasses Vulkan ray tracing (vendor agnostic), which does just ray acceleration.
@NamePointer (2 years ago)
When I said "RTX technology" I didn't mean hardware specifically, but the entire ray tracing package NVIDIA gives game developers to add ray tracing to their game. If I understood correctly, that one uses neural networks for denoising.
@ABaumstumpf (2 years ago)
@@NamePointer"If I understood correctly, that one uses Neural networks for denoising." There are multiple libraries included, one of which uses the result of a trained network, others are just "simple" postprocessing or temporal solutions.
@Vioxtar (2 years ago)
This video was great, but the intro had me thinking you were going to tackle the primitives-only issue with some support for meshes.
@NamePointer (2 years ago)
I really wanted to, but I ran out of time as ray tracing for meshes involves a whole lot of other techniques new to me
@eboatwright_ (2 years ago)
Awesome! Definitely looks a lot better :)
@范博翔本悟 (2 years ago)
need the other half of the video!
@silverace_71 (2 years ago)
Cool project idea: make a ray tracing engine that's optimized for Vulkan. (I'm totally not going to turn it into a Minecraft mod.)
@spectacledsquirrel (2 years ago)
Enjoyed the video, well done mate! 13:23 Pretty sure you could've made it iterative; it seemed primitive recursive (you didn't show much there :l ). Even if it is strictly recursive, you could've simulated it with a stack to see if your idea works as intended.
@quads4407 (2 years ago)
Is it possible to paint the indirect illumination onto the spheres, so that it is recalculated only when the spheres move instead of when the camera moves? Also, each new frame would then reduce the noise even if the camera is in a different position.
@NamePointer (2 years ago)
It would be possible for objects with a high roughness, where the color of a point on the object stays the same regardless of where the camera is. Baked global illumination is actually really common in current engines, ray-traced or not, but I don't know how much such hybrid forms are used
@omarlopezrincon (1 year ago)
AMAZING!!!
@pablovega7697 (2 years ago)
Great video! Hope more come soon
@ImMagicDesigns (2 years ago)
Hey! Thanks so much for this video!
@NamePointer (2 years ago)
Glad you liked it!
@ImMagicDesigns (2 years ago)
@@NamePointer that was not my comment xD
@eborge9711 (1 year ago)
Hey! You are SO CLOSE to getting it to work! You do not have to reset the buffer when moving the camera! Instead, google temporal reprojection! You take per-vertex motion vectors (the kind used for motion blur in modern games) and offset the pixels in the buffer with them. That way, the surfaces that are new pixels get rendered, and the old surfaces accumulate samples. This has problems of its own. If you separate the bounce lighting and just use a basic N dot L, you can use a bilateral blur based on a depth pass, along with the temporal effect, to help sell it a bit. This doesn't work for objects with parallax like specular reflections, but for diffuse it works wonders! Oh, and of course, multiply an albedo texture like usual afterwards. You can combine a lot of different effects. I've seen games save ray-traced lighting to UVs for the worst case or screen-space effects, and then ray trace for the best case. The way UE5 does it is ray trace only on the lowest-LOD versions of models.
@NamePointer (1 year ago)
Thank you for the detailed explanation. I will definitely consider it for my next attempt!
@sinom (2 years ago)
Instead of discarding all the info whenever the camera moves, you could calculate the motion vectors and modify the current colour buffer based on that.