How Ray Tracing Works - Computerphile

71,529 views

Computerphile

18 days ago

Ray tracing is massive and gives realistic graphics in games & movies but how does it work? Lewis Stuart explains.
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com
Thank you to Jane Street for their support of this channel. Learn more: www.janestreet.com

Comments: 212
@RockLou 16 days ago
"We should go outside" rarely uttered words by a computer engineer
@loc4725 16 days ago
Yes, the bravery on show was impressive.
@n30v4 16 days ago
At that moment I knew it's AI generated.
@tango_doggy 16 days ago
you're thinking of the computer science side... the electrical engineering side entrances people to fly across the world testing different countries' fault protection systems
@vinnyfromvenus8188 16 days ago
literally a "touch grass" moment
@MePeterNicholls 16 days ago
He struggled on the line “that’s how real life works” tho
@emanggitulah4319 16 days ago
Awesome CGI... Looked so real having sunshine in Britain. 😂
@shanehebert396 16 days ago
Way back in the day, one of our computer graphics assignments in college was to write a parallel ray tracer in C on a SGI Power IRIS server. It was a lot of fun.
@maximmk6446 16 days ago
Is that sometime around the late 80s?
@sagejpc1175 16 days ago
You poor soul
@broyojo 16 days ago
first time on computerphile that we touched grass
@vitaly2432 16 days ago
I'm new to this channel and I hope it is the first time (and the last) that we touched a monitor, too
@benwisey 14 days ago
I think it may not be the first time touching grass or a monitor.
@lMINERl 16 days ago
How ray tracing works : make a line then trace it where it will go
@Hydrabogen 15 days ago
One may even go so far as to call the line a ray
@phiefer3 16 days ago
One thing that's sort of brushed over here is that while all the things he mentioned about rasterization sound more complicated than ray tracing, in the early days of computer graphics it was by far the simpler method (heck, in the early early days, things like having a light source weren't even a thing). Rasterization evolved from the fact that doing something like ray tracing for every single pixel was not even close to being practical, especially not in anything real time. Essentially, rasterization was a shortcut that allowed us to render graphics in a very simplified manner because it was all that the technology of the time was capable of.

As technology improved, we then added more bells and whistles to rasterization to improve it, like lighting, shadow maps, depth maps, etc. These all made it a bit more complex in order to take advantage of improved hardware and software, but it was still far easier than ray tracing, which was still beyond what could be done in real time. And this was mostly how graphics technology improved over time: adding more and more bells and whistles to shortcuts built on top of shortcuts on top of shortcuts.

But in recent years we've reached a turning point where two key things have happened: first, modern technology is now capable of things like ray tracing in real time; second, all the extra stuff that's been added to rasterization over the years to improve its quality is starting to approach the complexity of ray tracing itself. That's why ray tracing now seems like such a big leap in quality for such a small difference in performance. The tradeoff is still there, but eventually we'll probably see a point where ray tracing is both faster and higher quality than rasterization-based graphics.
@yoyonel1808 16 days ago
Very nice explanations, thank you 😊
@jcm2606 15 days ago
Another thing is that rasterization is starting to reach an accuracy ceiling where it's becoming disproportionately harder to push the accuracy higher. The best example of this I can think of would be complex interactions between different lighting phenomena (like a surface diffusely reflecting another surface that is specularly reflecting a light source, leaving a pattern of light behind on the diffuse surface; think the really pretty patterns reflecting off of water on the underside of a boat hull, that's what I mean). To accurately reproduce those interactions you really need the ability to have light be simulated/approximated in any order (ie you need the ability to diffusely reflect a specular reflection AND the ability to specularly reflect a diffuse reflection AT THE SAME TIME), which is extremely difficult to do with rasterization in a performance-friendly way, because of how rasterization uses a well defined order in its approximations (you could use some tricks like reuse the output of intermediate passes from past frames, then reproject, validate and maybe reweight past frame outputs to make them match more closely with the current frame, but that introduces a bunch of errors which hurts accuracy). Raytracing, on the other hand, gets this basically for free as it's an inherently "recursive" algorithm in that any type of lighting interaction is naturally nested within any other type of lighting interaction (at least for path tracing or recursive raytracing, most games nowadays are using "raytracing" to refer to replacing specific lighting interactions with standalone raytraced variants, so "raytracing" in the context of most current games still has a well defined order like rasterization).
@AnttiBrax 15 days ago
Are you sure about that historic part? Computerized ray tracing dates back to the late 60's. I think you might be only considering things from the real time graphics point of view.
@emporioalnino4670 10 days ago
I disagree about the small impact on performance, it's pretty substantial. Most gamers choose to turn RT off to save frames!
@IanFarquharson2 16 days ago
1987, BSc computer science graphics course, same maths, but all night to render a teapot on a minicomputer. PS5 today doing cars in realtime.
@mahdijafari7281 16 days ago
"It's really easy!" Until you need to optimise it... Great video btw.
@oldcowbb 1 day ago
true for many things in comp sci
@chrischeetham2659 16 days ago
Great video, but my brain couldn't cope with the monitor prodding 😂
@MelHaynesJr 16 days ago
I am glad I wasn't the only one. I was screaming in my head
@AySz88 16 days ago
I was curious and googled the model number. It's apparently a ~2012 monitor discontinued prior to ~2018. I still wouldn't approve, but I could imagine its continued existence being considered more bane than boon.
@mihainita5325 15 days ago
Same, I wanted to yell "stop doing that!" :-) The only explanation (in my head) was that it is in fact a touch screen (the way it deformed it might be?). So touching it is then a very natural thing to do.
@TheGreatAtario 15 days ago
@@mihainita5325 Touch screens don't do that light-squish-ripple effect. Also if it were a touch screen I would have expected the touches to be doing clicks and drags and such
@jalsiddharth 16 days ago
OMG ITS MY TA FROM THE COMPUTER GRAPHICS MODULE, LEWIS!!!! LETS GOOO!
@realdarthplagueis 16 days ago
I remember the Persistence of Vision ray-tracer from the 90s. I was running it on a 486 Intel PC. Every frame took hours to render.
@user-zz6fk8bc8u 16 days ago
Me too. It was awesome.
@philp4684 16 days ago
I played with it on my Atari ST. A 320x200 image would take all night, and you'd have to use an image viewer that did clever things with palette switching in timer interrupts to make it display more than 16 colours on screen to even see the result.
@feandil666 16 days ago
yep, and now a high-end game on a high-end GPU can raytrace a 4K image in 15ms (cheating of course; things like DLSS let the game raytrace at a much smaller resolution that is upscaled afterwards)
@ryan0io 16 days ago
Who remembers DKBTrace before there was POV-Ray? I ran it on a 386. Talk about needing patience.
@Zadster 16 days ago
POV-Ray was incredible. I first used it on my 386SX20. Even rendering 80x60 pixel images needed a LOT of patience. When I got my first 24-bit video card it was mindblowing! That and FractInt really soaked up CPU cycles.
@AySz88 16 days ago
Oof, the question from Brady at 14:10 about the "pixel" units, which then gets glossed (ha) over, really does touch on one of the hard, subtle things about raytracing: avoiding infinite loops. If you're not careful, you can make lighting and reflections that don't "conserve energy" and end up unpredictably creating an infinite number of rays, implying an infinite amount of light, or (usually) both! So a ray can't simply be a "pixel" - you have to be pretty careful with precise units ("radiance" vs "radiosity", etc.), making it potentially less forgiving than rasterization shaders when creating new effects. Meanwhile, rasterization can start out a lot more intuitive for artists who would like to think of the image as a canvas. Raytracing isn't all easier all the time.
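The convergence point in the comment above can be shown with a toy recursion (a sketch with assumed numbers, not anything from the video):

```python
# Hypothetical numbers: each bounce multiplies incoming light by a
# reflectance below 1, and recursion is capped, so total radiance is
# a truncated geometric series instead of an infinite loop of rays.

MAX_DEPTH = 5        # hard cap on bounces (assumed value)
REFLECTANCE = 0.8    # "conserves energy" because it is < 1

def radiance(depth=0):
    if depth >= MAX_DEPTH:
        return 0.0
    emitted = 1.0  # pretend every hit point sees a unit light source
    return emitted + REFLECTANCE * radiance(depth + 1)

# Bounded above by the geometric-series limit 1 / (1 - 0.8) = 5,
# no matter how large MAX_DEPTH gets.
print(radiance())
```

With a reflectance of 1 or more, the same recursion would grow without bound as the depth cap is raised - exactly the "infinite amount of light" failure mode described above.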
@yooyo3d 15 days ago
I wrote my first ray tracer ~30 years ago on a 486 PC in MS-DOS. I used the Watcom C compiler to build 32-bit code and use the FPU on the CPU. My friend and I developed a wide variety of math functions to calculate intersections between a line and a triangle, sphere, quadratic surface, boxes, even Bezier surfaces. Then we developed math functions to describe materials and surfaces, and to calculate refraction and reflection. Our material system allowed defining objects with multiple reflection and refraction indices. Then we developed procedural texturing, soft shadows, area lighting, CSG, "blobby" objects... We knew the slowest thing was a scene full of triangles, so we tried hard to describe scenes with more or less complex math functions. Part of the project was to speed up ray hits, so we tried various algorithms like octrees, bounding spheres and bounding boxes, and finally stuck with a unique approach of projecting bounding boxes onto the world axes and stepping through entry/exit points, like open and closed braces. The scene was described in code itself; we didn't have any 3D editor. I was trying to develop one but eventually gave up. This was long before we had internet access. Then we got internet access and found POV-Ray.
@DS-rd8ud 16 days ago
16:01 Arnold warning sign telling people to not touch the robots or the table. The ultimate security measure.
@BrianMcElwain 16 days ago
A golden opportunity was missed here to explain subsurface scattering par excellence via that 99.9% translucent skin of dear Lewis here.
@TheSliderW 15 days ago
And bounce light contributing to object color and lighting. Like the green grass lighting him up on the left side :)
@Sora_Halomon 16 days ago
I know why most Computerphile videos are filmed indoors, but I really like seeing outdoor footage. It kinda reminds me of the early Numberphile videos filmed in the stadium and on roads.
@luispereira628 12 days ago
Whenever someone who is very passionate about his work speaks you know the video will be great! Great explanation and loved the passion 😊
@DigitalJedi 14 days ago
Great breakdown of how ray tracing works. I'd love to see another video comparing the usual ray-traced approach to things like cone tracing and the other shape-tracing ideas. Would be interesting to see what optimizations and tradeoffs each makes.
@arrowtlg2646 16 days ago
6:20 man seeing that campus is a throwback! Miss Nottingham!
@250bythepark 14 days ago
I think you're a great addition to Computerphile, hope you make more videos, really interesting stuff!
@Joseph_Roffey 16 days ago
I was sad that he didn't mention whether mirrors themselves get treated as potential light sources. From the way he described it, it didn't seem like they automatically would, and in the final example I was surprised the shadow didn't seem to change shape when the mirrors were added, as surely the light would have been able to hit more of the shadow with the mirrors on either side.
@AnttiBrax 16 days ago
Mirrors aren't light sources per se. Whenever the ray hits any surface you basically start the same calculation as you did when you shot the first ray and add that result to the original ray. So a ray that hits a mirror may eventually hit a light source. What was skipped here was that the ray can keep bouncing hundreds of times before it hits a light source and each hit affects the colour of the pixel. And that's why ray tracing is so slow.
@unvergebeneid 16 days ago
Mirrors are just another surface in ray tracing but they are funnily enough the easiest surface to compute. Diffuse materials are much harder because each point needs to generate in theory infinitely many rays itself, whereas a perfect mirror needs only a single ray.
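That asymmetry is visible in code: a perfect mirror needs nothing but the reflection formula r = d - 2(d·n)n, one ray in, one ray out. A minimal sketch (not from the video):

```python
# Reflect a direction about a unit surface normal: the entire
# "shader" a perfect mirror needs, versus many sampled rays for a
# diffuse surface.

def reflect(d, n):
    """Reflect direction d about unit normal n (both 3-tuples)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray travelling down-right hits a floor whose normal points up:
print(reflect((1, -1, 0), (0, 1, 0)))  # -> (1, 1, 0), bounced up-right
```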
@SMorales851 16 days ago
No, mirrors are not light sources. For the mirrors to have the effect you described, a less basic raytracer is necessary. Typically, the rays are not bounced directly towards the light (that's more of a classic rasterizer thing). Instead, the behavior depends on the type of surface. Smooth, mirror-like surfaces bounce the ray at one specific angle, like mirrors do in real life. Rough surfaces instead "split" the ray into smaller rays that shoot out in random directions; the color of the surface is then the sum of the colors returned by those rays. The more of those subrays you have, and the more times they are allowed to split and bounce recursively, the better the image quality (but performance suffers greatly). That random ray bouncing generates what's known as "indirect lighting", which is all light that doesn't come directly from a light source but is instead reflected off of something else first.
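The "split into smaller rays and sum" step is just a Monte Carlo average. A toy sketch (the sub-ray radiances are faked with random numbers purely for illustration; a real tracer would bounce each sub-ray through the scene):

```python
import random

def indirect_light(n_subrays, seed=0):
    """Average many fake sub-ray radiances: a Monte Carlo estimate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_subrays):
        total += rng.random()  # stand-in for tracing one random bounce
    return total / n_subrays

# More sub-rays lowers the noise of the estimate, at a steep cost in
# performance - exactly the quality/speed tradeoff described above.
print(indirect_light(4), indirect_light(4096))
```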
@jcm2606 15 days ago
@@SMorales851 To be pedantic, some raytracers do actually trace rays directly towards light sources. There's an entire optimisation technique called next event estimation where a subset of rays are specifically dedicated to being traced towards known light sources, then the returned energy value is weighted to conserve energy since you technically did introduce some bias to the algorithm by doing this. There's also another optimisation technique called reservoir importance sampling which generalises NEE (specifically as part of multiple importance sampling which combines NEE with BRDF importance sampling) to sample _pixels_ that are known to contribute meaningfully to the image, rather than specifically known light sources (this technique is commonly known as ReSTIR, though reservoir sampling is useful in other areas so ReSTIR isn't the only use of it).
@BruceZempeda 15 days ago
Best computerphile video in a while
@jordantylerflores2993 15 days ago
Thank you! This was very informative. Could you do a segment on the differences between Ray Tracing and Path Tracing?
@therealEmpyre 9 days ago
In 1986, I wrote a program that used ray casting, a more primitive form of ray tracing, for a university project. It took several minutes for that 286 to render a simple scene at 320 by 200 by 256 colors in VGA.
@musthavechannel5262 16 days ago
Obviously it is an oversimplified version of ray tracing, since it doesn't explain why shadows aren't pitch black
@trevinbeattie4888 16 days ago
Ambient lighting :)
@evolutionarytheory 12 days ago
It's implied. If the light source has a non-zero width, it's implied. If you bounce the ray more than once, it's also implied. But he didn't cover stochastic raytracing, which would have made it more obvious.
@HarhaMedia 11 days ago
Writing a bunch of raytracers as a hobbyist programmer really helped me understand vectors and matrices.
@totlyepic 16 days ago
16:22 Dude must want a new monitor the way he's jamming his finger into this one.
@ukbloke28 15 days ago
Wait. Where did you get that oldschool printer paper? I used to draw on that as a kid in the 70s, you just gave me mad nostalgia. I want to get hold of some!
@glitchy_weasel 7 days ago
I think it would be fun for more outdoor Computerphile episodes :)
@shreepads 16 days ago
The explanation feels unsatisfactory, probably in an attempt to keep things simple, e.g. a point occluded from the light source would be rendered black, but clearly it's picking up diffused light from other objects
@jean-naymar602 16 days ago
I don't think that's diffuse lighting in this specific implementation. It looks like the shadow color is set to some greyish black. You could set the shadow color to any color, it does not need to be black.
@mytech6779 14 days ago
They call that ambient lighting, and it's just a preset level of light applied to every pixel. Doing true diffuse lighting via raytracing is not possible with current computing power; there are far too many re-reflection calculations. Even the hard shadows will not be accurate in his mirror-room example, because the rays would need to originate at the light source, not at the observer, to have any practical chance of finding all of the primary reflections that would be lighting the "shadow" area (let alone secondary and tertiary reflections). The reflected-lighting thing can be done to a limited extent, but rendering time increases by several orders of magnitude, so it is only used in pre-rendered scenes, not realtime gaming.
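The ambient trick amounts to a couple of lines of shading code. A sketch (the names and the 0.2 level are assumptions, not from the video):

```python
# Every pixel gets a constant base level of light, so points that
# cannot see the light source come out grey rather than pitch black.

def shade(direct_light, light_visible, ambient=0.2):
    """ambient is a preset constant, not a physically derived value."""
    return ambient + (direct_light if light_visible else 0.0)

print(shade(0.8, True))   # lit point: ambient + direct
print(shade(0.8, False))  # shadowed point: ambient only, not black
```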
@shreepads 3 days ago
Thanks!
@joaoguerreiro9403 16 days ago
Computer Science is awesome!
@AlbertoApuCRC 16 days ago
I followed a raytracing course a few years ago... loved it
@WeeeAffandi 16 days ago
How different is Path Tracing?
@paull923 15 days ago
great video, thank you very much! Regarding 10:01 "How do we figure out what objects we've hit", can you make a video about that?
@lolroflmaoization 16 days ago
Honestly you would get a much better explanation of rasterization and its limitations as compared to raytracing by watching Alex's videos on Digital Foundry
@karolispavilionis8901 16 days ago
In the last comparison with mirrors, shouldn't the shadow on the box be smaller because of light bouncing off the mirror, therefore illuminating below the box?
@cannaroe1213 16 days ago
Mirrors reflect light, but that doesn't make them special. A white surface will probably reflect MORE light, but a mirror keeps the light linear without scattering. So a mirror could reflect less light, because it makes a reflection, but it's darker. Does that make sense? Anyway, since everything reflects light, not just mirrors, with some average amount of "reflectiveness" based on the material properties, there is something that's the opposite of shadow maps, called light maps, which layers additional light over everything based on the math behind what's emitting light
@jcm2606 15 days ago
In short, yes, it should. Light from the light source and the rest of the scene should specularly reflect off of the mirror and onto the floor below the box, illuminating the box's shadow. It's not doing that in the raytracer likely because the raytracer is simplistic and isn't handling multiple bounces correctly (if at all), but if it did then you'd naturally see what you're describing (especially with a path tracer, which is the big boy raytracer).
@jcm2606 15 days ago
@@cannaroe1213 None of this makes any sense. Firstly, mirrors (or, rather, metals) are actually a little special since they're one of the few surfaces that can reflect almost the entirety of incoming light in any outgoing direction, whereas most other surfaces will generally lose some amount of light to diffuse transmission at perpendicular angles (ie angles where you're looking straight down at the surface). Secondly, because mirrors keep the light linear without much scattering (being real pedantic here but there will always be _some_ scattering due to debris on the surface and inner layers of the mirror, and imperfections in the mirror's surface), the light they reflect is typically actually _brighter_ for the outgoing direction since more of the incoming light is exiting in that outgoing direction (light source emits 100% incoming light; incoming light reflects off of a very rough surface, 95+% of the incoming light is scattered in different directions,
@SeamusHarper1234 16 days ago
I love how you got the green color all over your hands xD
@toxicbullets10 16 days ago
intuitive explanation
@Bugside 16 days ago
Should have shown bounce light, ambient occlusion, colors affecting other objects. I find those the coolest effects
@jonnydve 16 days ago
I love that you are using (presumably old stock of) infinite paper. Growing up I used to draw lots on it because my grandfather had tons of it still lying around (I was born in '99)
@zebraforceone 16 days ago
A question about @14:20: I see the bounce towards a fixed number of point lights. How does this work in raytracing with emissive surfaces?
@feandil666 16 days ago
Same: the surface is just considered a light source directly, so its emissivity is added to whatever light would normally be there
@Serjgap 16 days ago
I am now understanding even less than before
@ChadGatling 16 days ago
My first thought is: how would ray tracing handle a scene where there is something next to a big red wall, where you really should be able to see some red reflections? If the ray just hits the car then goes to the sun, the ray will never see the wall and not know to tint the car a bit red
@kevingillespie5242 16 days ago
(i have no graphics experience but) My guess is you can track rays that bounce off the wall and hit the car. Perhaps construct some sort of graph that tracks all the values at each point a ray hits so you can track how much light from the car is supposed to reflect off the wall? But each feature like that will make it more expensive / memory intensive.
@mrlithium69 16 days ago
wouldn't apply to this convo. need additional calculations - specular and diffuse reflection
@sephirothbahamut245 16 days ago
You do multiple ray bounces. More rays, more bounces = more realistic image. For realtime rendering you mostly stick to 1 or 2 bounces. Stuff like rendering for cinema can easily go past 200 bounces, and hundreds of rays per pixel. That's why they can take days to render a scene.
@BeheadedKamikaze 16 days ago
@@sephirothbahamut245 You're correct, but that process is typically called path tracing to make it distinct from ray tracing, which does not account for this effect.
@AySz88 16 days ago
Ironically this is what the Cornell Box is supposed to test too (note the big red and green side walls). There's also an actual photo of a real life Cornell Box to compare to, where you see the effect you mention.
@omegahaxors3306 16 days ago
Minecraft still uses rasterization, the ray-tracing thing was a mod made by a graphics card company as part of a marketing campaign. They made mods for a bunch of other games too, from what I understood they had an API that let them hook into the lighting engine.
@ocamlmail 8 days ago
Tremendously cool, thank you!
@Kane0123 16 days ago
“Now you can see why that’s more efficient” - another line added to my CV.
@skyscraperfan 16 days ago
Doesn't it get complicated, if a ray hits a rough surface and is diffused in many directions?
@user-zz6fk8bc8u 16 days ago
Yes in practice (because real ray tracing is still more complicated than the shown examples) but in theory it's simple. Just shoot more rays per pixel and if you hit a surface you randomize the continuation path based on the roughness. This way you can even render stuff like foggy half transparent glass.
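One way to picture that randomized continuation path (an illustrative sketch; the linear blend by roughness is a simplification of real BRDF sampling):

```python
import math
import random

def scatter(d, n, roughness, rng):
    """Blend the perfect mirror bounce with a random direction."""
    dot = sum(di * ni for di, ni in zip(d, n))
    mirror = [di - 2 * dot * ni for di, ni in zip(d, n)]
    jitter = [rng.gauss(0, 1) for _ in range(3)]
    mixed = [(1 - roughness) * m + roughness * j
             for m, j in zip(mirror, jitter)]
    length = math.sqrt(sum(c * c for c in mixed)) or 1.0
    return [c / length for c in mixed]  # unit outgoing direction

rng = random.Random(0)
# roughness 0 -> always the exact mirror direction (a sharp mirror);
# roughness near 1 -> mostly random (the foggy/frosted glass look).
print(scatter((1, -1, 0), (0, 1, 0), 0.0, rng))
```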
@skyscraperfan 16 days ago
@@user-zz6fk8bc8u That sounds like a lot of computation per pixel, because if some of those rays hit another rough surface, you would need even more rays.
@CubicSpline7713 16 days ago
@@skyscraperfan There is a cut off point obviously, otherwise it would never finish.
@SteelSkin667 16 days ago
@@skyscraperfan That is why in practice rough materials are more expensive to trace against. In games where RT is only used for reflections there is often a roughness cutoff, and sometimes the roughness is even faked by just blurring the reflection.
@morlankey 16 days ago
Why is the ray-traced blue box casting a shadow below it but isn't lighter on top?
@scaredyfish 13 days ago
I’d like to know more about how modern game engines use ray tracing. As I understand it, it’s still rasterised and the ray tracing is an additional step. They do a low resolution ray trace of the scene that’s then denoised and used as a lighting pass - that’s how it can run at game frame rates. Is that understanding correct?
@frickxnas 16 days ago
Ray tracing for rendering and ray picking for mouse are incredible. Been using them since 2014
@j7ndominica051 15 days ago
His head is filling the Z-buffer. Where does diffuse light come from, which has been reflected from other nearby objects?
@erikziak1249 16 days ago
Why is the shadow grey and not black then if you do not render anything there?
@Roxor128 16 days ago
And let's not forget the mad geniuses of the Demoscene who pulled off real-time ray tracing as far back as 1995! Recommended viewing: Transgression 2 by MFX (1996), Heaven 7 by Exceed (2000), and Still Sucking Nature by Federation Against Nature (2003). All of them have video recordings uploaded to YouTube, so just plug the titles into the search box.
@bishboria 15 days ago
Heaven 7 was/is amazing
@yooyo3d 15 days ago
TGR2 by MFX didn't use any ray tracing. It was a very effective fake.
@Roxor128 15 days ago
@@yooyo3d Citation? Better still, how about an annotated disassembly walking us through what it actually does?
@nickthane 15 days ago
Highly recommend checking Sebastian Lague’s video where he builds a raytracing renderer step by step.
@thenoblerot 16 days ago
I started ray tracing with POV-ray on my 386/387 with 2mb ram. Hours or even days for 320x240 image!
@unvergebeneid 16 days ago
Would've been nice to compare the toy ray tracer he built as an undergrad to an actual ray tracer with bounce lighting. Just to illustrate how much the explanation in this video only barely scratches the surface.
@jonnypanteloni 16 days ago
I can finally gather my steps and talk to hilbert about how I just can't stop dithering on this topic. Maybe I should get my buckets in order.
@OnionKnight541 16 days ago
can you please do a part II of this video using (say) apple's SceneKit / Vision frameworks, so that we as devs can see how that is implemented properly?
@emwave100 16 days ago
I vote for more computer graphics videos. I am surprised ray tracing hasn't been covered on this channel before
@omegahaxors3306 16 days ago
If you've ever played minecraft or a shooter you've used a ray trace, because that's how the game knows what you're targeting.
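A common way that targeting check is implemented is a ray-box intersection using the slab method; a minimal sketch, not tied to any particular game:

```python
# Does a ray starting at `origin` along `direction` hit the
# axis-aligned box [box_min, box_max]? Classic slab method.

def ray_hits_box(origin, direction, box_min, box_max):
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0:
            if not (lo <= o <= hi):  # parallel to and outside the slab
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        tmin = max(tmin, min(t1, t2))  # latest entry across all slabs
        tmax = min(tmax, max(t1, t2))  # earliest exit across all slabs
    return tmin <= tmax

# Looking straight down +x at a unit block two units away:
print(ray_hits_box((0, 0, 0), (1, 0, 0), (2, -0.5, -0.5), (3, 0.5, 0.5)))
```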
@jojox1904 9 days ago
Can you do another video discussing the chances and dangers of AI? The last videos on this I saw from your channel were from 8 years ago and I'd be curious about an updated view on this
@Parax77 16 days ago
In that last scene... whilst the view was reflected in the mirror, the light source was not? How come?
@jcm2606 15 days ago
Would need a better image to really know for sure since the angle of the camera wouldn't have allowed us to see the light source to begin with (it looked like the light source was at the center of the ceiling, whereas we could at best see the far left and right sides), but it could have also been that the raytracer just wasn't set up for it to begin with. The style of raytracing that he was describing will generally treat all light sources as infinitely small points, so at best you'll only get one or two pixels that represent the light source, which may have been why we couldn't see it. Generally in that case you need to either trace rays in random directions within a cone pointed at the light source (which simulates a spherical light source with a specific radius), or you need to have the shaders set up to handle glossy surfaces to "blur" the reflection of the light source out across the surface.
@badcrab7494 15 days ago
In future with more powerful computers, would you do ray tracing the correct way round with rays starting at the light source rather than the camera?
@Juansonos 15 days ago
I believe we start at the camera to control how many calculations need to be done. Starting rays at the light source sends more rays to more places, and a lot of those would not be useful for rendering a scene from the camera's vantage point, leading to wasted computations that did nothing to make the result better.
@richardwigley 16 days ago
Tell me you didn't pay for your monitor without telling me you didn't pay for your monitor…
@muhammadsiddiqui2244 15 days ago
It's the first time I have seen a computer scientist "outside" 🤣
@MatthewHoworko 16 days ago
Can we just appreciate how you don't hear the wind in the audio for the outside demo section
@bengoodwin2141 16 days ago
They mentioned Minecraft using Ray tracing, there is an experimental version of the game that uses it, but the main version of the game still uses rasterization. There are also fan made modifications that add ray tracing and/or extra shaders to make the game look nicer as well.
@DavidDLee 15 days ago
How much of this printer paper do you still have? Do you still use a printer which takes them?
@MKBlackbird 16 days ago
Ray tracing in modern games is optimized by shooting fewer rays and then "guessing" how it would have looked with all the rays, using AI. It is super cool how that makes ray tracing actually feasible for games. Now another interesting approach is bringing in a diffusion model: either dreaming up the final image from a color-coded segmentation of the scene (a cheaply rendered rasterization), or just adding a final touch on top of a normally rendered frame. I imagine diffusion models and other similar approaches will become fast enough to actually make this possible. It would be like SweetFX with style transfer.
@jonsen2k 16 days ago
We're still using rasterization at the bottom as well, aren't we? I thought ray tracing was only used to get shadows and reflections and stuff more lifelike on top of the else rasterized frame.
@MKBlackbird 16 days ago
@@jonsen2k Yes, that's true.
@jcm2606 15 days ago
AI was really only introduced recently with NVIDIA's ray reconstruction, and even that more so seems to be just a neural network performing the same work that a traditional denoiser does. Outside of RR, games tend to use traditional denoisers like SVGF or A-SVGF, which don't use any AI at all. Typically they'll use an accumulation pre-pass, where raw raytracer outputs are fed into a running average spanning multiple frames (anywhere from half a dozen to possibly 100+ frames) to try to gather as many samples as possible across time, then they'll feed the output of the accumulation pre-pass into a series of noise/variance-aware spatial filters which selectively blur parts of the image that are considered to be too noisy.
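The accumulation pre-pass described above boils down to a running average over frames. A toy sketch (alpha and the frame values are made up; real denoisers also reproject and validate samples across camera motion, which this ignores):

```python
# Blend each new noisy frame into an exponential running average,
# trading a little temporal lag for much less flicker in the signal.

def accumulate(frames, alpha=0.1):
    avg = frames[0]
    for frame in frames[1:]:
        avg = (1 - alpha) * avg + alpha * frame
    return avg

# A noisy but unbiased signal flickering around 0.5 settles near 0.5:
noisy = [0.9, 0.1] * 50
print(accumulate(noisy))
```

Smaller alpha means a longer effective history: smoother output, but more ghosting when the scene changes.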
@DripDripDrip69 14 days ago
@@jonsen2k There are different levels of implementation: some games only have a few ray traced effects like RT shadows and RT reflections, some have RT global illumination to replace light-probe-based rasterization GI (the most transformative RT effect imo), so they have ray traced effects slapped on top of a rasterized image. Some go all out with path tracing like Cyberpunk and Alan Wake 2; those have very few rasterized components apart from the primary view, and if I'm remembering it right, in Portal RTX and Quake II RTX even the primary view is ray traced, so there's no rasterization whatsoever.
@user-lh6ig5gj4e 7 days ago
When they went outside, I was expecting them to say that they had discovered that grass exists
@kenjinks5465 16 days ago
Instead of rays, could we just trace the frustum? Look for plane/frustum intersections, fragment the frustum by the plane intersections, generate new frusta from the planes... return with vectorized output, not rasterized; much smaller?
@robertkelleher1850 16 days ago
I'm surprised we can see anything with all the fingerprints that must be on that monitor.
@sbmoonbeam 16 days ago
I think your modern games rendering pipeline for something like Cyberpunk 2077 will be a blend of these techniques: a physically based rendering (PBR) pipeline using a rasterizing pathway, enhanced by ray tracing (using your RTX/compute pipeline pathway) to calculate reflection and refraction effects, rather than calculating every pixel in the render from first principles.
@TESRG35 15 days ago
Wouldn't you need to also trace a line from the shadow back to the light source *through the mirror*?
@jcm2606 14 days ago
Yes, though typically it's not framed that way. Typically with this style of raytracing (recursive raytracing) you "shift the frame of reference", so to speak, each time you start "processing" a new ray, so it's more like tracing a line from the mirror to the floor just in front of the blue cube, then tracing a new line back to the light source and "passing the light" back to the mirror and eventually back to the camera.
@bw6378 16 days ago
Anyone else remember POVray from way back when? lol
@Roxor128 16 days ago
Done a lot of mucking around with it. Even produced a few scenes that look nice.
@zxuiji 16 days ago
4:23, uh, couldn't you just check the fragment's position BEFORE deciding to colour it? You calculate the position, check it against the depth buffer; if it's not nearer you just move on to the next fragment, otherwise begin identifying what colour to give it.
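What the comment describes is essentially the early depth test. A minimal sketch of the idea (names are mine; GPUs do this in fixed-function hardware, not in shader code):

```python
import math

# Do the cheap depth comparison first, and only run the (potentially
# expensive) shading for fragments that survive it.
def render_fragment(depth_buffer, color_buffer, i, frag_depth, shade):
    """Write a fragment only if it is nearer than what's already stored."""
    if frag_depth >= depth_buffer[i]:
        return False           # hidden: skip shading entirely
    depth_buffer[i] = frag_depth
    color_buffer[i] = shade()  # expensive work happens only here
    return True

depth = [math.inf]
color = [None]
calls = []
render_fragment(depth, color, 0, 0.5, lambda: calls.append(1) or "red")
render_fragment(depth, color, 0, 0.8, lambda: calls.append(1) or "blue")  # farther: skipped
```

Note the caveat that applies in practice: fragments arriving nearest-first shade once, but in the worst (back-to-front) order every fragment still passes the test.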
@flippert0 9 days ago
Plot twist: it's UK, so the brightly lit day outside was of course all CGI
@michaelsmith4904 16 days ago
how does this relate to ray casting?
@zxuiji 16 days ago
5:50, it occurs to me at this point that the fragments could be a way of checking for objects in the way. If, for example, you've already calculated a fragment to be nearer to the camera than the one you're processing, but the one you're processing should have affected the fragment that's been chosen, you could just take on that fragment's details and apply the reduction in lighting before finally overriding the nearer fragment with the current fragment that's now masquerading as the original. Naturally, if the fragment you're working on is nearer, you can just apply the lighting reduction normally and carry on.
@jcm2606 15 days ago
This is an actual technique and is called screen-space raytracing or screen-space raymarching. Basically, a ray is marched across the screen's depth buffer until it either reaches its destination or the ray goes behind a pixel (which is determined by comparing the ray's depth to the pixel's), with different methods of handling each case depending on what it's being used for. The problem with this technique is that you don't know how "thick" the pixel is, so you don't know if you've _actually_ hit the object that the pixel belongs to or if you're sufficiently far behind it to not have hit it. You can sort of approximate the thickness in a couple different ways, but you'll always end up with false positives where the ray thinks it missed the object when it actually hit it (or vice versa), plus you don't know what other objects are behind the pixel so you don't know if you've hit some other object. For that reason it's not the best technique to use for shadows (it can work sometimes so some games do use it as a fallback to a main shadowing technique), but it _is_ commonly used for reflections since, for most surfaces, reflections are primarily visible at grazing angles where the limitations of the technique aren't as painful to deal with (doing reflections this way is called screen-space reflections).
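A toy, 1D version of the screen-space march described above, including the thickness heuristic, might look like this (all names and the 1D simplification are mine; real implementations march in 2D against a hierarchical depth buffer):

```python
# depth_buffer[x] holds the nearest surface depth at column x; we step
# the ray and compare its depth against the buffer at each column.
def screen_space_march(depth_buffer, x, depth, dx, ddepth, steps, thickness):
    """Return ('hit', column) or ('miss', None)."""
    for _ in range(steps):
        x += dx
        depth += ddepth
        xi = int(x)
        if not 0 <= xi < len(depth_buffer):
            return ('miss', None)  # ray left the screen
        surface = depth_buffer[xi]
        if surface <= depth <= surface + thickness:
            return ('hit', xi)     # within the assumed object thickness
        if depth > surface + thickness:
            return ('miss', None)  # behind the pixel: the false-miss case
    return ('miss', None)

# A shallow ray lands on the nearer surface at columns 4-5; a steep ray
# dives past the thickness window and reports the false miss described above.
buffer = [10.0] * 4 + [2.0, 2.0] + [10.0] * 2
shallow = screen_space_march(buffer, 0, 1.0, 1, 0.25, 16, 0.5)
steep = screen_space_march(buffer, 0, 1.0, 1, 5.0, 16, 0.5)
```

The `thickness` constant is exactly the approximation the comment describes: too small and real hits are skipped, too large and rays stop on objects they actually passed behind.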
@zxuiji 15 days ago
@@jcm2606 Damn YT and its shadow deleting. I replied to this once and my post still isn't there. I basically said both problems are solvable: the pixel thickness by distance from the camera, and the multiple-object thing by a shadow pixel that accumulates pixel values to apply.
@jcm2606 15 days ago
*"The pixel thickness by distance from camera"* This is a very crude approximation of thickness, more crude than the industry standard of just having a fixed depth threshold. The thickness is meant to be measuring how long the object is along the screen's Z axis, so basing it off of the distance to the camera isn't correct as distance to the camera doesn't affect how long the object is along the screen's Z axis (ignoring perspective, which is already accounted for by coordinate space transformations). *"the multiple object thing by a shadow pixel that accumulates pixel values to apply"* This won't do anything at all as what we're looking for is what range of depths actually contain objects. Say we had a depth buffer that was in the range [0, 1]. We had three objects positioned at the current pixel, with the first object occupying depth range [0.1, 0.2], the second object occupying depth range [0.4, 0.6], and the third object occupying depth range [0.7, 1.0]. What we'd like to know is if a ray has hit an object (ie is in one of those three ranges) or has missed all objects (ie is outside of those three ranges), but the problem is that the depth buffer can only store the _minimum_ value of all three of these ranges, which in this case is 0.1. Even though we know based on intuition that there's gaps between each of these ranges, the depth buffer can't store anything more than the minimum of all ranges, so the ray only ever sees that there's nothing in the depth range [0, 0.1] and something in the depth range [0.1, 1.0]. It has no idea what's behind depth 0.1. There are a few different techniques that try to address this, but none are perfect. The simplest technique would be deep depth buffers which allows you to store multiple depth values in a depth buffer as separate depth samples. 
This would let you at least store multiple minimums to get a better idea of the scene composition (especially if you were to combine deep depth buffers with a dedicated back face pass to store the depths of back faces in addition to front faces, letting you get both parts of each object's depth range), but it limits you to a specific number of depth values (2, 4, 8, 16, 32, etc) and each new depth value you add increases the memory footprint of the depth buffer by an additional 1x (ie 10 depth values = 10x memory footprint), so it's impractical for this alone (since deep depth buffers were intended to be used with transparent objects, so using them for screen-space raytracing would add even more memory usage).
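The depth-range example above can be made concrete. Assuming the three intervals from the comment, a deep depth buffer (front- plus back-face depths) could answer the in-or-out question that a single-minimum depth buffer cannot:

```python
# Each pixel keeps a list of [near, far] depth intervals, one per object,
# instead of a single minimum depth value.
def hit_any_range(ranges, depth):
    """True if a ray at this depth lands inside any occupied interval."""
    return any(near <= depth <= far for near, far in ranges)

# The three objects from the example: [0.1, 0.2], [0.4, 0.6], [0.7, 1.0].
ranges = [(0.1, 0.2), (0.4, 0.6), (0.7, 1.0)]
```

A plain depth buffer stores only the minimum, 0.1, so it reports "something" for every depth past 0.1, including 0.3, which actually lies in a gap between objects.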
@zxuiji 15 days ago
@@jcm2606 Still reading your message, but pixels do not need to know the object length; they're always a fixed size, and since normalisation simplifies logic (you can apply the camera dimensions after), it's always better to treat the pixels at the camera as 1x1x1 and scale down (+Z) or skip (-Z) based on distance from the camera.
@zxuiji 15 days ago
@@jcm2606 For the depth thing the shadow pixel DOES work; remember each pixel starts off assuming it has no objects in front of it. If it's further from the camera it's hidden by the pixel in front, so its values just get added to the accumulator. If it's closer, then its values get added to the accumulator and the colour values are set with 0 light applied yet. Once all camera pixels and shadow pixels have been set, the camera pixels alone are looped over and the accumulated values are applied to the colour of each pixel. So if fragment 1 takes 0.1 light, 2 takes 0.3 and 3 takes 0.4, then there's 0.2 left to multiply against the colours. I'll move on to reading the last chunk of your message now.
@fttmvp 16 days ago
Lol for a second I thought it was Lord Miles from a quick glance at the thumbnail.
@ares106 16 days ago
6:30 but can you imagine actually going outside?
@agoatmannameddesire8856 16 days ago
Do ambient occlusion next :)
@cannaroe1213 16 days ago
No yur an ambient occlusion.
@Gosu9765 16 days ago
I don't think this video conveyed the subject well. What I got from it is that RT looks much better (though as explained it seemed like you can only get Minecraft-level graphics with rasterizers, which is far from the truth) and that RT is slower, but simpler. Is it? All the ways to optimize BVH structures, accumulate traces across frames for real-time graphics, and then clean up artefacts from all the shortcuts you took. Studios clearly struggle with this; it's probably one of the hardest things to pull off well in the gaming industry now when it comes to graphics. What really should have been shown are the limitations of rasterisation techniques, as that's the reason the industry is moving towards RT now: SSR occlusion artefacts, shadow map resolutions, light bleeding through objects, etc. (no need to explain those; they could simply be shown). The only thing that was shown was that RT can reflect things that are not in screen space, but even that wasn't explained as a limitation of rasterisation. As a simple gamer I'm kinda disappointed, as I've seen other simple gamers explain this much better.
@mouhamadouseydoudiop8957 16 days ago
Nice
@Scenesencive 16 days ago
Field trip daaaayyyy!
@Scenesencive 16 days ago
Interesting video. I am fairly familiar with the subject; felt like there was maybe just one interesting key point of ray tracing missing, or even of light in general, which is of course GI (indirect lighting): that in theory we would like to cast, at every hit point, an infinite number of new rays, and again at every single hit point, and again, etc. Ultimately every surface is basically a very rough form of a mirror, which results in the underside of the car not being completely black even though there is no direct ray to any light source, and the wall next to the bright red cube having a red-tinted ambient light. In games, other than reflections, this seems the most frequent use case for RT, especially when baking lightmaps is challenging, like in open-world games. Probably the subjects you mention in the 1-hour director's cut! gg
@HeilTec 16 days ago
Reflected light is the hardest.
@paulmitchell2916 15 days ago
Has anyone heard of ray tracing used for enhanced audio reverb?
@Ceelvain 15 days ago
Looks like this video raises a lot of emotions. I hope you see there's a content mine here. ^^
@jeromethiel4323 16 days ago
What's being described here is what I have always heard referred to as "ray casting", because it eliminates a lot of unnecessary calculations that used to be done with ray tracing. Ray tracing, classically, traced the rays from the light source, which is inefficient, since a lot of those rays will never hit the "camera". I remember ray tracing software on the Amiga, and it was glacially slow, while a similar program using ray casting was much, much faster.
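The efficiency point in this comment, shooting rays from the eye so none are wasted, starts with generating one camera ray per pixel. A hedged sketch for a pinhole camera looking down -z (function name and conventions are mine):

```python
import math

# Every ray we pay for here can contribute to a pixel, unlike rays shot
# from the light, most of which never reach the camera.
def camera_ray(px, py, width, height, fov_deg=90.0):
    """Unit-length view ray through the centre of pixel (px, py)."""
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2.0)
    # Map the pixel centre to [-1, 1] normalized device coordinates.
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
    y = (1.0 - 2.0 * (py + 0.5) / height) * scale
    length = math.sqrt(x * x + y * y + 1.0)
    return (x / length, y / length, -1.0 / length)
```

The centre pixel of an odd-sized image maps straight down the view axis, and every ray comes out unit-length, ready for intersection tests.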
@4santa391 16 days ago
am I the only one getting triggered by the screen touching? 😆 16:15
@realdarthplagueis 16 days ago
Agreed.
@rich1051414 16 days ago
I am not sure how he has so much marker ink on his hands as well... There is something about this guy that triggers me 100 different ways, but I am trying to keep it bottled down.
@aquacruisedb 16 days ago
Wonder if there is anyone old enough among Computerphile viewers to remember what that continuous-feed dot-matrix paper is actually for?! (other than marker-pen crazy-idea sketches)
@mettemfurfur7691 16 days ago
cool
@ishaan863 15 days ago
11:11 something in the way
@dinsm8re 16 days ago
couldn’t have asked for better timing
@4.0.4 16 days ago
I had to do a double take to make sure this video wasn't from 2018.
@elirane85 16 days ago
I remember, more than 20 years ago, I was learning to program video games, so I read a book about "real-time graphics". The first chapter was about ray tracing; it was only about 5 pages, and it basically covered most of the theory behind it and even had a full implementation which was less than 1 page long. But at the end of the chapter it said something like: "But this approach can take hours to render a single frame, so this technique is only good for pre-rendering on massive server farms, and the next 300 pages will teach you how to fake it" 😋
@AlmerosMusicCode 16 days ago
That's a fantastic approach for explaining the subject! Must have been a great read.
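In the same spirit as that one-page implementation, the core of the theory really does fit in a few lines: a ray-sphere intersection plus Lambert shading. A sketch under my own naming, not the book's code:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along a normalized ray, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_dir):
    """Intersect, then Lambert (N dot L) shading for the hit point."""
    t = ray_sphere(origin, direction, center, radius)
    if t is None:
        return 0.0  # background
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

Looping this over every pixel's camera ray yields a shaded sphere; the "next 300 pages of faking it" existed because doing this per pixel, per light, per bounce in real time was infeasible at the time.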
@megamangos7408 16 days ago
Wait, is this the first Computerphile where they actually went outside to touch grass?
@W1ngSMC 15 days ago
Are you calculating the raytraced scene on a CPU? No way it's that slow on a GPU.
@RayRay-kz1ms 13 days ago
This is under the assumption that light travels instantaneously, which is sometimes inaccurate.
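For room-scale scenes the instantaneous-light assumption is safe by many orders of magnitude, as a quick back-of-envelope check shows (the scene size and frame rate are arbitrary assumptions of mine):

```python
C = 299_792_458.0        # speed of light, m/s
scene_size = 10.0        # metres: a generous room-scale scene
frame_time = 1.0 / 60.0  # seconds per frame at 60 fps

# Time for light to cross the whole scene: tens of nanoseconds.
transit = scene_size / C
```

The error only starts to matter at scales where light's travel time approaches the simulation timestep, e.g. astronomical distances.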
@fotoni0s 9 days ago
Thumbs up for Rubber duck debugging :p
@trevinbeattie4888 16 days ago
Let's go outside In the sunshine I know you want to