Intel’s New AI: Amazing Ray Tracing Results! ☀️

125,505 views

Two Minute Papers

1 day ago

Comments: 326
@Ila_Aras 2 years ago
This denoising technique is absolutely fantastic! 200 frames per second? That's unbelievable. I really hope Blender gets its hands on this technique. Denoising is very important, but doesn't get as much attention as it should. I'm so glad to see how it's evolved. Thank you 2MP, and of course, what a time to be alive!
@TwoMinutePapers 2 years ago
You are very kind, thank you so much! 🙏
@petterlarsson7257 2 years ago
It's pretty sad that the most amazing technology isn't open source; imagine how much fun people could have with it.
@ty_teynium 2 years ago
Thank you for mentioning Blender. I was thinking the same thing since my Blender renders and animations always have some noise in them. I'd love to do some more animating if not for this issue. I do hope it gets released soon.
@little_lord_tam 2 years ago
@@petterlarsson7257 No. No one would have the money to develop the tools, so they wouldn't exist.
@ClintochX 2 years ago
@@petterlarsson7257 Bro, what you're looking at is OIDN (Intel Open Image Denoise), and it's completely free and open source. Blender already uses it, but now it's time for an upgrade.
@arantes6 2 years ago
I'd appreciate just a small explanation of the new idea in the techniques presented in these videos: how it works under the hood, in a few sentences. Is it a neural network where the previous techniques were hand-crafted? Is it a new kind of neural net architecture? Is it just a math operation that nobody thought of applying to this problem before? Just a glimpse behind the technical curtain, instead of just the results, would make these amazing videos even better!
@amoliski 2 years ago
Sounds like we need Two Minute Papers for a quick overview and Five Minute Papers for a slightly more technical explanation. Maybe have the paper author do a quick interview showing their work if they're available?
@liambury529 2 years ago
The paper is called "Temporally Stable Real-Time Joint Neural Denoising and Supersampling", and if you're that curious, the link to the paper is in the video description.
@itsd0nk 2 years ago
I second this motion.
@facts9144 2 years ago
@@liambury529 Wouldn't hurt to give an extra little bit of info in the video though.
@OmniscientOCE 2 years ago
@@facts9144 I concur.
@StanleyKubick1 2 years ago
Next-gen Arc GPUs are starting to look pretty desirable.
@danielb.4205 2 years ago
A direct comparison between Intel's and NVIDIA's approach (OptiX) would be interesting.
@woodenfigurines 2 years ago
I think and hope you'll get your comparison a few days after this hits the market in intel cards :D
@Dayanto 2 years ago
It's not really fair to compare with the outdated SVGF algorithm when the follow-up A-SVGF paper solved the main shortcoming of the original paper (significant ghosting/smearing during motion or lighting changes) already 4 years ago. For example, A-SVGF is what was used in Quake 2 RTX back when real-time ray tracing was still new.
@raylopez99 2 years ago
There's no fair in science. Peer review is very unfair at times and ruthless. Just the way it works. Speaking as a two minute scholar who does not read the original sources.
@Exilum 2 years ago
It was labelled as SVGF, but also as being from 2020. More likely to be a variation of it. SVGF is a 2017 paper, A-SVGF is a 2018 paper.
@Dayanto 2 years ago
@@Exilum The one from 2020 was a different paper labeled "NBG". (Neural Bilateral Grid)
@Exilum 2 years ago
@@Dayanto ok, my bad on that
@alegzzis 1 year ago
I'll add more: it's not fair because NVIDIA has an even newer denoiser, NVIDIA NRD.
@black_platypus 2 years ago
We barely ever hear about the inner workings anymore :( Please bring back at least a structural overview or abstract! Two Minute Papers > Two Minute "Neat, isn't it"! 😊
@draco6349 2 years ago
I want to see this used in tandem with ReSTIR. An amazing path-tracing algorithm that leaves barely any noise, combined with an amazing denoiser, should be able to get breathtaking results, and even better, in real time.
@draco6349 2 years ago
@@dylankuzmick3122 That's actually so cool. I might literally just install that, real-time path-tracing in Blender is something I've always wanted.
@xamomax 2 years ago
It seems that neural network based denoising could benefit from an input reference image rendered very fast without raytracing, but with high detail. Then, this high detail image plus the raytracing image can be used by the denoiser to get the lighting correct without washing out the detail.
@brianjacobs2748 2 years ago
that sounds cool but it wouldn’t be as authentic
@xamomax 2 years ago
@@brianjacobs2748 depends on how it is trained.
@circuit10 2 years ago
@@brianjacobs2748 More authentic than no raytracing at all, which is the alternative
@circuit10 2 years ago
I think this would also be really good for DLSS 3 because the frame could be generated without much latency based on actual data rather than interpolating between two frames
@MRTOWELRACK 2 years ago
The neural network is trained according to reference input. The renderer already knows the geometry and blurs the noise accordingly.
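The guided-denoising idea in this thread (use a cheap but detail-rich image to steer the smoothing so texture detail isn't washed out) can be sketched with a classic cross-bilateral filter. This is an illustrative stand-in for the general concept, not the network from the paper; the function and its parameters are hypothetical:

```python
import numpy as np

def cross_bilateral_denoise(noisy, guide, radius=2, sigma_s=2.0, sigma_g=0.1):
    """Smooth `noisy` (H x W) using edge weights taken from a clean `guide` image.

    Pixels are averaged over a (2*radius+1)^2 window; neighbours whose guide
    value differs a lot contribute almost nothing, so guide edges survive.
    """
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        diff = guide[ny, nx] - guide[y, x]
                        wg = np.exp(-(diff * diff) / (2 * sigma_g ** 2))
                        acc += ws * wg * noisy[ny, nx]
                        norm += ws * wg
            out[y, x] = acc / norm
    return out
```

With a rasterized albedo or normal buffer as `guide`, noise in flat regions is averaged away while guide edges stay sharp, which is roughly the role such auxiliary features play when fed into a denoising network.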
@myNamezMe 2 years ago
Impressive progress, will be interesting to see the next paper.
@sebastianjost 2 years ago
Absolutely fantastic! In some of the examples I can definitely still see room for improvement and a follow-up video, but this is still a remarkable improvement. However, I find some of the comparisons a bit difficult to interpret. I wish the same videos with different techniques were shown side by side rather than one after the other. Ideally even with just one video and a moving split showing the two techniques rendering parts of the video.
@roccov3614 2 years ago
This is a brilliant idea for real-time games. Mixing a quick light-transport render with a quick light-transport-optimised noise filter, for quality real-time output, is brilliant.
@13squared2009 2 years ago
I’d love a video taking us under the covers to see what it is these authors are actually doing to improve these techniques… I just don’t see myself reading the actual papers as a casual viewer. Love your videos!
@juliandarley 2 years ago
i would love to know if or when this could be applied to Blender/CyclesX so that 1) we can have much reduced render times of photoreal scenes and 2) reasonably good caustics.
@shmendez_ 2 years ago
Wait wait wait bro how is your comment 7 hr old but the video is only 3 min old??
@Ben_CY123 2 years ago
@@shmendez_ em….maybe timezone issue?
@nullneutrality8047 2 years ago
@@shmendez_ videos are shown early on patreon
@MattPin 2 years ago
Honestly, once this gets included in Blender it's going to be very good. I can see this AI denoiser definitely helping render frames faster. It would be very cool to see a comparison of this technique with their other denoiser, Open Image Denoise.
@computerconcepts3352 2 years ago
yeah lol
@Sekir80 2 years ago
I'd like to see the result of something slower: say, give the renderer 1 second, so the input is way less noisy; maybe that way the upsampled result is closer to the reference. For clarity: I'm not really interested in real-time image generation if we are talking about minute/hour/day-long rendering challenges. I'm interested in great-quality results.
@Sekir80 2 years ago
@michael Interesting insight! I was more cynical about 3D visualization and figured I'd end up doing some crappy commercials, which I disliked, so I never entered this space. Mediocrity. I even see it in AAA games: for example, I tend (past tense) to model stuff for a specific rendering quality, so if I see a racing simulator where the steering wheel is visibly an n-gon I just scoff. I'd rather spend the polygon budget on the most obvious things. Maybe I'm weird.
@Sadiinso 2 years ago
You forgot to mention that the 200 fps (5.19 ms per frame, as shown in the paper) was observed when running on an RTX 3070. Performance and runtime are not absolute measures; they are relative to the hardware on which the workload is running.
@AIpha7387 2 years ago
5:42 It seems to me that the reflective material on the surface is being ignored. It was removed during de-noising.
@AIpha7387 2 years ago
It is completely different from the gloss the developer intended. This can't be used in games.
@harnageaa 2 years ago
I think they might write a follow-up paper adding reflections for all types of materials, so you'd have two algorithms in one; that would probably solve the issue.
@notapplicable7292 2 years ago
It's rather incredible how many times I've listened to him explain noise in ray-traced lighting.
@aidanm5578 2 years ago
Finally, I don't have to go 'outside' to see realistic lighting.
@zzoldd 2 years ago
A redditor's true dream
@mosog8829 2 years ago
I'm glad this is going to be implemented in Blender, as they are already working on it.
@juliandarley 2 years ago
can you provide a link, pls?
@juliandarley 2 years ago
@@mosog8829 many thanks. have not looked at the 3.4 alpha yet. release notes say that path guiding works only with CPU, but GPU support is coming in future. may be worth doing some comparison tests with CPU only, but obviously the real gain will be with GPU.
@mosog8829 2 years ago
@@juliandarley welcome. Indeed. It will be even better if it's possible to combine CPU and GPU.
@kleyyer 2 years ago
You might be confusing this with the new Pathguiding feature for fireflies that will be implemented into Cycles in the future. I have seen absolutely nothing about this new denoising being implemented into Blender
@juliandarley 2 years ago
@@kleyyer it wasn't my suggestion, but i can ask blender hq about it. i did look at the new alpha and it did not look the same as what is shown here. i still keep hoping for good caustics from cycles. for me, needing photoreal renders, it is the number one thing that lets cycles down.
@jinushaun 2 years ago
The fact that it’s able to reconstruct the no smoking sign from that noisy mess is mind blowing.
@Eternal_23 2 years ago
This + ReSTIR = real-time CG
@erikals 2 years ago
4:45 info; this paper is from 2020. (not 2014)
@bryanharrison3889 2 years ago
I love these videos... even if I WASN'T a 3d animator, I'd still enjoy them because of Karoly's passion for the subject of A.I., computer graphics, and machine learning.
@MarshalerOfficial 2 years ago
Those 2 years ago feel like 20 years, imo. Sharpening images is the new sexy again. But that doesn't mean ray tracing was boring either; it's damn impressive where we've come from. What a time to be alive, boys and girls!
@joshuawhitworth6456 2 years ago
I can't see this getting much better. What will be getting better is unbiased render engines that will one day simulate light in real time without filtering techniques.
@MRTOWELRACK 2 years ago
That would be brute force path tracing, which is orders of magnitude more demanding and not viable with existing technology.
@fanjapanischermusik 2 years ago
When can I expect this to be used in my smartphone? While taking photos, for example. Looks really good.
@blackbriarmead1966 2 years ago
I'm taking a computer graphics course this semester. After making a CPU (no lighting or raytracing) renderer that takes a few seconds to render one frame of a very simple scene, the sheer immensity of the number of calculations required for realistic scenes set in. There is a whole stack of technology to make stuff like this possible, from the 5 nm silicon, GPU architecture, software optimization, and creative techniques to reduce the number of calculations necessary. And the layperson takes all of this for granted when they play cyberpunk 2077 on their 4090
@JimBob1937 2 years ago
Not to mention the entire electronics platform that it is running on. The signal transmission to the monitor... the monitor technology. Yeah, people's heads would explode if they attempted to comprehend the technology and functioning it takes for a single frame of a game to be shown to them.
@custos3249 2 years ago
Hold up. At 5:08 and more at 5:14, is that noise I see in the reference that the program removed?
@adrianm7203 2 years ago
It would have been nice if this video talked about how this was accomplished. The results are interesting but I'm more interested in how it works...
@michaelleue7594 2 years ago
It's called 2-minute papers, not 2-semester papers.
@alex15095 2 years ago
@@michaelleue7594 Cut out a minute of the same footage that was looped 22 times in the video, and roughly explain what architectures they're using, how it was trained, show some diagrams or graphs from the paper that people can pause to have a deeper look, etc
@ryanmccampbell7 2 years ago
I wonder if these techniques can be combined with traditional rasterization-based rendering, which should produce nice sharp edges and avoid any blurriness produced by the denoising. I imagine if you run both a coarse light simulation and a rasterization step, then combine the results with a neural network, you can get the best of both worlds.
@alexmcleod4330 2 years ago
It's already rasterised as a first step, to produce a G-buffer of surface normals, texture & colour information. That's done with Unreal Engine 4, in this case. That rasterised layer then informs the raytracing, supersampling and denoising steps. The key idea here is that they're combining the supersampling and denoising into one process. It seems like they didn't really want to pre-process the reference scenery too much, but you could probably get a better balance of sharpness & smoothness by giving the hero meshes some invisible texture maps that just say "stay sharp here, it's a human face", and "don't worry about this, it's just the floor".
@ryanmccampbell7 2 years ago
@@alexmcleod4330 I see, interesting.
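The pipeline order described in this thread (rasterize a G-buffer first, ray trace at a low sample count, then jointly denoise and upsample) can be sketched as a skeleton. The function names, the random stand-in data, and the 2x scale factor are illustrative assumptions, not the paper's actual interfaces:

```python
import numpy as np

def rasterize_gbuffer(h, w):
    # Cheap raster pass: per-pixel albedo and normals (random stand-in data here).
    return {"albedo": np.random.rand(h, w, 3), "normal": np.random.rand(h, w, 3)}

def trace_low_spp(h, w, spp=1):
    # Noisy low-sample-count path-traced radiance at the low render resolution.
    return np.random.rand(h, w, 3)

def joint_denoise_upsample(radiance, gbuffer, scale=2):
    # Stand-in for the joint network: nearest-neighbour upsampling modulated by
    # albedo, just to show the data flow (NOT the real model).
    up = radiance.repeat(scale, axis=0).repeat(scale, axis=1)
    albedo_up = gbuffer["albedo"].repeat(scale, axis=0).repeat(scale, axis=1)
    return up * albedo_up

def render_frame(h=720, w=1280):
    g = rasterize_gbuffer(h, w)       # 1. rasterize
    noisy = trace_low_spp(h, w)       # 2. ray trace, few samples per pixel
    return joint_denoise_upsample(noisy, g)  # 3. one combined denoise+upsample step
```

The point of the skeleton is the single third step: denoising and supersampling are one pass fed by the G-buffer, rather than two separate post-processes.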
@drednac 2 years ago
Wow, this is mind-blowing. The progress AI algorithms have made in recent times is literally unbelievable. Every day there is less and less reason to focus on anything in technology other than machine learning.
@michaelleue7594 2 years ago
Well, ML is great but its most interesting applications are when it gets used to focus on other things in technology. The field of machine learning itself is advancing rapidly, but there's a wide world of applications for it in other tech fields that are even more fertile ground for advancements.
@drednac 2 years ago
@@michaelleue7594 Years back I was working on a light transport algorithm that ran in real time on the GPU and allowed dynamic lighting and infinite bounces. Now when I look at these advancements I see where things are going. The upscaling tech, and AI "dreaming up" missing details better than a human, is a total game changer. Also the ability to generate content. We truly live in exponential times; it's hard to catch up. Whatever you work on today that seems interesting will be obsolete by next Friday.
@JorgetePanete 2 years ago
@@drednac It's unbelievable that that's not even an exaggeration, and we don't even know what we'll have once quantum and photonic computing open new possibilities.
@drednac 2 years ago
@@JorgetePanete We know what comes next .. Skynet :DDDD
@JorgetePanete 2 years ago
@@drednac Once fiction, soon reality: it just takes one man to connect a robot to the internet, put in an AI that transforms natural language into instructions, and add a task system to let it discover how to do anything and go do it instantly.
@AronRubin 2 years ago
Do we no longer collect per-fragment info on how fully a ray (or rays) was incident? It seems like you could just in-paint low-ranking fragments instead of smoothing the whole scene.
@sikliztailbunch 2 years ago
If the new method allows for 200 fps in that example scene, wouldn't it make sense to limit it to 60 fps and gather even more samples in the first phase before denoising?
@JimBob1937 2 years ago
There is still other computation to occur within that frame that you have to account for... like the entirety of the rest of the scene rendering (most applications will be more complex), and the rest of the game (since they're obviously targeting interactive applications). The 200 FPS isn't a goal of some sort of final application, this is just an example of the algorithm running, and the faster it achieves this the better. The actual applications that implement the algorithm would be the ones to make that tradeoff decision.
@radekmojzis9829 2 years ago
I would like to see just the denoising step compared to the reference at the same resolution... I cannot help but feel that even the new method outputs a blurry mess without any high-frequency details anywhere in sight; which is to be expected, since the technique has to reconstruct the "high resolution" by conjuring 4 new pixels for every pixel of input.
@goteer10 2 years ago
A couple of comparisons are shown in this video, and it's exactly as you say. Looking at the spaceship example is the most obvious: the side panels have some "dots" as either geometry or texture, and the noisy+filter version doesn't reflect this detail. It's minor, but it goes to show that you still can't use this in a professional manner.
@radekmojzis9829 2 years ago
@@goteer10 I don't think it's minor; it's significant enough for me to prefer the look of traditional rendering techniques at that point.
@MRTOWELRACK 2 years ago
@@radekmojzis9829 To be fair, this could be combined with traditional rasterization.
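The "conjuring 4 new pixels for every pixel of input" figure from this thread checks out: going from 720p to 1440p doubles both dimensions, so each input pixel must account for four output pixels. A quick sanity check:

```python
def pixels_per_input_pixel(src_wh, dst_wh):
    # How many output pixels each input pixel must account for after upsampling.
    # Arguments are (width, height) tuples.
    return (dst_wh[0] * dst_wh[1]) / (src_wh[0] * src_wh[1])

print(pixels_per_input_pixel((1280, 720), (2560, 1440)))  # 4.0
```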
@facenameple4604 2 years ago
The reference simulation at 3:14 is actually NOISIER than the denoising algorithm.
@t1460bb 2 years ago
What a time to be alive! Excellent work!
@OSemeador 2 years ago
Initially it looked like a temporal motion blur with an upscaling pass on it but the last comparison result shows there is more to it than initially meets the eye... pun intended
@JCUDOS 2 years ago
Absolutely love your videos! Is there an actual 2-minute or even 60-second "short" version of these videos (the name of the channel made me wonder)? If not, would you mind if someone else made them? I love information density, and without the repeated parts, and speaking slightly faster, I think those videos would make wonderful Shorts!
@harnageaa 2 years ago
I think he has a contract with sponsors to post every video + sponsor, and that probably means Shorts too (since they are on the video page), so it's probably not going to happen, at least on this channel.
@michaeltesterman8295 2 years ago
What are the hardware requirements, and how intensive is it on the cards?
@levibruner617 2 years ago
This is impressive. I hope someday we can use this technique to make Minecraft look more realistic with light. Right now the best we can do in Minecraft is ray tracing. I apologize for spelling and grammar; this is very hard for me to type because I'm partially deaf-blind. Hope you have a great day and keep on learning.
@RealCreepaTime 2 years ago
Would it be worth it for the light simulation to run even longer than 4-12 milliseconds? Allowing a less noisy base, resulting in higher-quality output? Or would it just be negligible / not worth it?
@speedstyle. 2 years ago
The light simulation quickly plateaus, which is why it takes hours to get the (still noisy) reference. Maybe with a faster computer you could choose to spend an extra few ms on raytracing, but for realtime if it takes 10ms to raytrace and 5ms to denoise/upscale that's already only 66fps. Once you add physics simulations and game mechanics you won't want to decrease performance much further. For non-realtime applications like films it doesn't matter, people will spend hours per frame then run this denoiser on the 'reference'.
@RealCreepaTime 2 years ago
@@speedstyle. Ah, thank you for the response! That makes sense, and yeah, by video game standards you probably wouldn't want to do that. I will say my intentions for what it would be used for are a bit different, but similar to the last portion you mentioned. I was thinking of using it in rendering software, like the person in the pinned comment mentioned. It might be helpful for denoised previews in the viewport.
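The budget arithmetic in the reply above is easy to sandbox. The 10 ms tracing / 5 ms denoising figures are the ones assumed in that comment, and 5.19 ms is the per-frame denoiser time quoted from the paper elsewhere in the thread:

```python
def max_fps(*stage_times_ms):
    """Upper bound on frame rate when the listed per-frame stages run back to back."""
    return 1000.0 / sum(stage_times_ms)

print(int(max_fps(10, 5)))  # 66: ~66 fps once 10 ms of tracing + 5 ms of denoising are spent
print(int(max_fps(5.19)))   # 192: the 5.19 ms denoise/upsample step alone allows ~192 fps
```

Any extra milliseconds granted to the path tracer come straight out of this budget, which is why real-time use keeps the sample count so low.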
@nixel1324 2 years ago
Can't wait for video game tech to incorporate this! The next generation of games is gonna look incredible, especially when you factor in improvements in hardware (and of course other software/technology)!
@jeffg4686 2 years ago
Amazing. Glad to see all the tech giants are into NNs. Unbelievable results.
@MacCoy 2 years ago
The "any sufficiently advanced technology is indistinguishable from magic" quote fits the wizards over at Intel.
@witness1013 2 years ago
There is no magic happening at all.
@andrewwelch5017 2 years ago
I'd like to see a comparison between this technique applied to a native 1440p render and the 1440p reference.
@xl000 2 years ago
Is this related to the upcoming Intel Arc A7x0? I don't remember Intel doing much research around graphics, except Embree, OSPRay, Open Image Denoise, maybe oneAPI. Is it related to those?
@Uhfgood 2 years ago
It's pretty interesting. I know it's AI-assisted; however, I noticed in the reference footage there was "texture" to the metal in the subway scene (about 5:43). That being said, couldn't they do like when generating fake human faces, and have a huge library of sampled textures with which to "build" the textures where the detail was actually lost? As I understand it, an AI samples a lot of human faces, and even some CG ones, and then builds the "fake" ones out of that information. Maybe they could do the same with textures in scenes that were lost in the filtering?
@JimBob1937 2 years ago
That's actually something I've been working on (personal project). It's tricky to have a neural network that has that much encoded information that still runs in real time. I'd say it's possible to do that now, but in nonrealtime.
@Uhfgood 2 years ago
@@JimBob1937 - Cool, maybe two more papers down the line eh? ;-)
@Uhfgood 2 years ago
@@JimBob1937 - I'm just a layman and not very technical, but I sort of wish someone would use AI to take poor-looking video and clean it up. I like watching movies from the 1930s, and often they're copies of copies: either old VHS transfers, or just the same movie recompressed over and over until it's a blurry mess. Film sources are usually in the hands of private collectors, or even just lost, so we have what's left over, like old TV broadcasts and whatnot. My contention is that an AI could scour the internet looking for decent images of actors and locations and so on, in order to "rebuild" the poor or lost image. Some images could be built from multiple poor copies of the same film/video, even using frame stacking like they do for super-resolution. In the end we may not need original sources for the preservation and restoration of films.
@JimBob1937 2 years ago
@@Uhfgood , there is some AI video upscaling and deblocking software out there now. I use Topaz, which does a very good job... if the source material is somewhat good. The software is excellent at refining edges and even filling in macro details in some areas. However, it fails at properly reconstructing textures if the source material is too low resolution. In those cases it tends to make things look like sharp blobs, haha, where there is enough detail to outline the shapes but the internal texture remains blurry, creating an unusual look. I have seen some GAN-based AI that tries to construct texture detail, but it still has issues unless the textures are common; it doesn't know what is correct or not, so it fills in texture detail incorrectly in those areas. So, no silver bullet for lower-quality videos. For 3D scenes, my approach has been on a per-game basis, where you train a NN on textures specifically selected for a certain game. This isn't as generic, but it does let you keep the network size down, as you avoid having to encode texture understanding for something that may never be encountered in a specific game. Tricky business for sure, but fun to play around with.
@trippyvortex 2 years ago
What a time to be alive!!
@Guhhhhhhhhhhhhhhhhhhhhhhhhhhhh 2 years ago
This will be amazing for open world games with lots of light in a large area
@RealRogerFK 2 years ago
How fast all of this is actually going amazes me. I'm guessing by 2040 we'll have completely photorealistic real-time renders on mobile platforms at a good framerate. Next step: how do we make all of this *believable* in simulated worlds? CoD MWII's Amsterdam scenes *look* incredible, but you still feel like it's a videogame. That's the next challenge: proper AI and believable environments, not just clean, shiny floors and water puddles.
@matthew.wilson 2 years ago
It still struggles with bumpy surfaces like the leather and some metal in the train. I wonder if one day we'll get an approximation method that reintroduces that detail post-denoising, like when normal and bump maps were introduced back in the day to add detail beyond the mesh geometry.
@chris-pee 2 years ago
I doubt it. I feel it would go against the "simplicity" and correctness of path tracing, compared to decades of accumulated tricks in rasterization.
@tm001 2 years ago
It's absolutely amazing, but I noticed a lot of the fine details got washed out... I wonder if that could be avoided with some fine-tuning.
@fr3ddyfr3sh 2 years ago
Thanks for your work presenting complicated papers to us in an accessible way.
@Dirty2D 2 years ago
Looks like it mixes together some frames and smooths. Could you not render the world without ray tracing, and then add the ray-traced approximation on top of that as a reference image?
@R2Bl3nd 2 years ago
We're rocketing towards photo realistic VR that's almost indistinguishable from life
@TheKitneys 2 years ago
As a CGI artist specialising in advertising CGI (products/interiors/cars), I'd say these images have a very long way to go for photorealistic results. But with the impressive results shown, it won't be too many years before they get there.
@JimBob1937 2 years ago
I'd caution you to take these with a grain of salt, and instead focus on and abstract out the techniques being shown. These research papers use "programmer art" comparable to the placeholder art you usually see in game development before actual artists enter the process. Even top-of-the-line techniques look bland when not in the hands of an artist, even if it just comes down to scene composition and lighting setup.
@GloryBlazer 2 years ago
I didn't know Intel was doing AI research as well. Thanks for the exciting news!
@harnageaa 2 years ago
At this point everyone is secretly doing it; it seems this is the future of computational speed.
@kishaloyb.7937 2 years ago
Intel has been doing AI research for nearly 2.5 decades now. Heck, in the early to late 2000s they demonstrated running RT on a CPU by generating a BVH structure. Sure, it was slow, but it was pretty impressive.
@GloryBlazer 2 years ago
@@kishaloyb.7937 I didn't realize ray tracing was such an old concept; the first time I heard about it was the real-time versions by NVIDIA. 👍
@GloryBlazer 2 years ago
@Garrett Yeah, I'm guessing it has a lot to do with software as well.
@little_lord_tam 2 years ago
Were the 200 fps achieved at 1440p or at 720p? I couldn't quite follow the fps in relation to the pixels.
@JonS 2 years ago
I'd like to see this added to optics stray light simulation packages. We face the same noise/ray counts/runtime issues when analyzing optics for stray light.
@Build_the_Future 2 years ago
The reference looks so much better
@kylebowles9820 2 years ago
I don't understand why they don't build these techniques into the rendering. Why wait until after, when you have no information about the scene? It would have been trivial to get real high-frequency details back from the geometry and textures.
@itsd0nk 2 years ago
I think it’s the quick real-time aspect that they’re aiming for.
@kylebowles9820 2 years ago
@@itsd0nk I specifically built that step into the earliest part of the rendering pipeline because the advantages are monumental and the cost is insignificant. I think the main reason is so you can tack it on as an afterthought.
@zackmercurys 2 years ago
Is this technique good at temporal denoising? Usually denoising causes flash artifacts from the fireflies.
@emrahyalcin 2 years ago
Please add this to games with high visual output, like Satisfactory. I can't even believe such a thing can happen. It is really amazing.
@frollard 2 years ago
With NVIDIA getting all the fanfare the last few years, it's really interesting to see Intel get some spotlight. This is purely fantastic.
@ozzi9816 2 years ago
I'm very excited for when something like ReShade implements this! Cheap, fast RTGI, or even just straight-up ray tracing that can run on modest graphics cards, isn't that far away!
@haves_ 2 years ago
But can it be used on not-seen-before scenes?
@AuaerAuraer 2 years ago
Looks like Intel is entering the ray tracing competition.
@muffty1337 2 years ago
I hope that games will profit from this new paper soon. Intel's new graphics cards could benefit from an uplift like this.
@hellfiresiayan 2 years ago
This is hardware-accelerated on Arc GPUs only, right?
@azurehydra 2 years ago
WHAT A TIME TO BE ALIVE!!!!!!!!!!!!
@raguaviva 2 years ago
I wish you would show the after/before comparison at the beginning of the video.
@LukeVanIn 2 years ago
I think this is great, but they should apply the filter before high-frequency normal/detail mapping.
@abandonedmuse 2 years ago
What a time to be alive!
@oculicious 2 years ago
this gives me hope for photoreal VR in the near future
@quantenkristall 2 years ago
Have you already tried using closed-loop feedback of the denoised result back into the ray tracing to improve Metropolis light transport guessing? Adaptive recursion from lower to higher resolution and depth (number of bounces). The fewer useless calculations, the less waste of energy or time... 🌳
@lolsadboi3895 2 years ago
We have the "enhance image" from every sci-fi film now!!
@erikals 2 years ago
this is.... incredible !!!
@dryued6874 2 years ago
Is this the kind of approach that's used in real-time ray-tracing in games?
@jackrdye 2 years ago
Just shows that compute power may not be compounding at the same rate, but new techniques are compounding outcomes.
@creemyforlul5050
@creemyforlul5050 2 жыл бұрын
Didn't Nvidia bring out NRD (Real-Time Denoisers) and DLSS (Deep Learning Super Sampling) in 2020? Wouldn't those do the same as Intel's denoiser/upscaler, or am I mistaken?
@JorgetePanete
@JorgetePanete 2 жыл бұрын
In the paper it says it's a novel architecture, combining those two steps into one, reducing cost.
@Halston.
@Halston. 2 жыл бұрын
2:42 if you've seen the explanation of light transport simulation before 👍
@uku4171
@uku4171 2 жыл бұрын
Intel is doing pretty well with that stuff. Excited about them entering the GPU market.
@coows
@coows 2 жыл бұрын
Looks great, but it still misses some detail.
@p5rsona
@p5rsona 2 жыл бұрын
Would advances in AI mean that we will require less GPU power to play a complex game?
@malborboss5710
@malborboss5710 2 жыл бұрын
For sure. It already happens with, for example, DLSS (Nvidia) and XeSS (Intel). They are both based on neural networks that upscale images rendered at a lower resolution.
@xieangel
@xieangel 2 жыл бұрын
It depends a lot. A lot of AI solutions are often just as heavy to compute, which is why we need specialized cores and parts (RTX cards, for example) to make sure that the AI calculations are not wasting just as many resources as the game or render. What this means, however, is that new cards don't have to be much stronger; they just need these specializations, and they can be as good as older cards while getting better performance. A good example of this is how Intel's own AI upscaling technique (Intel XeSS), which works like DLSS, doesn't give you _much_ better performance unless you have Intel's new GPUs. Your card just does so much work to upscale the image that the performance barely improves.
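The savings being discussed come from shading fewer pixels and filling in the rest. Here is a toy sketch of that idea, assuming nothing about XeSS or DLSS internals: `render` is a hypothetical stand-in for an expensive renderer, and plain nearest-neighbor repetition stands in for the learned upscaler.

```python
import numpy as np

def render(height, width):
    # Hypothetical stand-in for an expensive renderer: cost ~ pixels shaded.
    # A cheap procedural gradient; imagine each pixel costing many rays.
    y, x = np.mgrid[0:height, 0:width]
    return (x + y).astype(np.float32) / (height + width)

def upscale_nearest(img, factor):
    # Nearest-neighbor upscaling standing in for a learned upscaling network.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Render at half resolution in each dimension, then upscale 2x.
full_h, full_w = 216, 384
low = render(full_h // 2, full_w // 2)   # shades only 1/4 of the pixels
upscaled = upscale_nearest(low, 2)

print(upscaled.shape)                    # full output resolution: (216, 384)
print(low.size / (full_h * full_w))      # fraction of pixels shaded: 0.25
```

The point of the comment above is that in a real system the upscaler itself is not free, so the net win depends on whether the hardware can run it cheaply.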
@markmuller7962
@markmuller7962 2 жыл бұрын
We are experiencing the third big jump in video game graphics, after DOOM and the switch from 2D to 3D (mainly through consoles). I was there for the previous two jumps, and now with ray tracing and UE5 I'm luckily here for this one too. This is definitely one of the most exciting industries in the entire world, easily top 5.
@Bran_Flakesx7
@Bran_Flakesx7 2 жыл бұрын
How much would the results vary based on the specifics of the scene being rendered? It's hard for me to tell what's going on besides noise being reduced in these scenes with low-poly models, low-res textures, unrealistic shaders, and sh*tpost-tier composition... I'm not even sure if their results are that impressive or reliably "real-time" with that in mind.
@SAMTHINKS2
@SAMTHINKS2 2 жыл бұрын
So do they have something for dust and scratches on old movies?
@JimBob1937
@JimBob1937 2 жыл бұрын
There are existing (non-machine-learning) algorithms that already do an excellent job at that task.
@SAMTHINKS2
@SAMTHINKS2 2 жыл бұрын
@@JimBob1937 I see supposedly restored footage (60 fps, colourized) on YouTube a lot that still looks dirty. Metropolis is a recent one.
@JimBob1937
@JimBob1937 2 жыл бұрын
@@SAMTHINKS2 , I'd have to see the original footage and mess with it myself. A lot of people are purists and don't like additional processing, so it may be an intentional choice to keep them. Although if they are already doing frame interpolation you'd think they'd go for it.
@soupborsh8707
@soupborsh8707 2 жыл бұрын
I heard something about 100x path tracing performance on GNU/Linux on Intel graphics cards.
@JimBob1937
@JimBob1937 2 жыл бұрын
I think that was the speed up a driver update had on Linux for Intel's dedicated GPUs. The drivers were so bad at initial release that they managed a near 100x improvement. While that is an impressive speed up... it more so just says they released it in a pretty poor state to begin with.
@gaijintendo
@gaijintendo 2 жыл бұрын
I want to play Goldeneye upscaled like this.
@tcoder4610
@tcoder4610 2 жыл бұрын
I'm also very happy for another reason. Since it's Intel, not Nvidia, this simulation probably ran on a CPU rather than a GPU. I have an Nvidia GTX GPU (one that doesn't support the Vulkan ray tracing extension), so I'm sad that almost no game supports ray tracing on my card. But I have an Intel CPU, so if this is CPU-only or CPU + integrated graphics, it will be supported on any Intel processor and won't require different hardware!
@Gubby-Man
@Gubby-Man 2 жыл бұрын
Welcome to 2022 where Intel is a GPU manufacturer and is invested in competing with AMD and Nvidia.
@MizarcDev
@MizarcDev 2 жыл бұрын
This paper is about a denoising algorithm designed to run on top of raytracing, so you will still require a GPU that is capable of raytracing in order to benefit from this. If anything, this would be something that would be implemented on Intel Arc GPUs. There's no way a CPU would be able to run realtime raytracing with current technologies.
@JorgetePanete
@JorgetePanete 2 жыл бұрын
I read in another comment that it can run on the CPU and will get GPU support
@alexmcleod4330
@alexmcleod4330 2 жыл бұрын
According to the paper, it was all running on an NVidia RTX 3070. I know it seems weird for Intel to show off something that doesn't even use their hardware - but this is more of an academic paper, and not really a product they're advertising.
@rb8049
@rb8049 2 жыл бұрын
Seems to me that even better results are possible. Some surfaces lose resolution. These surfaces have repetitive textures, which it should be able to replicate in the final image. Maybe we need both a high-resolution non-ray-traced stream and the ray-traced stream: the fine texture details can come from the non-ray-traced version. Maybe use several different fast light transport methods at higher resolution and ray tracing at the lower resolution, and merge it all in one NN. Definitely can do better than this in a future paper.
@MRTOWELRACK
@MRTOWELRACK 2 жыл бұрын
This is exactly what modern games often do - hybrid rendering combining traditional rasterization and ray tracing.
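The "crisp detail from one stream, lighting from another" idea above resembles albedo demodulation, a common trick in denoisers: denoise only the smooth lighting signal, then multiply the noise-free texture back in. A toy 1D sketch, with a simple box blur as a hypothetical stand-in for the neural denoiser:

```python
import numpy as np

rng = np.random.default_rng(0)

# High-frequency surface detail (e.g., a texture lookup): cheap and noise-free.
albedo = np.tile(np.array([0.2, 0.9], dtype=np.float32), 8)  # 16-px striped texture

# The true incoming light is smooth; the path tracer gives a noisy estimate of it.
true_irradiance = np.full(16, 0.5, dtype=np.float32)
noisy_irradiance = true_irradiance + rng.normal(0.0, 0.2, 16).astype(np.float32)

# Denoise only the smooth lighting signal (box blur stands in for the network).
kernel = np.ones(5) / 5
denoised = np.convolve(noisy_irradiance, kernel, mode="same")

# Re-apply the crisp texture detail after denoising.
final = albedo * denoised

print(final[:4])  # the stripes survive, since blurring happened before the multiply
```

Blurring the combined image directly would smear the stripes; blurring only the irradiance keeps the texture sharp, which is exactly the kind of detail loss the comment above is pointing at.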
@OperationDarkside
@OperationDarkside 2 жыл бұрын
200 fps on what hardware? Did I miss something?
@johand3114
@johand3114 2 жыл бұрын
Is it different from Open Image Denoise? Because I heard they have a patch that runs on the GPU, instead of the original that runs on the CPU.
@JorgetePanete
@JorgetePanete 2 жыл бұрын
I read in another comment that this one is that denoiser but upgraded
@johand3114
@johand3114 2 жыл бұрын
@@JorgetePanete I hope it gets implemented in 3D software soon, because I don't use OptiX, which is faster but trash. Not to mention that Intel's denoiser is so clean it doesn't even need a temporally stable solution.
@JorgetePanete
@JorgetePanete 2 жыл бұрын
@@johand3114 It's in Blender right now!
@Solizeus
@Solizeus 2 жыл бұрын
Do we need an RTX 3080 with 64 GB of RAM and a Ryzen 7800 to barely be able to use it? Or can it run on a potato? Because that is a big differentiator.
@woodenfigurines
@woodenfigurines 2 жыл бұрын
Will this be a feature of Intel's new graphics cards? Interesting times indeed!
@gawni1612
@gawni1612 2 жыл бұрын
When Dr. K is Happy, so am I.
@davidstar2362
@davidstar2362 2 жыл бұрын
What a time to be alive!
@ZiggityZeke
@ZiggityZeke 2 жыл бұрын
We are gonna have some pretty looking games soon, huh
@infographie
@infographie 2 жыл бұрын
Excellent
@colevilleproductions
@colevilleproductions 2 жыл бұрын
light transport researcher by trade, dark transport researcher by light