Intel’s New AI: Amazing Ray Tracing Results! ☀️

125,318 views

Two Minute Papers

1 year ago

❤️ Check out Weights & Biases and say hi in their community forum here: wandb.me/paperforum
📝 The paper "Temporally Stable Real-Time Joint Neural Denoising and Supersampling" is available here:
www.intel.com/content/www/us/...
📝 Our earlier paper with the spheres scene that took 3 weeks:
users.cg.tuwien.ac.at/zsolnai...
❤️ Watch these videos in early access on our Patreon page or join us here on KZbin:
- / twominutepapers
- / @twominutepapers
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Luke Dominique Warner, Matthew Allen Fisher, Matthew Valle, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: / twominutepapers
Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
Károly Zsolnai-Fehér's links:
Instagram: / twominutepapers
Twitter: / twominutepapers
Web: cg.tuwien.ac.at/~zsolnai/

Comments: 333
@bj124u14 · 1 year ago
This denoising technique is absolutely fantastic! 200 frames per second? That's unbelievable. I really hope Blender gets its hands on this technique. Denoising is very important, but doesn't get as much attention as it should. I'm so glad to see how it's evolved. Thank you 2MP, and of course, what a time to be alive!
@TwoMinutePapers · 1 year ago
You are very kind, thank you so much! 🙏
@petterlarsson7257 · 1 year ago
It's pretty sad that the most amazing technology isn't open source. Imagine how much fun people could have with it.
@ty_teynium · 1 year ago
Thank you for mentioning Blender. I was thinking the same thing since my Blender renders and animations always have some noise in them. I'd love to do some more animating if not for this issue. I do hope it gets released soon.
@little_lord_tam · 1 year ago
@@petterlarsson7257 No. No one would have the money to develop the tools, so they wouldn't exist.
@ClintochX · 1 year ago
@@petterlarsson7257 Bro, what you're looking at is OIDN (Intel Open Image Denoise), and it's completely free and open source. Blender already uses it, but now it's time for an upgrade.
@arantes6 · 1 year ago
I'd appreciate just a small explanation of what the new idea is in the techniques presented in the videos: how it works under the hood, in a few sentences. Is it a neural network where the previous techniques were hand-crafted? Is it a new kind of neural net architecture? Is it just a math operation that nobody thought of applying to this problem before? Just a glimpse behind the technical curtain, instead of just the results, would make these amazing videos even better!
@amoliski · 1 year ago
Sounds like we need Two Minute Papers for a quick overview and Five Minute Papers for a slightly more technical explanation. Maybe have the paper author do a quick interview showing their work if they're available?
@liambury529 · 1 year ago
The paper is called "Temporally Stable Real-Time Joint Neural Denoising and Supersampling", and if you're that curious, the link to the paper is in the video description.
@itsd0nk · 1 year ago
I second this motion.
@facts9144 · 1 year ago
@@liambury529 Wouldn't hurt just to give an extra little bit of info in the video, though.
@OmniscientOCE · 1 year ago
@@facts9144 I concur.
@danielb.4205 · 1 year ago
A direct comparison between Intel's and NVIDIA's approach (OptiX) would be interesting.
@woodenfigurines · 1 year ago
I think and hope you'll get your comparison a few days after this hits the market in Intel cards :D
@StanleyKubick1 · 1 year ago
Next-gen Arc GPUs are starting to look pretty desirable.
@black_platypus · 1 year ago
We barely ever hear about the inner workings anymore :( Please bring back at least a structural overview or abstract! Two Minute Papers > Two Minute "Neat, isn't it"! 😊
@Dayanto · 1 year ago
It's not really fair to compare with the outdated SVGF algorithm when the followup A-SVGF paper solved the main shortcoming of the original paper (significant ghosting/smearing during motion or lighting changes) already 4 years ago. For example, A-SVGF is what was used in Quake 2 RTX back when real time ray tracing was still new.
@raylopez99 · 1 year ago
There's no fair in science. Peer review is very unfair at times and ruthless. Just the way it works. Speaking as a two minute scholar who does not read the original sources.
@Exilum · 1 year ago
It was labelled as SVGF, but also as being from 2020. More likely to be a variation of it. SVGF is a 2017 paper, A-SVGF is a 2018 paper.
@Dayanto · 1 year ago
@@Exilum The one from 2020 was a different paper labeled "NBG". (Neural Bilateral Grid)
@Exilum · 1 year ago
@@Dayanto ok, my bad on that
@alegzzis · 1 year ago
I'll add more: it's not fair because NVIDIA has an even newer denoiser, NVIDIA NRD.
@draco6349 · 1 year ago
I want to see this used in tandem with ReSTIR. An amazing path-tracing algorithm that leaves barely any noise, combined with an amazing denoiser, should be able to get breathtaking results, and even better, in real time.
@draco6349 · 1 year ago
@@dylankuzmick3122 That's actually so cool. I might literally just install that, real-time path-tracing in Blender is something I've always wanted.
@sebastianjost · 1 year ago
Absolutely fantastic! In some of the examples I can definitely still see some room for improvement and a follow-up video, but this is still a remarkable improvement. However, I find some of the comparisons a bit difficult to interpret. I wish the same videos with different techniques were shown side by side rather than one after the other. Ideally even with just one video and a moving split showing the two techniques rendering parts of the video.
@Sekir80 · 1 year ago
I'd like to see the result of something slower: say, give the renderer 1 second for a way less noisy input; maybe that way the upsampled result is closer to the reference. For clarity: I'm not really interested in real-time image generation if we are talking about minute/hour/day-long rendering challenges. I'm interested in great quality results.
@Sekir80 · 1 year ago
@michael Interesting insight! I was more cynical about 3D visualization and figured I'd end up doing some crappy commercials, which I disliked, so I never entered this space. Mediocrity. I even see it in AAA games: for example, I tended to model stuff for a specific rendering quality, and if I see a racing simulator where the steering wheel is a visible n-gon I just scoff. I'd rather spend the polygon budget on the most obvious things. Maybe I'm weird.
@xamomax · 1 year ago
It seems that neural network based denoising could benefit from an input reference image rendered very fast without raytracing, but with high detail. Then, this high detail image plus the raytracing image can be used by the denoiser to get the lighting correct without washing out the detail.
@brianjacobs2748 · 1 year ago
that sounds cool but it wouldn’t be as authentic
@xamomax · 1 year ago
@@brianjacobs2748 depends on how it is trained.
@circuit10 · 1 year ago
@@brianjacobs2748 More authentic than no raytracing at all, which is the alternative
@circuit10 · 1 year ago
I think this would also be really good for DLSS 3 because the frame could be generated without much latency based on actual data rather than interpolating between two frames
@MRTOWELRACK · 1 year ago
The neural network is trained according to reference input. The renderer already knows the geometry and blurs the noise accordingly.
@myNamezMe · 1 year ago
Impressive progress; it will be interesting to see the next paper.
@roccov3614 · 1 year ago
This is a brilliant idea for real-time games. Mixing a quick light transport render with a noise filter optimised for light transport, for quality real-time output, is brilliant.
@juliandarley · 1 year ago
I would love to know if or when this could be applied to Blender/Cycles X, so that 1) we can have much reduced render times for photoreal scenes and 2) reasonably good caustics.
@shmendez_ · 1 year ago
Wait wait wait bro how is your comment 7 hr old but the video is only 3 min old??
@Ben_CY123 · 1 year ago
@@shmendez_ Em… maybe a timezone issue?
@nullneutrality8047 · 1 year ago
@@shmendez_ Videos are shown early on Patreon.
@MattPin · 1 year ago
Honestly, once this gets included in Blender it's going to be very good. I can see this AI denoiser definitely helping render frames faster. It would be very cool to see a comparison of this technique with their other denoiser, Open Image Denoise.
@computerconcepts3352 · 1 year ago
yeah lol
@13squared2009 · 1 year ago
I’d love a video taking us under the covers to see what it is these authors are actually doing to improve these techniques… I just don’t see myself reading the actual papers as a casual viewer. Love your videos!
@mosog8829 · 1 year ago
I'm glad this is going to be implemented in Blender, as they are already working on it.
@juliandarley · 1 year ago
can you provide a link, pls?
@juliandarley · 1 year ago
@@mosog8829 Many thanks. I have not looked at the 3.4 alpha yet. The release notes say that path guiding works only with the CPU, but GPU support is coming in the future. It may be worth doing some comparison tests with CPU only, but obviously the real gain will be with the GPU.
@mosog8829 · 1 year ago
@@juliandarley You're welcome. Indeed. It will be even better if it's possible to combine CPU and GPU.
@kleyyer · 1 year ago
You might be confusing this with the new Pathguiding feature for fireflies that will be implemented into Cycles in the future. I have seen absolutely nothing about this new denoising being implemented into Blender
@juliandarley · 1 year ago
@@kleyyer It wasn't my suggestion, but I can ask Blender HQ about it. I did look at the new alpha, and it did not look the same as what is shown here. I still keep hoping for good caustics from Cycles. For me, needing photoreal renders, it is the number one thing that lets Cycles down.
@JeremieBPCreation · 1 year ago
Absolutely love your videos! Is there an actual two-minute or even 60-second "short" version of these videos (the name of the channel made me wonder)? If not, would you mind if someone else did it? I love information density, and without the repeated parts and with slightly faster speech, I think these videos would make wonderful Shorts!
@harnageaa · 1 year ago
I think he has a contract with sponsors to post every video with its sponsor segment, and that probably includes Shorts too (since they are on the video page), so it's probably not going to happen, at least on this channel.
@Sadiinso · 1 year ago
You forgot to mention that the 200 fps (5.19 ms per frame, as shown in the paper) was observed when running on an RTX 3070. Performance and runtime are not absolute measures; they depend on the hardware on which the workload is running.
@xelusprime · 1 year ago
I'm curious though: rendering a noisy image from a 3D app at, say, 250 samples, and then giving it to the AI to do its magic, how much memory does it use to denoise and potentially upscale the frame(s)?
@fr3ddyfr3sh · 1 year ago
Thanks for your work presenting complicated papers to us in an easy way.
@little_lord_tam · 1 year ago
Were the 200 fps achieved at 1440p or at 720p? I couldn't quite follow the fps in relation to the pixel counts.
@notapplicable7292 · 1 year ago
It's rather incredible how many times I've listened to him explain noise in ray-traced lighting.
@aidanm5578 · 1 year ago
Finally, I don't have to go 'outside' to see realistic lighting.
@zzoldd · 1 year ago
A redditor's true dream.
@AronRubin · 1 year ago
Do we no longer collect per-fragment info on how fully a ray (or rays) was incident? It seems like you could just in-paint low-ranking fragments instead of smoothing a whole scene.
@michaeltesterman8295 · 1 year ago
What are the hardware requirements, and how intensive is it on the cards?
@Dirty2D · 1 year ago
Looks like it mixes together some frames and smooths. Could you not render the world without ray tracing on, and then add the ray-traced approximation on top of that as a reference image?
@custos3249 · 1 year ago
Hold up. At 5:08 and more at 5:14, is that noise I see in the reference that the program removed?
@bryanharrison3889 · 1 year ago
I love these videos... even if I WASN'T a 3D animator, I'd still enjoy them because of Károly's passion for the subjects of AI, computer graphics, and machine learning.
@AIpha7387 · 1 year ago
5:42 It seems to me that the reflective material on the surface is being ignored. It was removed during de-noising.
@AIpha7387 · 1 year ago
It is completely different from the gloss intended by the developer. This can't be used in games.
@harnageaa · 1 year ago
I think they might make a paper to add reflections to all types of materials, so you have two algorithms in one; that might solve the issue.
@fanjapanischermusik · 1 year ago
When can I expect this to be used in my smartphone, while taking photos for example? Looks really good.
@JonS · 1 year ago
I'd like to see this added to optics stray light simulation packages. We face the same noise/ray counts/runtime issues when analyzing optics for stray light.
@xl000 · 1 year ago
Is this related to the incoming Intel Arc A7x0? I don't remember Intel doing much research around graphics, except Embree, OSPRay, Open Image Denoise, and maybe oneAPI. Is it related to those?
@MarshalerOfficial · 1 year ago
Those 2 years ago feel like 20 years, imo. Sharpening images is the new sexy again. Not that ray tracing was boring either; it's damn impressive how far we have come. What a time to be alive, boys and girls!
@Eternal_23 · 1 year ago
This + ReSTIR = real-time CG
@adrianm7203 · 1 year ago
It would have been nice if this video talked about how this was accomplished. The results are interesting but I'm more interested in how it works...
@michaelleue7594 · 1 year ago
It's called 2-minute papers, not 2-semester papers.
@alex15095 · 1 year ago
@@michaelleue7594 Cut out a minute of the same footage that was looped 22 times in the video, and roughly explain what architectures they're using, how it was trained, show some diagrams or graphs from the paper that people can pause to have a deeper look, etc
@sikliztailbunch · 1 year ago
If the new method allows for 200 fps in that example scene, wouldn't it make sense to limit it to 60 fps to gather even more samples in the first phase before denoising?
@JimBob1937 · 1 year ago
There is still other computation to occur within that frame that you have to account for... like the entirety of the rest of the scene rendering (most applications will be more complex), and the rest of the game (since they're obviously targeting interactive applications). The 200 FPS isn't a goal of some sort of final application, this is just an example of the algorithm running, and the faster it achieves this the better. The actual applications that implement the algorithm would be the ones to make that tradeoff decision.
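The tradeoff discussed above is back-of-the-envelope arithmetic: capping the frame rate frees up milliseconds per frame for more samples or for the rest of the game. A small illustrative sketch (numbers assumed, not from the paper):

```python
# How many milliseconds per frame are left over for extra samples,
# scene rendering, and game logic if we cap at a target frame rate
# instead of running the denoising pipeline flat out.
def spare_budget_ms(target_fps: float, pipeline_ms: float) -> float:
    """Leftover per-frame time budget at target_fps, given a pipeline
    that already takes pipeline_ms per frame."""
    return 1000.0 / target_fps - pipeline_ms

# Capping a ~5 ms render+denoise pipeline at 60 fps leaves ~11.7 ms.
print(round(spare_budget_ms(60.0, 5.0), 1))  # 11.7
```

As the reply notes, whether that leftover budget goes to more ray samples or to the rest of the application is a decision for whoever integrates the algorithm, not the algorithm itself.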
@jinushaun · 1 year ago
The fact that it’s able to reconstruct the no smoking sign from that noisy mess is mind blowing.
@andrewwelch5017 · 1 year ago
I'd like to see a comparison between this technique applied to a native 1440p render and the 1440p reference.
@OSemeador · 1 year ago
Initially it looked like a temporal motion blur with an upscaling pass on it but the last comparison result shows there is more to it than initially meets the eye... pun intended
@johand3114 · 1 year ago
Is it different from Open Image Denoise? Because I heard they have a patch that runs on the GPU instead of the original, which runs on the CPU.
@JorgetePanete · 1 year ago
I read in another comment that this one is that denoiser, but upgraded.
@johand3114 · 1 year ago
@@JorgetePanete I hope it gets implemented in 3D software soon, because I don't use OptiX, which is faster but trash; not to mention that Intel's denoiser is so clean it doesn't need a temporally stable solution.
@JorgetePanete · 1 year ago
@@johand3114 It's in Blender right now!
@RealCreepaTime · 1 year ago
Would it be worth it for the light simulation to run even longer than 4-12 milliseconds, allowing a less noisy base and resulting in higher-quality output? Or would it just be negligible/not worth it?
@speedstyle. · 1 year ago
The light simulation quickly plateaus, which is why it takes hours to get the (still noisy) reference. Maybe with a faster computer you could choose to spend an extra few ms on raytracing, but for realtime if it takes 10ms to raytrace and 5ms to denoise/upscale that's already only 66fps. Once you add physics simulations and game mechanics you won't want to decrease performance much further. For non-realtime applications like films it doesn't matter, people will spend hours per frame then run this denoiser on the 'reference'.
@RealCreepaTime · 1 year ago
@@speedstyle. Ah, thank you for the response! That makes sense, and yeah, for video game ideals/standards you probably wouldn't want to do that. I will say my intentions for what it would be used for are a bit different, but similar to the last portion you mentioned. I was more thinking of using it in rendering software, like the person in the pinned comment mentioned. It might be helpful for previewing in the viewport with denoising.
@hellfiresiayan · 1 year ago
This is hardware-accelerated on Arc GPUs only, right?
@jeffg4686 · 1 year ago
Amazing. Glad to see all the tech giants are in on neural networks. Unbelievable results.
@ozzi9816 · 1 year ago
I’m very excited for when something like reshade implements this!!! Cheap, fast RTGI or even just straight up ray tracing that can run on modest graphics cards isn’t that far away!
@t1460bb · 1 year ago
What a time to be alive! Excellent work!
@facenameple4604 · 1 year ago
The reference simulation at 3:14 is actually NOISIER than the denoising algorithm.
@zackmercurys · 1 year ago
Is this technique good at temporal denoising? Usually denoising causes flash artifacts from the fireflies.
@tm001 · 1 year ago
It's absolutely amazing, but I noticed that a lot of the fine detail got washed out... I wonder if that could be avoided with some fine-tuning.
@nixel1324 · 1 year ago
Can't wait for video game tech to incorporate this! The next generation of games is gonna look incredible, especially when you factor in improvements in hardware (and of course other software/technology)!
@Solizeus · 1 year ago
Do we need an RTX 3080 with 64 GB of RAM and a Ryzen 7800 to barely be able to use it? Or can it run on a potato? Because that is a big differentiator.
@RealRogerFK · 1 year ago
How fast all of this is actually going amazes me. I'm guessing by 2040 we'll have completely photorealistic realtime renders on mobile platforms at a good framerate. Next step: how do we make all of this *believable* in simulated worlds? CoD MWII's Amsterdam scenes *look* incredible but you still feel like it's a videogame. That's the next challenge, proper AI and believable environments and not just clean, shiny floors and water puddles.
@haves_ · 1 year ago
But can it be used for a not-seen-before scene?
@erikals · 1 year ago
this is.... incredible !!!
@erikals · 1 year ago
4:45 info; this paper is from 2020. (not 2014)
@Guhhhhhhhhhhhhhhhhhhhhhhhhhhhh · 1 year ago
This will be amazing for open world games with lots of light in a large area
@blackbriarmead1966 · 1 year ago
I'm taking a computer graphics course this semester. After making a CPU (no lighting or raytracing) renderer that takes a few seconds to render one frame of a very simple scene, the sheer immensity of the number of calculations required for realistic scenes set in. There is a whole stack of technology to make stuff like this possible, from the 5 nm silicon, GPU architecture, software optimization, and creative techniques to reduce the number of calculations necessary. And the layperson takes all of this for granted when they play cyberpunk 2077 on their 4090
@JimBob1937 · 1 year ago
Not to mention the entire electronics platform that it is running on. The signal transmission to the monitor... the monitor technology. Yeah, people's heads would explode if they attempted to comprehend the technology and functioning it takes for a single frame of a game to be shown to them.
@ryanmccampbell7 · 1 year ago
I wonder if these techniques can be combined with traditional rasterization-based rendering, which should produce nice sharp edges and avoid any blurriness produced by the denoising. I imagine if you run both a coarse light simulation and a rasterization step, then combine the results with a neural network, you can get the best of both worlds.
@alexmcleod4330 · 1 year ago
It's already rasterised as a first step, to produce a G-buffer of surface normals, texture & colour information. That's done with Unreal Engine 4, in this case. That rasterised layer then informs the raytracing, supersampling and denoising steps. The key idea here is that they're combining the supersampling and denoising into one process. It seems like they didn't really want to pre-process the reference scenery too much, but you could probably get a better balance of sharpness & smoothness by giving the hero meshes some invisible texture maps that just say "stay sharp here, it's a human face", and "don't worry about this, it's just the floor".
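The pipeline described above can be pictured as a per-pixel tensor: the noisy path-traced radiance stacked with rasterized G-buffer channels before being fed to the joint denoiser/supersampler. A toy Python/NumPy sketch, where the channel choice (albedo, normals, depth) is an assumption for illustration, not the paper's exact input layout:

```python
# Toy sketch: assembling a denoiser input from a rasterized G-buffer.
# The network would see the noisy low-sample radiance alongside clean
# auxiliary channels, letting it keep sharp texture/geometry detail
# while smoothing only the noisy lighting.
import numpy as np

H, W = 4, 4  # tiny stand-in resolution
rng = np.random.default_rng(0)

noisy_radiance = rng.random((H, W, 3))  # low-sample path-traced color
albedo = rng.random((H, W, 3))          # rasterized surface color
normals = rng.random((H, W, 3))         # rasterized surface normals
depth = rng.random((H, W, 1))           # rasterized depth

# Stack along the channel axis: a 10-channel per-pixel feature tensor.
denoiser_input = np.concatenate(
    [noisy_radiance, albedo, normals, depth], axis=-1
)
print(denoiser_input.shape)  # (4, 4, 10)
```

The "stay sharp here" idea from the comment would amount to adding one more channel to this stack, weighting how aggressively the network is allowed to smooth each region.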
@ryanmccampbell7 · 1 year ago
@@alexmcleod4330 I see, interesting.
@creemyforlul5050 · 1 year ago
Didn't NVIDIA bring out NRD (Real-Time Denoisers) and DLSS (Deep Learning Super Sampling) in 2020? Wouldn't those do the same as Intel's denoiser/upscaler, or am I mistaken?
@JorgetePanete · 1 year ago
In the paper it says it's a novel architecture, combining those two steps in one, reducing cost
@infographie · 1 year ago
Excellent
@Build_the_Future · 1 year ago
The reference looks so much better.
@kaioshinon5954 · 1 year ago
Bravo!
@levibruner617 · 1 year ago
This is Impressive. I hope someday we can use this technique to make Minecraft look more realistic with light. Right now the best we can do in Minecraft is ray tracing. I apologize about spelling and grammar. This is very hard for me to type because I’m partially deaf blind. Hope you have a great day and keep on learning.
@trippyvortex · 1 year ago
What a time to be alive!!
@LukeVanIn · 1 year ago
I think this is great, but they should apply the filter before high-frequency normal/detail mapping.
@markmuller7962 · 1 year ago
We are experiencing the third big jump in video game graphics, after DOOM and the switch from 2D to 3D (mainly through consoles). I was there for the previous two jumps, and now, with ray tracing and UE5, luckily I'm here for this jump too. This is definitely one of the most exciting industries in the entire world, like top 5.
@R2Bl3nd · 1 year ago
We're rocketing towards photo realistic VR that's almost indistinguishable from life
@matthew.wilson · 1 year ago
It still struggles with bumpy surfaces like the leather and some metal in the train. I wonder if one day we'll get an approximation method that reintroduces that detail post-denoising, like when normal and bump maps were introduced back in the day to add detail beyond the mesh geometry.
@chris-pee · 1 year ago
I doubt it. I feel it would go against the "simplicity" and correctness of path tracing, compared to decades of accumulated tricks in rasterization.
@drednac · 1 year ago
Wow, this is mind-blowing. The progress that the AI algorithms make in recent times is literally unbelievable. Every day there is less and less reason to focus on anything else in technology but machine learning.
@michaelleue7594 · 1 year ago
Well, ML is great but its most interesting applications are when it gets used to focus on other things in technology. The field of machine learning itself is advancing rapidly, but there's a wide world of applications for it in other tech fields that are even more fertile ground for advancements.
@drednac · 1 year ago
@@michaelleue7594 Years back I was working on a light transport algorithm that ran in real time on the GPU and allowed dynamic lighting and infinite bounces. Now when I look at these advancements I see where things are going. The upscaling tech, and AI "dreaming up" missing details better than a human, is a total game changer. Also the ability to generate content. We truly live in exponential times; it's hard to keep up. Whatever you work on today that seems interesting will be obsolete next Friday.
@JorgetePanete · 1 year ago
@@drednac It's unbelievable that that's not even an exaggeration, and we don't even know what we'll have once quantum and photonic computing open new possibilities.
@drednac · 1 year ago
@@JorgetePanete We know what comes next .. Skynet :DDDD
@JorgetePanete · 1 year ago
@@drednac Once fiction, soon reality. It just takes one man to connect a robot to the internet, add an AI that transforms natural language into instructions, and add a task system to let it discover how to do anything and go do it instantly.
@OperationDarkside · 1 year ago
200 fps on what hardware? Did I miss something?
@quantenkristall · 1 year ago
Have you already tried using closed-loop feedback of the denoised result back into the ray tracing to improve Metropolis light transport guessing? Adaptive recursion from lower to higher resolution and depth (number of bounces). The fewer useless calculations, the less waste of energy or time... 🌳
@TheKitneys · 1 year ago
As a CGI artist specialising in advertising CGI (products/interiors/cars): these images have a very long way to go for photorealistic results. But with the impressive results shown, it won't be too many years before they get there.
@JimBob1937 · 1 year ago
I'd caution taking these with a grain of salt, and instead focus on and abstract out the techniques being shown. These research papers use "programmer art" comparable to the placeholder art you usually see in game development before actual artists enter the process. Even top-of-the-line techniques look bland if not in the hands of an artist, even if it just comes down to scene composition and lighting setup.
@joshuawhitworth6456 · 1 year ago
I can't see this getting much better. What will be getting better is unbiased render engines that will one day simulate light in real time without filtering techniques.
@MRTOWELRACK · 1 year ago
That would be brute force path tracing, which is orders of magnitude more demanding and not viable with existing technology.
@emrahyalcin · 1 year ago
Please add this to games with high visual output, like Satisfactory. I can't even believe such a thing can happen. It is really amazing.
@Halston. · 1 year ago
2:42 if you've seen the explanation of light transport simulation before 👍
@jacobdallas7396 · 1 year ago
Amazing
@zzoldd · 1 year ago
"Gratings, great!" haha
@GloryBlazer · 1 year ago
I didn't know Intel was doing AI research as well; thanks for the exciting news!
@harnageaa · 1 year ago
At this point everyone is secretly doing it; it seems this is the future of computational speed.
@kishaloyb.7937 · 1 year ago
Intel has been doing AI research for nearly 2.5 decades now. Heck, in the early-to-late 2000s they demonstrated running ray tracing on a CPU by generating a BVH structure. Sure, it was slow, but it was pretty impressive.
@GloryBlazer · 1 year ago
@@kishaloyb.7937 I didn't realize ray tracing was such an old concept; the first time I heard about it was the real-time demos by NVIDIA. 👍
@GloryBlazer · 1 year ago
@Garrett Yeah, I'm guessing it has a lot to do with software as well.
@radekmojzis9829 · 1 year ago
I would like to see just the denoising step compared to reference in the same resolution... I cannot help but feel like even the new method outputs a blurry mess without any high frequency details anywhere in sight - which is to be expected since the technique has to reconstruct the "high resolution" by conjuring 4 new pixels for every pixel of input.
@goteer10 · 1 year ago
A couple of comparisons are shown in this video, and it's exactly as you say. The spaceship example is the most obvious: the side panels have some "dots" as either geometry or texture, and the noisy+filter version doesn't reflect this detail. It's minor, but it goes to show that you still can't use this in a professional manner.
@radekmojzis9829 · 1 year ago
@@goteer10 I don't think it's minor; it's significant enough for me to prefer the look of traditional rendering techniques at that point.
@MRTOWELRACK · 1 year ago
@@radekmojzis9829 To be fair, this could be combined with traditional rasterization.
@abandonedmuse · 1 year ago
What a time to be alive!
@muffty1337 · 1 year ago
I hope that games will profit from this new paper soon. Intel's new graphics cards could benefit from an uplift like this.
@lemonhscott7667 · 1 year ago
This is amazing!
@kylebowles9820 · 1 year ago
I don't understand why they don't build these techniques into the rendering, why wait until after when you have no information about the scene? It would have been trivial to get real high frequency details back from the geometry and textures.
@itsd0nk · 1 year ago
I think it’s the quick real-time aspect that they’re aiming for.
@kylebowles9820 · 1 year ago
@@itsd0nk I specifically built that step into the earliest part of the rendering pipeline because the advantages are monumental and the cost is insignificant. I think the main reason is so you can tack it on as an afterthought.
@andrius_magic · 1 year ago
Do I get it right: NVIDIA and Intel are both competitors now for ray tracing? Are they in any way compatible? I.e., can Intel help NVIDIA graphics perform faster ray-tracing-wise, or vice versa?
@JorgetePanete · 1 year ago
This algorithm, Intel's Open Image Denoise, may help NVIDIA and others since it's open source.
@alexmcleod4330 · 1 year ago
In this case, the researchers are only showing off their combined supersampler/denoiser. They haven't done anything to the raytracing itself. The engine underlying it all was Unreal Engine 4.25 (which has hardware raytracing features), running on an RTX 3070, and the new tech being demonstrated appears to be some Tensorflow and some Vulkan. So it should be possible to implement this in other engines, and on any GPU with hardware raytracing and some AI capability. That'd be any new hardware from Intel, NVidia, AMD, or even some of the bigger ARM vendors.
@chad0x
@chad0x A year ago
Absolutely nuts!
@coows
@coows A year ago
Looks great, but it still misses some detail.
@AuaerAuraer
@AuaerAuraer A year ago
Looks like Intel is entering the ray tracing competition.
@uku4171
@uku4171 A year ago
Intel is doing pretty well with that stuff. Excited about them entering the GPU market.
@jackrdye
@jackrdye A year ago
Just shows that even if compute power isn't compounding at the same rate, new techniques are compounding the outcomes
@soupborsh8707
@soupborsh8707 A year ago
I heard something about 100x path tracing performance on GNU/Linux on Intel graphics cards.
@JimBob1937
@JimBob1937 A year ago
I think that was the speed-up a driver update had on Linux for Intel's dedicated GPUs. The drivers were so bad at initial release that they managed a near 100x improvement. While that is an impressive speed-up... it mostly just says they released it in a pretty poor state to begin with.
@frollard
@frollard A year ago
With nvidia getting all the fanfare the last few years it's really interesting to see intel get some spotlight. This is purely fantastic.
@oculicious
@oculicious A year ago
this gives me hope for photoreal VR in the near future
@azurehydra
@azurehydra A year ago
WHAT A TIME TO BE ALIVE!!!!!!!!!!!!
@davidstar2362
@davidstar2362 A year ago
what a time to be alive!
@lolsadboi3895
@lolsadboi3895 A year ago
We have the "enhance image" from every sci-fi film now!!
@Uhfgood
@Uhfgood A year ago
It's pretty interesting. I know it's AI-assisted, but I noticed in the reference footage there was "texture" to the metal in the subway scene (about 5:43). That being said, couldn't they do something like what's done when generating fake human faces, and have a huge library of sampled textures with which to "build" the textures where the detail was actually lost? As I understand it, an AI samples a lot of human faces and even some CG ones, and then builds the "fake" ones out of that information. Maybe they could do the same with textures in scenes that were lost to the filtering?
@JimBob1937
@JimBob1937 A year ago
That's actually something I've been working on (personal project). It's tricky to have a neural network that holds that much encoded information and still runs in real time. I'd say it's possible to do that now, but not in real time.
@Uhfgood
@Uhfgood A year ago
@@JimBob1937 - Cool, maybe two more papers down the line eh? ;-)
@Uhfgood
@Uhfgood A year ago
@@JimBob1937 - I'm just a layman and not very technical, but I sort of wish someone would use AI to take poor-looking video and clean it up. I like watching movies from the 1930s, and often they're copies of copies: either old VHS transfers, or just the same movie recompressed over and over until it's a blurry mess. Film sources are usually in the hands of private collectors, or even just lost. So we have what's left over, like old TV broadcasts and whatnot. My contention is an AI could scour the internet looking for decent images of the actors, locations, and so on, in order to "rebuild" the poor or lost image. Some images could be built from multiple poor copies of the same film/video, and even frame stacking like they do for super-resolution. In the end we may not need original sources for preservation and restoration of films.
@JimBob1937
@JimBob1937 A year ago
@@Uhfgood , there is some AI video upscaling and deblocking software out there now. I use Topaz, which does a very good job... if the source material is somewhat good. The software is excellent at refining edges and even filling in macro details in some areas. However, it fails to properly reconstruct textures if the source material is too low resolution. In those cases it tends to make things look like sharp blobs, haha, where there is enough detail to outline the shapes but the internal texture remains blurry, creating an unusual look. I have seen some GAN-based AI that tries to construct texture detail, but it still has issues if the textures aren't common; it doesn't know what's correct or not, so it fills in texture detail incorrectly in those areas. So, no silver bullet for lower-quality videos. For 3D scenes, my approach has been per-game: train a NN on textures specifically selected for a certain game. This isn't as generic, but it does keep the network size down, since you avoid having to encode texture understanding for things that may never be encountered in a specific game. Tricky business for sure, but fun to play around with.
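The frame stacking mentioned in this thread can be sketched in a few lines: averaging N aligned, independently noisy copies of the same frame shrinks the noise standard deviation by roughly 1/sqrt(N), and a per-pixel median is more robust when individual copies have gross artifacts in different places. A minimal NumPy sketch (the function name is illustrative, and it assumes frames are already registered):

```python
import numpy as np

def stack_frames(frames, method="mean"):
    """Fuse multiple aligned copies of the same frame.

    Averaging N copies with independent noise reduces the noise std by
    about 1/sqrt(N); a per-pixel median instead survives gross artifacts
    (dropouts, compression blocking) that hit each copy in different
    places. Assumes the copies are already registered/aligned.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    if method == "median":
        return np.median(stack, axis=0)
    return stack.mean(axis=0)
```

Real restoration pipelines add a registration step before fusing, since hand-held or telecined footage is never perfectly aligned, but the noise-reduction math is the same.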
@woodenfigurines
@woodenfigurines A year ago
Will this be a feature of Intel's new graphics cards? Interesting times indeed!
@therealOXOC
@therealOXOC A year ago
I knew why I never changed systems since the 486: I know I can run Crysis on it some day.
@damonm3
@damonm3 A year ago
Was hoping you’d do another RT vid!
@InfinitycgIN
@InfinitycgIN A year ago
What the actual fuk, this is unbelievable. The metro scene was irredeemable, but after the denoise, damn. Would love to see it included in Blender or some app to clean up renders; it would make animation work much faster!
@gawni1612
@gawni1612 A year ago
When Dr. K is happy, so am I.
@ZiggityZeke
@ZiggityZeke A year ago
We are gonna have some pretty looking games soon, huh