PLEASE READ FOR UPDATES & RESPONSES: Thank you so much for your support! 1. As always, watch our videos in 4K (in your streaming settings) to see our comparison details through YouTube compression. 2. Please remember to *subscribe* so we can *socially compete* with leading tech influencers who push poor technology onto everyday consumers. Help us spread REAL data, empowering consumers to push back against studios that blame your sufficient hardware! * RESPONSE TO COMMUNITY QUESTIONS: 1. We've repeatedly seen comments attempting to explain how Nanite works, arguing that quad overdraw isn't relevant. That's the whole point of the video: comparing Nanite (which doesn't use quads) against quad overdraw is the only contextually fair comparison. Additionally, many have claimed that Nanite has a "large but flat and consistent cost". This is utterly false. Nanite can and does suffer from its own form of overdraw (though not quad related). A major issue people are missing involves Virtual Shadow Maps, which are tied to Nanite. Nanite's shadow method not only re-renders your scenes at massive resolutions, but these maps are also re-drawn under basic scenarios typical in games, such as moving the CAMERA's position, shifting the SUN/MOON, or having moving objects or characters spread across your scene. Does that SOUND like good performance to you? News flash... it's not. Even Epic Games admitted VSMs were terrible for Fortnite, but instead of accepting that they weren't fundamentally a good fit, they "bit the bullet" and used them anyway. But they didn't really bite anything... consumers did. 2. To those defending Nanite because it saves on development time: we are fully aware of that. We have stated this constantly in previous videos and comments, and we have also said it is a great thing to work toward. What these people fail to grasp is that Nanite is a FORCED alternative, born of a workflow deficiency in legitimate mesh optimization. 3.
Like we stated in our 'Fake Optimization' video, pro-Nanite users fail to recognize the CONTRADICTION Nanite causes in "visual fidelity". If a technology has such a massive domino effect on performance that you end up needing a blurry, detail-crushing temporal upscaler to fix performance, then you smear away all that detail anyway for a distorted presentation. The same applies if you were to explore CHEAP deferred MSAA options: all the subpixel detail Nanite makes possible, plus VSM's gross use of soft shadow sampling, promotes temporal aliasing and reliance on flawed TAA/SS. 4. The test shown at 3:36 shows a workflow deficiency rather than an implementation issue. Unreal *does support per-instance LOD selection, but the engine defaults to ISMs (Instanced Static Meshes), which don't support per-instance LODs.* UE5's HISMs (Hierarchical Instanced Static Meshes) do, but the developers have not made this as accessible and have not produced a system that combines all these meshes with precomputed asset-separation culling. Before some people complain about "duplicated" assets and increased file size, we encourage viewers to research how Spider-Man PS4's open worlds were managed.
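The per-instance LOD selection discussed above can be sketched in a few lines (a toy Python illustration; the switch distances and function names are hypothetical and not Unreal's actual API): each instance picks its own LOD from its distance, and instances are then re-grouped into one batch per LOD level, which is roughly the thing an ISM, with one LOD for the whole batch, cannot do.

```python
# Toy per-instance LOD selection. Each instance chooses a LOD from its
# own distance, then instances are regrouped into one batch per LOD so
# each batch can still be drawn with instancing.

LOD_DISTANCES = [10.0, 30.0, 90.0]   # hypothetical switch distances

def select_lod(distance):
    for lod, limit in enumerate(LOD_DISTANCES):
        if distance <= limit:
            return lod
    return len(LOD_DISTANCES)        # coarsest LOD beyond the last limit

def batch_by_lod(instance_distances):
    batches = {}
    for i, d in enumerate(instance_distances):
        batches.setdefault(select_lod(d), []).append(i)
    return batches

print(batch_by_lod([5.0, 25.0, 50.0, 200.0]))
# → {0: [0], 1: [1], 2: [2], 3: [3]}
```

A batch-wide LOD (the ISM behavior described in the comment) would instead force every instance in the list onto whichever single LOD the batch picked.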
@QuakeProBro 5 months ago
@@HANAKINskywoker "...to a problem that never NEEDED TO EXIST." is what he said. We would not need DLSS that badly if fundamental optimization methods were pushed by the industry instead of upscalers.
@QuakeProBro 5 months ago
@@HANAKINskywoker Yes it is true, as the games get bigger and bigger, optimization gets harder. But this is why things like the proposed AI tools would really shine. As long as Nanite is like "You can now render a rock with 10 million tris and the performance is (in context of our other new features) better than before, but it really isn't, because the base cost is exponentially higher. But hey, here is the magic fix: Upscalers!" and not "You can now render 10 million rocks with near infinite draw distance, and the base cost is the same or even less than before.", it creates a problem that shouldn't exist. There is much more work that needs to be done and Epic should provide support for those who want a smooth transition, but instead they only really push the features that sound great in marketing. My own project went from 120fps in UE4 to about 40fps in UE5 on a map that only has a fucking cube and floor. Upscalers are definitely not evil and can be super useful, but they just hurt the visuals too much for them being the only answer you get, if the performance is bad.
@V1vil 5 months ago
Comparison details were still noticeable even watching on an Xbox 360 in 720p. :)
@Navhkrin 5 months ago
@@RandoYouTubeAccount Google -> unreal.ShadowCacheInvalidationBehavior
@Navhkrin 5 months ago
@@RandoYouTubeAccount YouTube is removing my links, but simply search "shadow cache invalidation behaviour"
@pyrus2814 5 months ago
DLSS was originally conceived as a means to make real-time raytracing possible. It's sad to see how many games today rely on it for stable frames with mere rasterization.
@gudeandi 5 months ago
tbh 99% of all players (especially in single player) won't see a difference... and that's the point. Get 90% of the result by investing 10%. The last 10% costs you sooo much more and often isn't worth it. imo
@beetheimmortal 4 months ago
I absolutely hate the way modern rendering went. They switched from Forward to Deferred, but then broke anti-aliasing completely, introduced TAA which sucks, then introduced DLSS, which is now MANDATORY in any game to run at a sort-of acceptable framerate. Nothing is optimized, and everything is blurry and low-res. Truly pathetic.
@dawienel1142 4 months ago
@@beetheimmortal Agreed. I can't believe that high-end RTX 40-series GPUs really struggle to run the latest games at acceptable settings and performance, especially compared to 2010-2015 games, which generally looked good and ran well on the hardware of their time. I feel like we are gaining very slight graphical fidelity for way too much cost these days.
@techjunky9863 4 months ago
@@beetheimmortal The only way to play games at proper resolution now is to buy a 4K monitor and render at 4K. Then you get somewhat similar image quality to what we had with forward rendering.
@kyberite 4 months ago
@@griffin5734 DLAA is amazing
@BunkerSquirrel 5 months ago
2d artists: worried about ai taking their jobs 3d artists: worried ai will never be able to perform topology optimization
@とふこ 5 months ago
Me: want a robot to take my job 😂
@GeneralKenobi69420 5 months ago
Who let the furry out of the basement 💀
@dimmArtist 5 months ago
3d artists are worried that hungry 2d artists will become better 3d artists and take their jobs
@mainaccount888 5 months ago
@@dimmArtist said no one ever
@roilo8560 4 months ago
@@GeneralKenobi69420 bro has 69420 in his name in 2024 💀
@SaltHuman 5 months ago
Threat Interactive did not kill himself
@ook_3D 5 months ago
For real, dude presents incredible information without bias. Gotta piss a lot of AAA companies off.
@bublybublybubly 5 months ago
@@ook_3D Epic and some twitter nerds may not be happy. I don't think the game companies care 🤷♀ He might just bring them a different solution for saving money on dev time & optimization. Why would they be mad about someone who is this motivated to give them another option in their corporate oppression toolbox for free?
@168original7 5 months ago
Timmy isn’t that bad lol
@mercai 5 months ago
@@bublybublybubly Cause this someone acts like a raging asshat, makes lots of factually wrong claims, offers no solution and then tries to crowdfund to "fix" something - all from this place of being a literal nobody with zero actual experience or influence? Yeah, it's not maddening, but quite annoying.
@gdog8170 5 months ago
I can't lie, this is not funny. It doesn't mean that because he is revealing info like this he will be targeted; hopefully that never happens.
@iansmith3301 3 months ago
The fact that Epic is telling developers to go ahead and use 200GB 3D models without optimization because Nanite 'can' render them in the engine should raise huge red flags, because artists will think that it's fine not to optimize. Everyone's SSD/HDD is fked as well; 300GB game installs could become the norm.
@SubThoRed 2 months ago
You mean "everyone's SSD" :) It's almost mandatory nowadays; these games refuse to run smoothly on an HDD.
@jwhi419 2 months ago
@@SubThoRed If a character alone is 10GB, then yeah, you'll need an HDD every time you want to switch which game is on the SSD. Games are going to be 200GB or extremely CPU bound, because the CPU gets used to compress and decompress over and over and over. That's a problem because CPUs are not getting much better. We might never have something 4 times as strong as a 9800X3D, meaning 120fps will always be stuck there even when 4K 2000Hz displays exist.
@PauLtus_B a month ago
I think Nanite is really amazing tech. ...but it is essentially an incredibly expensive optimisation technique that's beneficial for horrendously unoptimised scenarios. There's something paradoxical about it: it seems designed to fix a problem that shouldn't exist in the first place.
@kennichdendenn a month ago
Btw: I'd love to have the option to use higher-res textures, like way back with the original Skyrim, where 2K textures were packaged separately as a free DLC. If you had the graphical horsepower, you could run them, but you didn't have to download them if not needed.
@LuckyFortunes-b3q a month ago
It's not that hard to do retopology and then bake normal maps. It may take some time, but it's better than taking up a huge amount of space on the hard drive.
@sgredsch 4 months ago
I'm a mod/game dev who worked with the Source engine, and looking back at how we optimized for performance manually with every mesh, texture, and shader, and seeing how modern studios deliver the worst garbage that runs like a brick, also makes me angry. Now we throw upscaling at games that should run 2x as fast at native resolution to begin with. We are subsidizing sloppy or nonexistent game optimization by overspending on overbuilt hardware that in return gets choked to death by terrible software, the result of cost-cutting measures / laziness / manipulation. Nvidia is really talented at finding non-issues, bloating them up, and then selling a proprietary solution to a problem that shouldn't be one in the first place. They did it with PhysX, tessellation, GameWorks, raytracing, and upscaling. The best partner in crime is of course the engine vendor with the biggest market share: Epic with their Unreal Engine. Fun fact: the overdraw bloat issue goes back to when Nvidia forced tessellation in everyone's face. Nvidia made their GPUs explicitly tolerant of sub-pixel geometry spam (starting with Thermi, I mean Fermi), while GCN couldn't handle that abuse. The Witcher 3's tessellation x64 HairWorks sends its regards, absolutely choking the R9 290X. It's a shame what we have come to.
@e.s.r5809 4 months ago
I've gotten tired of seeing "if it's slow, your hardware isn't good enough" for graphics on par with decade-old releases, struggling on the same tech that ran those games like butter. You shouldn't *need* to spend half a year's rent on a gaming PC to compensate for memory leaks and poor optimisation. The only winners are studio execs and tech shareholders. It's convenient to call your customers poor instead of giving your developers the time and resources to ship viable products.
@googIesux 4 months ago
Underrated comment. This has been the elephant in the room for so long
@T61APL89 4 months ago
And yet people still buy the games in record numbers; this is what capitalism rewards: throwing shit at the wall and accepting the shit-smeared remnants.
@Online-j8e 4 months ago
"starting with thermi" lol, that's funny
@cptairwolf 4 months ago
You're not wrong that too many studios are taking the easiest way out and skipping any sort of optimization, but let's not blame new technology for that. I'd take well-optimized Nanite structures or micro-polygon tech over LODs any day. LODs are time-consuming to create, massively increase storage requirements, and just plain look ugly. I'm not sad to see them get phased out.
@annekedebruyn7797 2 months ago
Not supporting instancing is horrifying to hear. We do that even for traditional renders.
@michaelzomsuv3631 a month ago
Yes, and instancing is literally 2010 technology. To not have it in 2025 is the embodiment of incompetence.
@sebbbi2 5 months ago
Nanite's software raster solves quad overdraw. The problem is that the software raster doesn't have HiZ culling. Nanite must lean purely on cluster culling, and their clusters are over 100 triangles each. This results in significant overdraw to the V-buffer with kitbashed content (such as their own demos). But the V-buffer is just a 64-bit triangle+instance ID, so overdraw doesn't mean shading the pixel many times. While the V-buffer is fast to write, it's slow to resolve. Each pixel shader invocation needs to load the triangle and run code equivalent to a full vertex shader 3 times. The material resolve pass also needs to calculate analytic derivatives, and material binning has complexities (which manifest as potential performance cliffs). It's definitely possible to beat Nanite with the traditional pipeline if your content doesn't suffer much from overdraw or quad-efficiency issues and you have good batching techniques for everything you render. However, it's worth noting that GPU-driven rendering doesn't mandate a V-buffer, SW rasterizer, or deferred material system like Nanite does. Those techniques have advantages, but they have big performance implications too. When I was working at Ubisoft (almost 10 years ago) we shipped several games with GPU-driven rendering (and virtual shadow mapping): Assassin's Creed Unity with massive crowds in big city streets, Rainbow Six Siege with fully destructible environments, etc. These techniques were already usable on last-gen consoles (1.8 TFLOP/s GPU). Nanite is quite heavy in comparison, but they are targeting single-pixel triangles. We weren't. I am glad that we are having this conversation. Also, mesh shaders are a perfect fit for a GPU-driven render pipeline. AFAIK Nanite is using mesh shaders (primitive shaders) on consoles, at least, unless they use SW raster today for big triangles too. It's been a long time since I last analyzed Nanite (UE5 preview).
Back then their PC version was using non-indexed geometry for big triangles, which is slow.
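The kind of overdraw described above can be modeled with a toy Python sketch (assuming nothing about Nanite's real data layout): a visibility buffer stores only a depth plus a packed 64-bit triangle+instance ID per pixel, so "overdraw" here means redundant V-buffer tests and writes, not redundant shading.

```python
# Toy V-buffer: each pixel holds (depth, packed 64-bit triangle+instance ID).
# Without HiZ-style early rejection, every triangle that covers a pixel
# must at least touch (test) it, even if a nearer surface already won.

def rasterize(triangles, width, height):
    INF = float("inf")
    vbuffer = [(INF, None)] * (width * height)
    touches = 0                                  # per-pixel V-buffer tests
    for tri_id, instance_id, pixels in triangles:
        packed = (instance_id << 32) | tri_id    # packed visibility ID
        for x, y, depth in pixels:
            idx = y * width + x
            touches += 1
            if depth < vbuffer[idx][0]:          # depth test, closest survives
                vbuffer[idx] = (depth, packed)
    covered = sum(1 for d, _ in vbuffer if d != INF)
    return vbuffer, touches, covered

# Two overlapping "kitbashed" layers covering the same 2x2 region:
far_layer  = (0, 0, [(0, 0, 5.0), (1, 0, 5.0), (0, 1, 5.0), (1, 1, 5.0)])
near_layer = (1, 1, [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 1.0), (1, 1, 1.0)])
_, touches, covered = rasterize([far_layer, near_layer], 2, 2)
print(touches, covered)  # → 8 4  (8 touches for 4 visible pixels: 2x overdraw)
```

With per-pixel HiZ, the occluded layer could be rejected before touching the buffer when drawn second; cluster-granularity culling cannot reject it here, because both layers genuinely overlap the same pixels.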
@user-bl1lh1xv1s 5 months ago
thanks for the insight
@tubaeseries5705 5 months ago
The issue is that quad overdraw is not a big deal: modern GPUs are rarely limited by the number of triangles they output; it's almost always shaders. And Nanite adds a lot of additional work to the shader pipeline, which is already occupied as hell. For standard graphics with reasonable triangle counts, Nanite just doesn't make any sense: it offers better fidelity than standard methods, but performance is not what it can offer.
@minotaursgamezone 5 months ago
I am confused 💀💀💀
@torginus 5 months ago
Not an Unreal expert, but from what I know of graphics, quad rasterization is unavoidable, since you need the derivatives for the pixel shader varyings used for things like texture sampling. Honestly, it might make sense to move beyond triangles to things like implicit surface rendering (think drawing that NURBS stuff directly) for what Nanite tries to accomplish.
@tubaeseries5705 5 months ago
@@torginus Rendering NURBS and other non-primitive types always comes down to rendering primitives anyway; CAD software has always processed NURBS into triangle meshes using various methods that produce a lot of overhead. GPUs are not capable of efficiently rendering anything other than primitives; we would need a new hardware standard to render them, and that's not really reasonable.
@shamkraffl6050 2 months ago
I wouldn't be surprised if NVIDIA paid devs not to optimize games so that they can sell their GPUs.
@S0up3rD0up3r99 a month ago
NVIDIA's biggest customer is in deep shit for fraud.
@jrodd13 a month ago
I feel like devs are just overworked, underpaid, and rushed nowadays.
@rakedos9057 a month ago
It has been the case on all sides (not just NVIDIA) for decades. There is nothing new here.
@deamooz9810 a month ago
I like that theory lol
@metabolic2057 a month ago
That's what's been happening 😮
@unrealcreation07 5 months ago
10-year Unreal developer here. I've never really been into deep low-level rendering stuff, but I just discovered your channel, and I'm glad your videos answered some long-standing questions I had, like "why is this always ugly and/or blurry, no matter the options I select?? (or I drop to 20fps)". Over time, I have developed a kind of 6th sense for which checkbox will visually destroy my game, or which option I should uncheck to "fix" some horrible glitches... but I still don't know why most of the time. In many cases, I just resign myself and have to choose which scenario I prefer being ugly, as I can never get a nice result in all situations. And it's kind of frustrating to have to constantly choose between very imperfect solutions or workarounds. I really hope you'll make standards change!
@fleaspoon 3 months ago
You can also learn what those checkboxes actually do and solve the issues yourself for your specific needs.
@papermartin879 a month ago
How are you a 10-year Unreal dev doing low-level rendering stuff, yet you couldn't figure that out yourself?
@vikan3842 a month ago
@@papermartin879 "10 years Unreal developer here. I'VE NEVER really been into deep low-level rendering stuff" Read it again, smart guy
@JesterFlemming a month ago
That's kind of a joke, that someone with over 10 years in an engine has no fcking idea how that stuff actually works.
@DemsW a month ago
@@JesterFlemming Engine dev covers way more than low level rendering, let's not act like douches because someone admits they don't know everything.
@doltBmB 5 months ago
Key insight every optimizer should know: draw calls are not expensive, context-switching is. The more resources that a draw call shares with adjacent draw calls the cheaper it is to switch to it. If all draw calls share the same resources it's free(!). Don't worry about draw calls, worry about the order of the draw calls.
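The ordering point above can be illustrated with a toy Python sketch (resource names hypothetical): model each draw call as the tuple of resources it binds, count how many bindings change between consecutive draws, and compare an arbitrary submission order against a state-sorted one.

```python
# Each draw call binds (shader, material, vertex_buffer). A "context
# switch" is charged whenever a bound resource differs from the previous
# draw's binding. Sorting draws by state key makes most switches free.

def count_switches(draws):
    switches = 0
    prev = None
    for draw in draws:
        if prev is not None:
            # one switch per resource slot that actually changes
            switches += sum(a != b for a, b in zip(prev, draw))
        prev = draw
    return switches

draws = [
    ("shaderA", "matRock",  "vbRock"),
    ("shaderB", "matGlass", "vbWindow"),
    ("shaderA", "matRock",  "vbRock"),
    ("shaderB", "matGlass", "vbWindow"),
]
print(count_switches(draws))          # → 9  (every draw changes all 3 slots)
print(count_switches(sorted(draws)))  # → 3  (identical draws become adjacent)
```

Same four draw calls either way; only the order changed. Real renderers do this by packing the state into a sort key and ordering the frame's draws by it before submission.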
@h0bby23 4 months ago
There is still the issue of being CPU-bound when sending many draw calls with few polygons each, but otherwise, on modern hardware, yes.
@BlueBeam10 4 months ago
So what you're saying is, if I spawn 10,000 of the same chair, I don't even need to have them instanced or merged into one mesh, because the 10k draw calls will equate to one? I don't know, man, I tend to disbelieve that...
@doltBmB 4 months ago
@@BlueBeam10 Theoretically, if each chair can be rendered in a single draw call, I guess, which would be a very simple chair. Instancing is good for more complex renders.
@BlueBeam10 4 months ago
@@doltBmB But I thought draw calls would happen for each non-instanced mesh regardless of complexity, right? Since that chair isn't instanced, why would the engine assume the same draw call can apply to those other meshes?
@doltBmB 4 months ago
@@BlueBeam10 The engine doesn't assume anything; most engines are designed with zero regard for these facts. It is up to the graphics programmer to batch their draw calls appropriately. Most engines implement some static batching at best, which is an absolutely ancient way to do batching, requires some complicated preprocessing, and eats bandwidth and memory.
@Bitshift1125 3 months ago
9:20 This dithering is so, so common nowadays and it looks TERRIBLE! Hair especially just looks like a noisy mess in most new games, and there is no way to turn your settings up enough to make them look good due to dithering. Then everyone just tells you that TAA and upscaling fix it. I don't want either of those technologies active, because they ruin the final image no matter what you do. It drives me nuts that people say "Oh, there's a visual issue? Turn on a bigger one to fix it"
@typingreallyfast 29 days ago
The most horrible thing is when it's TAA or no AA. Drives me up the wall. I remember MSAA being far superior to TAA and FXAA, but apparently it's no longer a thing, because games these days are made with a deferred rendering pipeline (vs. a forward rendering pipeline) and framegen technology supposedly makes it "obsolete". For what it's worth, I think framegen is part of the problem, or a symptom of the larger problem in the gaming industry. But I'll leave it for now.
@Bitshift1125 29 days ago
@@typingreallyfast I disagree, kind of. I think the worst (and most common) thing is when you can't even turn TAA off in the first place. I'd much rather have no AA than TAA, because TAA is just that bad. It's not that hard for them to just give you something like SMAA, or even the awful FXAA. I do agree with you on frame generation. It's almost always a visual downgrade, and now that it exists, games perform even worse, to the extent that the frame generation artifacts become highly noticeable. It's just awful.
@hungryhedgehog4201 5 months ago
So if I understand correctly: Nanite results in a performance gain if you just drop in high poly 3D sculpts without any touchups, but results in a performance loss if you put in models designed with the industry standard and use industry standard workflow regarding performance optimization?
@AndyE7 5 months ago
I swear the point of Nanite was to let artists and level designers just focus on high-quality assets and scenes, because Nanite would do the optimisation for them. It was to remove the need for them to think about optimisation, LODs, how much the console can render in a scene, etc., because it would handle all of that, allowing them to focus on the task at hand. In theory you could even develop with Nanite and then do proper optimisations afterwards.
@hungryhedgehog4201 5 months ago
@@AndyE7 tbf it seems to do that, but it means you move from the industry standard to a new approach that works with no other engine's pipeline. That binds you to the Unreal Engine 5 ecosystem, which obviously benefits Epic. That's why they push it so hard.
@Noubers 5 months ago
The industry standard is significantly more labor-intensive and restrictive. It's not a conspiracy to lock people into Epic; it's just a better workflow overall that other engines should adopt because it is better.
@Armameteus 4 months ago
@@Noubers Except it's not because that then off-loads the rendering overhead to the end-user. To render anything in nanite more quickly than through a traditional rendering pipeline would require the end-user to have the hardware necessary to accommodate it. This is simply unacceptable and shouldn't be the case. The end-user should, within reason, be able to expect their product (like a game) to be able to function on their hardware so long as it's within decent generational tolerances because the developers should have put in the effort to accommodate the end-user. That's part of the selling-point of video games. It's _their job_ to make a product worth our purchase. It would be like if Epic created an entirely new type of internal combustion engine that was incompatible with every single vehicle on earth, but was marketed as "easier" to build. Epic then forces factories they have contracts with to start manufacturing this new engine type because it's "easier" for _them_ to produce it - but it's still incompatible with all vehicles, everywhere. This then means the car manufacturers need to completely upend and rebuild their entire manufacturing process to accommodate this new engine design, which costs them a ton of money. The cost of this transition then gets off-loaded onto the regular customer - you and me - that's trying to buy a new car, because the cost of manufacturing these new cars is far higher now due to the alien design of their engine, which was the fault of Epic forcing this new engine on all of their manufacturers. And, even if you end up buying it, it will still run _worse_ than a comparable model car with a traditional engine design; it cost you more and you got a worse product out of it. Epic is forcing nanite as the default rendering pipeline going forward, meaning *you can't opt-out of it!* As a developer, this means the overhead of rendering your game falls to the end-user. 
As a result, your game can only be played by users with the absolute latest, bleeding-edge machines, because nanite only _works_ on those machines, due to its insane overhead. This instantly cuts your potential playerbase to a fraction of its original potential because the cost of purchasing a machine capable of rendering your game is going to be astronomical, not just immediately, but _exponentially_ into the future (as games attempt to render more and more complexity through nanite, chasing the dragon of "photo realism"). The only parties benefiting from this are Epic (for obvious reasons) and large development companies that have _contracts_ with Epic (for the same reasons). But small studios or indie devs? You're screwed; optimising in nanite is currently impossible and, as an indie dev, you probably don't have the hardware necessary to even render your _own game._ And the end-user? You're _double-screwed;_ you have to front the cost of the hardware to run the game _and_ the exponentially-increased cost of developing that game that anyone outside of Epic's sphere of partners had to eat in developing their game! As a result, both games _and_ the hardware needed to render them will cost more for the end-user to purchase (unless you're best buddies with the executives at Epic). Nanite is a complete sham and a waste of effort and resources, built upon self-imposed problems that didn't need to exist, with shoddy "solutions" to those problems, which then create _new_ problems without even fixing the _old problems_ to begin with! It's just Epic's way of pushing out anyone that doesn't have contracts with them, practically requiring corporate nepotism in order to operate within their market. It exclusively benefits them and their friends and they know it. InB4: "StOp BeInG PoOr!!1" because that's the _only_ argument against this nonsense.
@TheReferrer72 4 months ago
@@Armameteus History is going to prove you dead wrong, and it's a pity so many of you devs just don't get that hardware always catches up. Artists and designers should not be making LODs; that's a kludge. They should not be combining meshes; that's a kludge too, along with baked lighting and whatever other tricks have to be done when algorithms can do the work. Artists and designers should be focusing on making their games look and play well.
@MondoMurderface 4 months ago
Nanite isn't and shouldn't be a game development tool. It is for movie and TV production. Unreal should be honest about this.
@Rev0verDrive 4 months ago
Been saying this since day one of release, with testing.
@marcinnawrocki1437 4 months ago
Most of the new "marketing buzzword friendly" stuff in Unreal is not for games. If you as a game dev want the average Steam PC to run your game, you will not use those new flashy systems.
@Ruleta_23 3 months ago
Games are using Nanite. Don't talk if you don't know the topic; what you say is absurd anyway.
@0osk 3 months ago
@@Ruleta_23 They didn't say it wasn't being used in games.
@Rev0verDrive 3 months ago
@@Ruleta_23 Every game I've seen released with it runs like shit on high-end systems. You have to use DLSS/FSR to get 60-70 FPS.
@sideswipebl 5 months ago
No wonder fidelity development has seemed so slow since around 2016. It's getting harder and harder to tell how old games are just by looking, because we already figured out idealized hyper-realism around 2010 and have just been floundering since.
@MrGamelover23 5 months ago
Yeah, imagine the optimization of back then combined with ray tracing. It might actually be playable then.
@metacob 5 months ago
GPU performance is still rising exponentially, but I literally can't tell a game made today from one made 5 years ago. As a kid my first console was a SNES, my second one an N64. That was THE generational leap. The closest to that experience we got in the last decade was VR, but that still wasn't quite the same. To be honest though, it's fine. I've heard the word "realism" a few too many times in my life. Now it's time for gameplay and style.
@Fearzzy 5 months ago
@@metacob If you want to see the potential of today's games, just look at the Bodycam game; you couldn't have made that 5 years ago. But I see your point: RDR2 is still the most beautiful game I've played (other than the faces etc.), but that's due more to its style and attention to detail than to "raw graphics".
@arkgaharandan5881 5 months ago
@@metacob Well, I was playing Just Cause 3 recently. The lighting has some ambient occlusion and reflections, so it looks bad by modern standards; you can see the foliage LODs popping in in real time, and unless you are running at 4K, the anti-aliasing it has (SMAA 2x) is not good enough to hide the countless jaggies. I'd say a better comparison is with 2018 games. Also, the 4060 is barely better than the 3060 with more RAM; the low-to-mid range needs to improve a lot.
@slyseal2091 5 months ago
@@arkgaharandan5881 Just Cause 3 is 9 years old. "5 years ago" is 2019, buddy.
@chrisjohnson7255 a month ago
This material is so hard to understand and in depth, I love it… can’t wait to keep learning!
@gandev5285 5 months ago
This may or may not be true, but I actually believe Unreal Engine's quad-overdraw view mode has been broken for a long time (at least beyond 4.21). In a Robo Recall talk, they discuss the impact of AA on quad overdraw and show that enabling MSAA makes your quad overdraw significantly worse. Now if you run the exact same test, quad overdraw IMPROVES significantly. So unless Epic magically optimized AA to produce less overdraw, the overdraw view mode is busted. I tested the same scenes in 5.2 and 4.21, and the overdraw view was much worse in 4.21, with it actually showing some red from opaque overdraw (all settings the same). I'm not even sure opaque overdraw can get beyond green now. I would suspect the overdraw you show from extremely high-poly meshes should actually be significantly worse, mostly red or white.
@tubaeseries5705 5 months ago
MSAA by nature causes more overdraw, because it takes multiple samples for each pixel, with the amount varying by setting. So in a case where only 2 of a quad's 4 pixels are occupied (50% waste), with MSAA it could be, for example, only 6 samples out of 16 occupied (over 60% waste), etc.
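The quad waste under discussion (leaving aside the MSAA wrinkle) can be shown with a toy model in Python (a sketch, not how any real GPU counts invocations): pixel shaders run on 2x2 quads, so every quad a triangle touches costs 4 invocations even if the triangle covers only some of those pixels.

```python
# Toy quad-efficiency model: pixel shaders execute in 2x2 quads, so a
# quad a triangle only partially covers still launches 4 invocations.
# Waste = invocations that contribute no visible pixel (helper lanes).

def quad_shading_cost(covered_pixels):
    """covered_pixels: set of (x, y) pixels the triangle actually covers."""
    quads = {(x // 2, y // 2) for (x, y) in covered_pixels}
    invocations = 4 * len(quads)        # the whole quad shades regardless
    waste = invocations - len(covered_pixels)
    return invocations, waste

# A thin sliver: a 1-pixel-tall horizontal strip, 8 pixels long.
sliver = {(x, 0) for x in range(8)}
print(quad_shading_cost(sliver))  # → (16, 8): 16 invocations, half wasted
```

This is why thin, sub-pixel-scale triangles are the pathological case for hardware quad rasterization, and why Nanite's software raster (one visibility write per pixel, no quads) sidesteps this particular waste.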
@futuremapper_ 5 months ago
I assume Epic cheats the values when MSAA is enabled, as MSAA causes overdraw by its nature.
@JorgetePanete 5 months ago
at least*
@neattricks7678 5 months ago
It is true. Unreal sucks and everything about it is fucked
@FourtyFifth 5 months ago
@@neattricks7678 Sure it does buddy
@florianschmoldt8659 5 months ago
The current state of "next-gen visuals" vs fidelity is indeed questionable. Many effects are lower-res, stochastically rendered with low sample counts, dithered, and upscaled. Compromises on top of compromises, with frame generation in between. Good enough at 4K and 60fps, but if your hardware can't handle it, you'll have to live with a blurry, pixelated, smeary mess. My guess is that Nvidia is fine with it. As much as I like Lumen & Nanite in theory, I'm not willing to pay the price. To be fair, it isn't the worst thing to have next-gen effects available, but there is a huge disconnect between what gamers expect a game to look like, based on trailers, and how it feels to play at 1080p and 20fps. JFZfilms defines himself as a filmmaker and isn't great at communicating that he and his 4090 don't care much about realtime visuals or optimization. Tons of path tracing vs Lumen videos, and a confused game-dev audience when they accidentally learn that both went through the render queue with maxed settings.
@snark567 5 months ago
Gamers confuse realism and high-fidelity visuals with good graphics. Meanwhile, a lot of devs use realism and graphical advancements as a crutch because they lack the imagination to make a game that looks good without these features.
@florianschmoldt8659 5 months ago
@@snark567 I'm a graphic artist myself, and as much as I love a good art direction, realism definitely has its place. But I get the point; it for sure shouldn't be the only definition of next-gen visuals. It became much easier to gather free photoscanned models than to create your own in an interesting style. Devs rely on Nanite over optimized assets, and even games with mostly static light sources use Lumen over "good old" lightmaps, even if those could look just as good and make the difference between 120 and 30fps. And Nvidia is like "Here are some new problems... how about a 4090 to solve them?"
@zealobiron (8 days ago)
Love when someone is genuinely passionate about their work. Holds a level of integrity when it comes to what they consider a finished product.
@ozonecandle (1 month ago)
Commenting on all your vids. You're explaining so clearly, and so robustly, what all of us have been thinking/feeling for a while now.
@Al.j.Vasquez (15 days ago)
You explained it perfectly: they don't want to fix the problem, they want to sell you the remedy. Nvidia needs DLSS to be the best upscaler, so if there are issues that look more apparent with other AA solutions, that's better for DLSS. I dislike it when developers are so high on their horse that they won't listen to someone calling out a problem with their products.
@bits360wastaken (5 months ago)
10:49, AI isn't some magic silver bullet; an actually good LOD algorithm is needed.
@michaelbuckers (5 months ago)
He's talking about a hypothetical AI-based LODder with presumably better performance than algorithmic LODders. Which is a fair guess, considering that autoencoding is the bread and butter of generative AI. I hazard a guess that you could adapt a conventional image-generation AI to inpaint an original mesh with a lower-poly, color-coded mesh from various angles, and use those pictures to reconstruct a LOD mesh.
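For contrast with the hypothetical AI LODder, the "algorithmic LODders" being compared against can be surprisingly small. Below is a toy sketch of vertex-clustering decimation (in the spirit of Rossignac-Borrel simplification), one of the classic automatic LOD techniques; the function names and grid-based merge rule are illustrative, not any specific tool's API.

```python
# Toy vertex-clustering decimation: snap every vertex to a coarse grid
# cell, merge vertices that share a cell, and drop triangles that
# collapse to fewer than 3 distinct cells.

def decimate(vertices, triangles, cell_size):
    # Map a vertex position to its grid cell.
    def cell(v):
        return tuple(int(c // cell_size) for c in v)

    # One representative vertex per occupied cell (here: first seen;
    # a real tool would average the cluster or minimize quadric error).
    cell_to_index = {}
    new_vertices = []
    remap = []
    for v in vertices:
        key = cell(v)
        if key not in cell_to_index:
            cell_to_index[key] = len(new_vertices)
            new_vertices.append(v)
        remap.append(cell_to_index[key])

    # Keep only triangles whose corners land in 3 distinct cells.
    new_triangles = []
    for a, b, c in triangles:
        ra, rb, rc = remap[a], remap[b], remap[c]
        if len({ra, rb, rc}) == 3:
            new_triangles.append((ra, rb, rc))
    return new_vertices, new_triangles
```

With a fine grid most triangles survive; coarsening the grid collapses more and more of the mesh, which is exactly the LOD chain an offline tool would bake out.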
@yeahbOOOIIÌIIII (4 months ago)
What he is suggesting is amazing. I have to jump through hoops to get programs like reality capture (which is amazing) to simplify photogrammetry meshes in a smart fashion. They often destroy detail. This is an example where machine learning could shine, making micro-optimizations to the LOD that lead to better quality at higher performance. It's a great idea.
@MarcABrown-tt1fp (3 months ago)
@@yeahbOOOIIÌIIII Of course the meshes would still need to be hand made. Generative AI doesn't seem to work well with constraints like quad topology imposed on it.
@bionic_batman (3 months ago)
AI (machine learning) is just a way to approximate an algorithm you don't know. In this particular case it would be better than Nanite because it does not need to be baked into the engine and can be replaced with manual or algorithmically produced LODs when needed.
@jcm2606 (2 months ago)
It's just that, though, an idea. He doesn't actually provide any information on how _exactly_ a neural network could be trained to do this, he's basically just saying "just throw AI at it!" and pointing to RTX Remix using AI to convert flat albedo to material data, which is a different problem to solve (and we already have traditional solutions for this, though they're pretty bad). Just throwing an arbitrary neural network at a problem isn't good enough to deride Nanite and claim that an alternative exists, especially if you don't provide anything to back that claim up.
@Stratos1988 (2 months ago)
Awesome video! I had a difficult time understanding the details as I'm just a gamer, but it's good to know that for once most of the internet is correct about something. This "thing" being poor performance for no real visual benefit.
@GugureSux (5 months ago)
This explains both why I generally despise the modern UE games' looks (so much noise and blur) and why UE5 games especially run like ass, even on decent HW. And since so many devs seem to use lazy upscalers as their "optimization" trick, things only get worse visually.
@kanta32100 (5 months ago)
No visible LOD pop in is nice though.
@bennyboiii1196 (5 months ago)
The blurriness is due to TAA tbf, which is a scourge. This is why I use Godot lmao
@ThreatInteractive (5 months ago)
@@kanta32100 Not true, you will still get pop, and LODs have had various ways of reducing pop visibility (just not in UE5) that don't depend on flawed TAA.
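One long-standing non-temporal trick for hiding LOD switches (my example; the reply doesn't name a specific method) is a dithered crossfade: during the transition, each pixel samples either the old or the new LOD according to an ordered-dither threshold, so the swap is spread over a short fade instead of a single-frame pop, with no TAA involved. A toy sketch:

```python
# Ordered-dither LOD crossfade: during a transition, pixel (x, y) shows
# LOD B once the fade amount passes that pixel's Bayer threshold,
# otherwise it still shows LOD A. No temporal accumulation required.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def lod_for_pixel(x, y, fade):
    """fade in [0, 1]: 0 = fully LOD A, 1 = fully LOD B."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return "B" if fade > threshold else "A"

def coverage(fade, size=16):
    """Fraction of a size*size screen currently showing LOD B."""
    pixels = [lod_for_pixel(x, y, fade) for y in range(size) for x in range(size)]
    return pixels.count("B") / len(pixels)
```

At fade = 0.5, about half the pixels sample LOD B in a fixed screen-door pattern; ramping fade over a dozen frames makes the switch nearly invisible without any temporal filter.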
@PabloB888 (5 months ago)
@@bennyboiii1196 The TAA image looks blurry, but a good sharpening mask can help a lot. I use ReShade CAS and Luma filters. I had been using this method for the last couple of years on my old GTX 1080. Now however I bought the RTX 4080S and can get much better image quality by using DLSS Balanced and DLDSR 2.25x (80% smoothness) at the same time. I get no performance penalty, and the image quality destroys native TAA. What's more, if the DLSS implementation is very good I can use DLSS Ultra Performance and still get a sharper image compared to native TAA, and the framerate is 2x better at that point.
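For the curious, the CAS being mentioned (AMD's contrast-adaptive sharpening, available as a ReShade shader) adapts its sharpening strength to local contrast so it doesn't over-ring already-sharp edges. The single-channel sketch below is a loose approximation of that idea, not AMD's exact FidelityFX math:

```python
# Loose grayscale approximation of contrast-adaptive sharpening: build
# a cross-shaped sharpen kernel whose negative lobe scales with the
# headroom of the local min/max range, so flat regions sharpen more and
# high-contrast edges sharpen less (limiting halos). Values in [0, 1].

def cas_pixel(img, x, y, sharpness=0.5):
    c = img[y][x]
    n, s = img[y - 1][x], img[y + 1][x]
    w, e = img[y][x - 1], img[y][x + 1]

    lo = min(c, n, s, w, e)
    hi = max(c, n, s, w, e)

    # Headroom-based amplification, roughly in the spirit of CAS.
    amp = max(0.0, min(lo, 1.0 - hi) / max(hi, 1e-6)) ** 0.5

    # Peak negative weight between -1/8 and -1/5 depending on sharpness.
    w_neg = amp * (-1.0 / (8.0 - 3.0 * sharpness))
    total = 1.0 + 4.0 * w_neg
    out = (c + (n + s + w + e) * w_neg) / total
    return min(1.0, max(0.0, out))
```

A flat region passes through unchanged (the kernel normalizes back to identity), while a bright pixel surrounded by darker neighbors gets pushed up, which is the "sharpening mask" effect described in the comment.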
@cdmpants (5 months ago)
@@bennyboiii1196 You use godot because you don't want to use TAA? You realize that TAA is optional?
@railerswim (21 days ago)
What devs don't understand is that in complaining about consumers being too informed, they are telling us to spend hundreds to thousands of dollars to be able to play their games in a decent fashion. If the industry is going to force RT, they have to optimize. Have to. Or they will lose sales simply because people can't play. Some devs have said this is the same thing that happened when a discrete GPU became a requirement. I highly disagree with this assessment. Why? Because RT has a negative impact on performance in exchange for making lighting "easier" for devs, so that they don't have to bake it.
@Kinos141 (1 month ago)
I like this channel because they give concrete reasons why Unreal is making games worse, other than "UE5 bad." Most others have no idea what they are talking about when it comes to optimization.
@Dante02d12 (1 month ago)
Even when people don't have the technical skills to properly explain their feelings, it's still valuable feedback. Just like you don't need to know how to cook to say when something you're served tastes like shit.
@Igivearatsass7 (1 month ago)
@@Dante02d12 Yeah, but it is way more valuable when you can pinpoint what went wrong.
@L1qu1d_5h4d0w (2 months ago)
Thank you so much for pointing out these obvious flaws of UE5. I am not a programmer/dev, but I knew from the get-go that UE5 is a fail for consumers. UE5 is the sole reason why the 24GB of VRAM on my 3090 Ti makes any sense nowadays, and the average is what, 6-8GB? Games look sterile, and all the so-called high-quality assets look awful because upscaling and anti-aliasing are disastrous in UE5, or at least in what the industry chose to utilize. Why even bother with all of this when the average PC can't even run 1080p at high settings in such games... catering to the top 5% of PC users and not even delivering in that regard. My rig is suffering from these games lol... a 3090 Ti, 13600K and 32GB of RAM ain't enough for AAA games at max settings at 1440p 165Hz, which is the "standard" for "high end" to this day. I also quite frankly believe that UE5 enables laziness in devs and takes away uniqueness unless lots of resources are poured into that subject. Hence again, games look sterile and similar when made in UE5 without experience with the engine, and I would also add passion to the equation. Non-consumer-oriented management doesn't help the industry nor the consumers, to add salt to the wound. Many studios are shutting down, but the industry just doesn't seem to understand the whys... We want well-optimized games before anything else nowadays. Gamers are getting ever more tired of paying full price for games we consider not finished, many of which won't be finished before the end of their lifespan. Especially "older gamers", who still remember the times when CDs were a thing and updates/patches were pretty much non-existent, which forced devs to really polish their games before putting 'em up on the shelves, despise this behaviour, and the younger ones are catching up in not tolerating it. It also doesn't help that AMD can't keep up with all of this "change" graphics-wise, and Nvidia is screwing over consumers. Like, if AMD can run frame generation on my 3090 Ti, Nvidia should be able to as well, but they'd rather pay-lock it by making it exclusive to the next gen (40 series), which again got more expensive... Soon we will need 4k+ PCs to run games at max settings... ridiculous considering how it used to be and how consoles are (although even they get more and more expensive, so... yeah). I think I speak for many gamers out there, and hopefully this input from me might help devs grasp how we see this topic.
@ThiagoVieira91 (5 months ago)
Acquiring the same level of rage this guy has for bad optimization, but for putting effort into my SE career, I can single handedly lead us into the the singularity. MORE! MORE! MORE!
@JorgetePanete (5 months ago)
the the
@DarkSession6208 (5 months ago)
I have 100 other topics around Unreal Engine that make me rage about misinformation just like he rages about Nanite. The Nanite topic I posted about MULTIPLE times on forums and Reddit, to warn people how not to use it and to show how it should be done. I have been researching this topic since version 5.0.2. Nobody cared. Like I said, there are 100 other similar topics (Blueprints, general functions, movement, prediction, etc.) which are simply explained wrong by Epic themselves and then accepted by users. If you google "Does culling really not work with nanite foliage?", the first comment is mine; I have been posting there for almost 3 YEARS and repeating myself, with people not listening.
@IBMboy (5 months ago)
I think Nanite is innovative technology, but it shouldn't replace the old style of rendering for videogames, as it's still experimental in my opinion.
@matiasinostroza1843 (5 months ago)
Yeah, and it's not made for optimization but for the sake of looks, how beautiful it looks. There is almost no transition between LODs: in normal games, when you get far enough away you can see the mesh change to a lower-poly model, but with Nanite this is almost imperceptible, so in that sense it's "better". That's why it's being used for "realistic graphics".
@Cloroqx (5 months ago)
Baseless opinions by non-developers. "Old style of rendering videogames". What is your take on the economy's sharp downturn due to a lack of rate cuts by the FED, opinionated one?
@vendetta1429 (5 months ago)
@@Cloroqx You're coming off as the opinionated one, if you didn't know.
@pizzaman11 (5 months ago)
Also has lower memory footprint and you don’t need to worry about creating lods. Which is perfect with large games with a large amount of unique models.
@inkoalawetrust (5 months ago)
@@Cloroqx What are you even talking about.
@salatwurzel-4388 (5 months ago)
Remember the times when {new technology} was very resource intensive and dropped your fps, but after a relatively short time it was OK because the hardware became 3x faster in that time? Good times :D
@histhoryk2648 (2 months ago)
Can it run Crysis meme
@XCanG (5 months ago)
I'm not a game developer, but there are some questions that I have to ask, and some personal opinion as a gamer. 1. The main point I remember for introducing Nanite at first was that you can make models faster by not spending time on creating LODs. So for some of your example comparisons: how much time do you need to create the same model with Nanite vs. the same model with LODs? Considering that all fresh games being made with a bias toward realism require a lot of detail, I think at scale it will make a difference. 2. Maybe because I'm not in this field I don't hear about it, but I really haven't heard of anyone who would render models with billions of triangles. The closest example was Minecraft clones rewritten in C/Rust etc. that tried to achieve large render distances. Other rendering scenes take hours of render time per frame, so they are not realtime, but I really haven't heard anyone else showing examples like that. I can imagine that you are a senior game developer with at least 6 years of experience, but how many game devs also know about these optimizations? I can't imagine it's even more than 25%. 3. Let's assume that you can optimize better by hand: does UE5 allow you to handle optimization yourself? I imagine UE4 does, but what about UE5? If the answer is yes, then this problem is more an argument about better defaults, where you have manual optimization at the cost of your time vs. Nanite auto-optimization with faster creation time. 4. As a player I want to point out that many recent titles come with bad optimization, so much that gamers are starting to hate their studios. My personal struggle was with Cities: Skylines 2. They use the Unity engine, where they definitely have all these abilities for optimization, but somehow they released a lagging p* of s*, where some air conditioner on a building that you see from flying height (far away) had 14k triangles and some pedestrians had 30k triangles.
Considering that they can't optimize properly, I believe that if incompetent devs like them just used Nanite, the game wouldn't be this laggy. For me it's realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only very few individuals can optimize properly.
@XCanG (5 months ago)
@@miskliy1 I see.
@MiniGui98 (5 months ago)
"For me it's realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only very few individuals can optimize properly" Fair point, although the "manual" technique has been used since basically the beginning of fully 3D games and has been mastered not by "a few individuals" but by the whole industry at some point. Throwing everything into the Nanite pipeline just because it's simpler and faster is a false excuse once you know that the manual, traditional techniques will get you better performance with little to no difference in visual fidelity. Even better, the extra performance you free up with manual LODs lets you minimize the need for FSR or DLSS, both of which indisputably degrade image fidelity as well. Performance tests with FSR/DLSS enabled are basically a lie about the performance of the raw game. Native resolution with more traditional anti-aliasing techniques should still be the norm, as it always has been. Big games relying on upscaling are just a sign of badly optimized games, it's as simple as that.
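For readers unfamiliar with what "manual LODs" means at runtime, the core mechanism is tiny: pick a precomputed mesh level per instance by distance (real engines usually use projected screen-space error instead, but distance shows the idea). The thresholds below are made-up illustration values, not anything from a shipping engine:

```python
# Classic per-instance LOD pick: choose the first LOD whose switch
# distance the camera hasn't exceeded. Threshold values are arbitrary
# illustration numbers; engines derive them from projected screen-space
# error rather than raw world distance.

LOD_SWITCH_DISTANCES = [10.0, 30.0, 80.0]  # LOD0 up to 10m, LOD1 to 30m, ...

def select_lod(distance, switch_distances=LOD_SWITCH_DISTANCES):
    for lod, limit in enumerate(switch_distances):
        if distance <= limit:
            return lod
    return len(switch_distances)  # coarsest LOD beyond the last threshold
```

The whole runtime cost is a handful of comparisons per instance, which is why hand-baked LOD chains are so cheap compared to per-frame cluster selection.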
@Megalomaniakaal (5 months ago)
@@MiniGui98 The majority of games out there were rather unoptimized messes that Nvidia and AMD optimized at the 'game ready' driver-release level in the DX11 and older days.
@Jiffy360 (5 months ago)
About that final point about CS:2, that was exactly their plan. They were going to use Unity’s competitor to Nanite, but then Unity delayed or cancelled it, so they were stuck creating a brand new LOD system from scratch at the last moment.
@doltBmB (5 months ago)
There have been automatic LOD tools for a very long while now.
@GraveUypo (5 months ago)
This is the type of content I wish people would watch, but they don't. I'm tired of hearing that Nanite is magic, that DLSS is "better than native" and whatnot. When I made a scene in Unreal 5, it ran like trash on my PC with a 6700 XT at the time, and there was almost nothing in the scene, even at 50% render scale. It was absurdly heavy for what it was.
@SuperXzm (5 months ago)
Oh no! Think about shareholders! Please support corporations play pretend!
@chengong388 (5 months ago)
DLSS is better than native because it includes anti-aliasing and native does not…
@FlippingLuna (5 months ago)
Just return to forward shading, or even the mobile forward renderer. It's kind of good, and a pity, that Epic put solid optimization into the mobile forward renderer but still hasn't ported it to a desktop forward path.
@Jakiyyyyy (5 months ago)
DLSS is INDEED sometimes better than native because it uses its own anti-aliasing rather than the default, poorly implemented TAA that blurs games. If DLSS makes games sharper and better, I would take it over forced TAA. 🤷🏻♀️
@dingickso4098 (5 months ago)
DLSS is better than native, they think, because they tend to compare it to the blurfest TAA that is sometimes force enabled.
@coderman2754 (1 month ago)
This channel is so awesome, it rekindles my interest in video game graphics. I used to think there weren't many problems left to solve in the space, but I did feel a bit sad when all the studios announced their move to UE (especially CDPR). Now I know it's worse than I thought...
@dwupac (1 month ago)
This channel is much needed. It's in our best interest as gamers to support it.
@sandybeach95 (3 months ago)
I absolutely love these videos. God willing the awareness you've shed on this matter will engender positive change in the industry
@hectatusbreakfastus6106 (25 days ago)
Thank you so much for addressing this. I've really been feeling like modern games just haven't been performing the way games in the past have. I have so many great memories in older games on potato hardware and modern gaming for the past 5 years at least has felt very stale.
@hoyteternal (5 months ago)
Nanite is specifically made to render non-optimized scenes with ridiculous polycounts and microgeometry, where each pixel might render a separate triangle, and models that don't have LODs. It has a huge initial overhead but way higher scalability in this scenario. Actually Nanite is not a unique technology; it is an implementation of a technique called the visibility buffer.
@chriszuko (5 months ago)
As much as I think this is a good topic and the results seem genuinely well done, the solution proposed here, to have spent the time and money on an AI solution for LOD creation and a much faster/seamless workflow for optimization... is something companies have been trying to do for years, so I personally don't think it's a good way to move forward. To me, it seems possible that Nanite + Lumen can evolve to become much friendlier to meshes produced in an already optimized way and rely far less on lower resolutions. I DO think they are probably pushing it a bit too early still, and I also agree that DLSS, TSR, and any other upscaling/reconstruction technology is just not good to lean on. But to your point, companies continue to do so because they can say "look, everything runs faster!", so it is hard to envision a future without needing it. A side note: I don't think your particular attitude is warranted and, in my opinion, it makes it much harder to have a constructive conversation about how to move forward. I'm not perfect in this regard either, but as this gains traction it's probably good to dial that back. In the first post of the megathread, for example, you show your results and then feel the need to say "Worse without LOD FPS LMAO!". This type of stuff is all over these threads, which to me just looks bad and makes it harder to take you and this topic seriously.
@wumi2419 (5 months ago)
It ends up being a problem of "who needs to spend money", with choices being developers (on optimizing) or customers (on hardware that can run unoptimized software), and it's obvious which choice saves money for the company. Granted, it might result in lower sales due to higher hardware requirements, but that is a lot harder to prove than lowered dev costs.
@Viscte (4 months ago)
This is a pretty down-to-earth take.
@exoqqen (3 months ago)
I agree on the attitude. I'm amazed by the depth of knowledge in these videos, but the aggressiveness makes me keep a distance and be wary. It's not pleasant or professional.
@Viscte (3 months ago)
@@exoqqen the guy comes across as a bitter asshole for some reason. It's important that people discuss topics like this, but the negativity is definitely off-putting
@jarrodkober (2 months ago)
Yeah, he's coming in a little hot and I'm curious if everything is 100% factual. Some of his points seem credible and compelling. But he started to lose me when he said DLSS is overhyped and only compared it to TAA. You can compare native 4K with no AA to DLSS and it's pretty good. We can all agree TAA is too soft, but he creates a false dichotomy by quickly making that point (however grounded it may be) and then moving on.
@hoyteternal (5 months ago)
Nanite is an implementation of a rendering technique called the visibility buffer. This technique was specifically created to overcome quad-utilisation issues: once triangle density shrinks toward a single pixel, the better quad utilisation of visibility (Nanite) rendering greatly outweighs the additional cost of interpolating vertex attributes and analytically calculating partial derivatives. You can search for an article called "Visibility Buffer Rendering with Material Graphs"; it is a good read on the Filmic Worlds website, with lots of testing and illustrations.
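A drastically simplified sketch of the visibility-buffer idea described above (toy Python, no real rasterization): the first pass stores only the nearest triangle ID per pixel, and the material pass then runs exactly once per covered pixel, so overdrawn triangles never reach the expensive shading stage. The data layout here is illustrative, not Nanite's actual format.

```python
# Toy visibility buffer. Pass 1 writes just the closest triangle's ID
# per pixel (a tiny "G-buffer"); pass 2 evaluates the material exactly
# once per pixel, regardless of how much overdraw the rasterizer saw.

def raster_pass(triangles):
    """triangles: list of (tri_id, depth, pixel_set). Keeps nearest ID."""
    vis = {}  # (x, y) -> (depth, tri_id)
    for tri_id, depth, pixels in triangles:
        for p in pixels:
            if p not in vis or depth < vis[p][0]:
                vis[p] = (depth, tri_id)
    return vis

def shading_pass(vis, materials):
    """One material evaluation per covered pixel."""
    return {p: materials[tri_id] for p, (_, tri_id) in vis.items()}
```

With two overlapping triangles, the rasterizer touches some pixels twice, but the shading pass still runs once per covered pixel, which is the bandwidth/shading win the visibility-buffer papers describe.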
@ThreatInteractive (5 months ago)
In the original paper on visibility buffers, the main focus was on bandwidth-related performance. Visibility buffers might not be completely out of consideration. We've spoken with some graphics programmers who have stated their implementation can speed up opaque objects, but we are still in the process of exploring the options here. While Nanite is a solution to real issues, it's a poor solution regardless, because the cons outweigh the pros. We've seen the paper you mentioned, and we also showed other papers by Filmic Worlds in our first video (which discussed more issues with Nanite).
@devonjuvinall5409 (4 months ago)
Great watch! I would also recommend Embark's example-based texture synthesis video. They get into photogrammetry and their testing of the software for 3D props. It's just rocks using displacement maps, but I think the whole video could be relevant to this situation. I don't know enough to be confident though haha, still learning.
@2feetsandamushroom (2 months ago)
I had no clue about the performance side, just how it handles LODs for meshes on static objects and reduces noticeable pop-in. From my experience it reduces it greatly when the game is built around it. Like Hellblade 2: barely any pop-in in sight for the entire game, which was a breath of fresh air to be honest, since LOD shifts are one of the more annoying visual blemishes in games. The removal of tessellation is pure madness though; why remove functions that we know work and have optimized for years? Like, give us options!
@mrtransistor6173 (3 months ago)
It's good this is finally being pointed out. However, this shows how most common developers, especially indie ones, don't really understand how rendering actually works. Most can't even create games outside of an engine. Games, as well as software in general, are inexcusably inefficient. Everything is wasted: memory, CPU, GPU, etc. Sure, not every little thing can be optimized, but when common optimization practices are neglected, it all adds up in the end.
@597das (3 months ago)
Great video! If anyone is incentivized and capable of making free topology-optimization tooling, it's AMD. I had no idea why AAA graphics optimization had become so stagnant until today!
@zudethofficial9579 (1 month ago)
as someone who is currently in school for 3d art and unreal generalism this is fucking terrifying
@Gurem (5 months ago)
I remember Epic saying this would not replace traditional methods but should be used in tandem with them, as it is a way to increase productivity. Tbh this video taught me more about optimization than any optimization video and didn't waste my time. As an indie it did more to reinforce my desire to use Nanite while also teaching me more hands-on techniques that, while requiring more work, may result in better performance, which I can use when I have the free time. I thank you for demystifying the BS, as I really couldn't understand the tech from those other YT videos; they were purely surface-level, quickly churned content.
@ProjectFight (5 months ago)
Okay... a few things. This video was way faster than I could follow. But it was sooo interesting. I love seeing the technical aspect of how games are properly optimized, what counts and what doesn't. And I will ALWAYS support those who are willing to go the extra mile to properly research these things. Sooo, new sub :)
@NeverIsALongTime (5 months ago)
XD I know Kevin in real life; in person he speaks way faster! He's chill in his videos. He is brilliant, probably a bit on the spectrum (in a good way). He is even more passionate and intense in real life. I have read some of his screenplay for his upcoming game; it is terrifying & edge-of-your-seat exciting!
@motesquid (2 months ago)
Don't know what to say really. This was great, want to look at game and engine optimization stuff more now. Really just commenting to support
@yeah7267 (5 months ago)
Finally someone talking about this... I was sick of people claiming Nanite boosts performance when in reality I was losing frames even in the most basic scenes
@cheater00 (5 months ago)
just another case of tencent timmy being confidently wrong. we've all known unreal engine was way worse than quake 3 back in the day as well, it was ancient by comparison and the performance was abysmal. the fact people kept spinning a yarn that unreal engine was somehow competitive with id tech was always such a funny thing to see.
@HankBaxter (5 months ago)
And it seems nothing's changed.
@cheater00 (5 months ago)
@@randoguy7488 precisely. and in those 25 years, NOTHING has changed. this should make you think.
@ClowdyHowdy (5 months ago)
To be fair, I don't think Nanite is designed at all to boost performance in the most simple scenes, so anybody saying that is either oversimplifying or just wrong. Instead, it was designed to provide an easier pipeline for artists and designers to build complex level designs without an equivalent increase in performance loss or in time spent developing LODs. It's designed to be a development performance boost. If you don't need Nanite for your games then there's no reason to use it, but I think it's weird to pretend the issue is that it doesn't do something it wasn't designed to do.
@_Romulodovale (5 months ago)
All the latest engine features came to make developers' lives easier while affecting performance negatively. It's not a bad thing; in the future those features will be well polished and will help us developers without hurting performance. All good tech takes time to be polished.
@cenkercanbulut3069 (5 months ago)
Thanks for the video! I appreciate the effort you put into comparing Nanite with traditional optimization methods. However, the full potential of Nanite might not be fully apparent in a test with just a few meshes. Nanite shines when dealing with large-scale environments that have millions of polygons, where it can dynamically optimize the scene in real time. The true strength of Nanite is its ability to manage massive amounts of detail efficiently, which might be less visible in smaller, controlled setups. It would be interesting to see how both approaches perform in a more complex scene with more assets, where Nanite’s real-time optimization could show its advantages. Looking forward to more in-depth comparisons in the future!
@Neosin1 (3 months ago)
This is exactly what I was telling people, but no one believed me! I taught 3D modelling at an Australian university 15 years ago, and back then we modelled everything by hand, which meant our models were very low-poly and optimised! Nowadays, devs just scan in hundred-million-polygon models with one click, call it a day, and expect the software to do all the optimisation! This is why UE5 games run like garbage!
@henryg6764 (2 months ago)
the widespread performance issues reported with STALKER 2 (which uses both nanite and lumen) support these claims. great presentation.
@outlander2341 (2 days ago)
I love that you mention Decima; that's why Horizon 2 looks so good. I feel like many people don't even play videogames, they just watch and comment on YouTube videos, because the first thing that came to mind when playing Horizon 2 after all that UE5 hype was... UE5 what? Horizon is still the best looking game out there. Whatever tech they are using is clearly the best.
@GreyDeathVaccine (3 months ago)
Wow. Even though I'm not a game developer nor a 3D artist (I'm a backend e-commerce dev), you presented a complex problem in simple enough terms for me to understand, and it really intrigued me. You have a talent for imparting knowledge. I've subscribed :-)
@astrea555 (5 months ago)
Incredibly in-depth video once again, really inspiring. I'm only dabbling in game dev, but this explains so much about what we've started to see in recent games!
@TenaciousDilos (5 months ago)
I'm not disputing what is shown here, but I've had cases where Nanite in UE 5.1.1 increased framerates 4x (low 20s fps to mid 90s fps) in my projects, and the only difference was turning Nanite on. "Well, there's more to it than that!" of course there is. But Nanite took hours of optimization work and turned it into enabling Nanite on a few meshes.
@forasago (5 months ago)
@@jcdentonunatco This is not true at all. When they showed the first demo with the pseudo Tomb Raider scene, they explicitly said (paraphrasing) that having that many high-poly meshes in a scene would be impossible without Nanite. And they definitely weren't talking about "what if you just stopped using LODs" as the thing Nanite is competing with. Nanite was supposed to raise the limits for polycount in scenes, full stop. That amounts to a claim of improved performance.
@MrSofazocker (5 months ago)
@@forasago "You can now render way denser meshes" does not equal "so less dense meshes now render faster"
@DeltaNovum (5 months ago)
Turn off virtual shadow maps and redo your test.
@lokosstratos7192 (5 months ago)
@@DeltaNovum YEAH!
@user-bl1lh1xv1s (5 months ago)
@forasago "unlimited polycount" in scenes does not equate to a claim of improved performance over non-meshlet approaches. It merely states that poly count is not an issue... Which certainly has implications for asset pipelines, where full-quality meshes _could_ be brought into the scene without prior processing (except, of course, the nanite preprocessing step).
@YoutubePizzer (5 months ago)
Here’s the question though: is this going to save significant enough resources in a development team to allow them to achieve more with less. If it’s not “optimal”, it’s fine, as long as the amount of dev time it saves is worth it. Ultimately, in an ideal world, we want the technology to improve so development can become easier
@wumi2419 (5 months ago)
Cost of development doesn't disappear however, it's just transferred to customers. So they will have to either pay for a better GPU, run the game at lower quality, or will just "bounce off" and not purchase the game at all.
@blarghblargh (5 months ago)
@wumi2419 they could just make the game look worse instead. The development style being done now is already burning something, and that something is developer talent. And there isn't an infinite pool of that. You also ignored that GPU performance increases over time. So it may be a rough tradeoff now, but the tech will continue to get better hardware over time.
@insentia8424 (5 months ago)
I don't understand why you ask about whether this allows development teams to be more efficient (achieve more with less), then you believe that in an ideal world tech would make things easier. Something becoming more efficient, and something becoming easier are not the same thing. In fact, an increase in efficiency often times causes things to become harder or more complex to do.
@snark567 (5 months ago)
You can always just go for stylized visuals instead of hyper complex geometry and realism. This performance min maxing only becomes an issue of concern when your game is too detailed and complex but you still want it to run on a toaster.
@seeibe (5 months ago)
The problem is when newer games look and perform worse than older games while still costing the same. If that saved developer time will be passed on as price reduction to the user, sure. But that's not what happens for most games.
@average_ms-dos_enjoyer (5 months ago)
It would be interesting to see similar breakdowns/criticisms of the other big 3D engines approaches to visual optimizations (Unity, Crytek, maybe even Godot at this point)
@petar-boshnakov (1 month ago)
Your videos are awesome and only confirm what the eyes can see. My theory is that it is some sort of truce between hardware devs and software devs that benefits them all... Otherwise HW sales wouldn't go as planned and SW dev wouldn't be as rapid...
@AdamTechTips27 (5 months ago)
I've been feeling and saying the same thing for years, I just didn't have the concrete testing yet. This video proves it all. Thank you.
@MrSongib (3 months ago)
Thank god that people like you guys exist. I've wondered about the same stuff before, but as we can see, everyone just praises Nanite without testing it whatsoever. xd
@nazar7368 (5 months ago)
Epic Games' marketing killed the fifth version of the engine. Back in 2018 there were no video cards that supported mesh shaders. They took advantage of this and added the illusion of support for old video cards: Lumen and Nanite run on old video cards, but only as software implementations, which cannot reach the level of hardware mesh shaders and ray tracing. This led to real problems with the engine core, namely that DX12 and Vulkan do not work correctly and have low efficiency due to the old code that was written for DX11. I'm not even counting the problems with the engine's layer rendering and compositing algorithms, because everyone already sees the eternal blurring of the picture and annoying sharpening. This will not be fixed until they add hardware support for mesh shaders and the new version of ray tracing (added literally by the DXR 1.2 library). For example, Nvidia holds many conferences to show their improved ReSTIR algorithms in Unreal Engine 5, and gives access to them, but the developers firmly go their own way and keep feeding the gaming industry the same old anthem.
@JoseDiaz-he1nr (5 months ago)
idk man Black Myth Wukong seems like a huge success and it was made in UE5
@manmansgotmans (5 months ago)
The push to mesh shaders could have been nvidia's work, as their 2018 gpus supported mesh shaders while amd's gpus did not. And nvidia is known for putting money where it hurts their competitor
@Ronaldo-se3ff (5 months ago)
@@JoseDiaz-he1nr its performance is horrendous, and they reimplemented a lot of tech in-house just so it could run at least acceptably.
@topraktunca1829 (5 months ago)
@@JoseDiaz-he1nr yeah, they did a lot of hard internal optimization and in the end it runs at an "ehh, well enough I guess" level. Not to mention the companies that didn't bother with such optimizations, like Immortals of Aveum or Remnant 2. Those games are unplayable on anything other than a 4090 or 4080, and even then they still have problems.
@nazar7368 (5 months ago)
@@JoseDiaz-he1nr A really great success. With a 2010-era picture, 30fps on a 4090, and a blurry image.
@gameworkerty (5 months ago)
I would kill for an overdraw view in Unity like the one Unreal has, especially because Unity has a ton of robust mesh-instancing support via plugins that Unreal doesn't.
@piroman665 (5 months ago)
It's normal that a new generation of rendering techniques introduces large overheads. It's a good tradeoff, as it streamlines development and enables more dynamic games. The major issue is that developers ignore good practices and then blame the software; part of that is also Epic's fault, as they try to sell it as a magic solution for unlimited triangles, which it is not. Nanite might be slower, but it enables scenarios that would be impossible or very hard to achieve with static lighting. Sure, you can render dense meshes with traditional methods, but imagine lightmapping them, or making large levels with realistic lighting and dynamic scenarios.
@SydGhosh (5 months ago)
Yeah... In terms of all this dude's videos, I find myself technically agreeing, but I don't think he sees the big picture.
@thegreendude2086 (5 months ago)
@@piroman665 I believe unreal was made to be somewhat artist friendly, systems you can work with even if you do not have a deep technical understanding. Hitting the "enable nanite" checkbox so you have to worry less about polycount seems to fit that idea.
@lau6438 (5 months ago)
@@SydGhosh The bigger picture being deprecating traditional LODs that perform better, to implement a half-baked solution? Wow, what a nice picture.
@DagobertX2 (5 months ago)
@@lau6438 They will make it better with time, just like back in the game-dev stone age they made LODs perform better. Imagine, there was a time when you had to optimize triangle strips for best performance 💀
@Bdhdh-p7h (4 months ago)
@@lau6438 LODs are costly and time-consuming for studios to develop.
@invertexyz (4 months ago)
Another part of the issue may be that Nanite wasn't designed exclusively around the mesh-shader system on newer generations of GPUs. It uses general compute to support older hardware, and the system is designed around having to do it that way. There is a fast path using mesh shaders for polygons larger than a pixel, but it sounds like that was just thrown in for a little extra performance boost rather than being an entirely separate path for the whole system to run through.
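The hybrid split that comment describes can be sketched in miniature. Everything here is an assumption for illustration (the function name, the one-pixel threshold, the idea of a simple per-triangle decision); the real engine's heuristic is more involved and works on clusters, not single triangles:

```python
# Toy model of a hybrid rasterizer: pixel-sized or smaller triangles
# go to a software (compute) rasterizer, larger ones take the
# hardware path. Threshold and names are illustrative assumptions.
def pick_raster_path(tri_screen_area_px: float, threshold_px: float = 1.0) -> str:
    """Return which rasterizer a triangle of the given screen-space
    area (in pixels) would be sent to in this toy model."""
    return "software" if tri_screen_area_px <= threshold_px else "hardware"

# A dense, Nanite-style mesh produces mostly sub-pixel triangles...
dense_mesh = [0.2, 0.5, 0.9, 0.3]
# ...while a conventional low-poly mesh produces large ones.
low_poly_mesh = [40.0, 120.0, 8.5]

print([pick_raster_path(a) for a in dense_mesh])     # all "software"
print([pick_raster_path(a) for a in low_poly_mesh])  # all "hardware"
```

The point of the split is that hardware rasterizers are tuned for triangles covering many pixels; once a triangle shrinks below a pixel, a compute-shader rasterizer can win.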
@the_wobbly_witch (1 month ago)
don't mind me i'm just here to learn how to optimize my own game.
@FVMods (4 months ago)
Minimalist, quality, straight-to-the-point narration, tidy video editing with relevant information, engaging content. Great channel so far!
@TheSocialGamer (3 months ago)
Very well delivered and an interesting topic, great job. The video was well laid out. Subbed!
@Lil.Yahmeaner (5 months ago)
This is exactly how I’ve felt about graphics for years now. Especially at 1080p, all the ghosting, dithering, and shimmering of UE5 gets unbearable at times and everyone is using this engine. It’s like you have to play at 4k to mitigate inherent flaws of the engine but that’s so demanding you have to scale it back down which makes no sense. Especially bad when you’re trying to play at 165hz and most developers are still aiming at barely 30-60fps, now exacerbated by dlss/framegen. Just like all AI, garbage data in, garbage data out, games are too unique and unpredictable to be creating 30+ frames out of thin air. Love the videos, very informative and well spoken. Keep up the good fight!
@nikch1 (5 months ago)
> "all the ghosting, dithering, and shimmering" Instant uninstall from me. Bad experience.
@Khazar321 (5 months ago)
For years now? Yeah I don't think that's UE5 mate. Maybe stop hopping on the misinformation train here...
@s1ndrome117 (5 months ago)
@@Khazar321 you'd understand if you ever used Unreal yourself
@MrSofazocker (5 months ago)
The notion that these defaults cannot be changed, and that it's somehow a systemic issue of Unreal, is insane to me. If you use something you don't even understand, leave everything on default, and just press a "make game" button, what kind of optimization do you expect?
@Khazar321 (5 months ago)
@@s1ndrome117 I did, and I have also seen the train wrecks that lazy devs caused with UE4 and engines from around the same time. Horrible stutters, bad AA options, blurry and grey (unless you fix it yourself with HDR/ReShade), shader issues, etc. So yeah, tell me how horrible UE5 is and what lazy devs can do wrong with it. I have seen it all in 30 years of gaming.
@QuakeProBro (5 months ago)
Great video, you've talked about many things that really bother me when working with UE5: extreme ghosting, noise, flickering, a very blurry image, and, compared to Unreal Engine 4, much much worse performance with practically empty scenes (sometimes up to an 80 fps difference on my 2080 Ti). All this fancy tech, while great for cinematics and film, introduces so many unnecessary problems for games, and Epic seems to simply not care. If they really want us to focus on art instead of optimizing, give us next-gen-worthy, automated optimization tools instead of upscalers and denoisers that destroy the image for a "better" experience. That is only battling the symptoms. And don't get me wrong, I find Lumen and Nanite fascinating, but they just don't keep what was promised (yet). Thanks for talking about this!
@keatonwastaken (5 months ago)
UE4 is still used and is capable, UE5 is more just for people who want fancier looks early on.
@AshrindyAnimates (3 months ago)
THANK YOU, I'VE BEEN ARGUING WITH MY FRIENDS ABOUT THIS FOR MORE THAN A YEAR
@therealvbw (5 months ago)
Glad people are talking about these things. You hear lots of chatter about fancy UE features and optimisations, while games get slower and look no better.
@ThatTechGuy123 (4 months ago)
Thanks for the information. I'll keep these things in mind as I develop.
@legice (5 months ago)
Finally a video that talks about Nanite! Honestly, Nanite saved a project of ours, because we had 100 meshes with 20 mil polys each and Nanite made it work on machines that had no right to be able to run it, but… it is in no way a silver bullet, and it isn't suited to day-to-day use as a quick LOD. As a modeler, there are rules to modeling, and if you do it right you need a day max to optimise a big-ass mesh, which you know how to do, because you made it! "Quick" and dirty modeling exists, with optimisation down the road, but when you are making the prop you KNOW, or at least understand, what to do and how, for it to be the least destructive. Non-destructive modeling exists too, but it brings different problems, such as time, approach, and workflow, and unless the job requires it you don't use it, as it's a different beast altogether. You can model a gun any way you want, but a trash can, a house, something non-hero, non-changing, with measurements set in stone, you do the old-fashioned way. Texture and prop batching is simple, but being good at it is not. I love Lumen, but it is clearly still in the early stages and needs additional work to be optimized for non-Nanite, optimized workflows. I'm just so happy I wasn't the only one going insane about this.
@XCanG (5 months ago)
I have a comment above with my opinion on this, but I don't work in game development, so my knowledge is limited. Since you are a modeler, I have a few questions for you: 1. How long does it take to make a model for Nanite vs. LODs? 2. How many years of experience do you have as a modeler? I ask because pros make stuff way quicker, so the difference between one and the other may vary with experience. 3. How aware are you of the optimizations mentioned in the video? My guess is that he has at least 6 years of experience and is probably already a senior gamedev, but it's hard to imagine new gamedevs having that knowledge. 4. Do you think Nanite is useful right now? Do you think it will be useful in the future (maybe with some polishing and fixes)?
@legice (5 months ago)
@@XCanG Sure, I can answer those. - Nanite is obviously just a click, so nothing to do there really. As I said, when modeling you ALWAYS plan ahead, so when you are doing retopo it's just a matter of how well you preplanned all your steps beforehand. There are rules to modeling, and good practices have been in place for a long time; if you follow them, you'll take a bit longer to make a prop, but the retopo/LODs will take only a fraction of the time. Can't really time this, because it's how you should be modeling regardless, unless you are modeling for visualisation or movies only, where meshes don't need to be optimized. - Professionally, none really, as everything I have done is personal work and game jams, the game-dev industry being a bitch to really get into. There are steps and approaches, but in the end it doesn't matter, as long as you deem it the best approach time-wise, modeling-wise, and within budget; most people share the same workflow, because if you hand your work off to somebody or leave the company, others need to be able to take it and adapt/finish it. - Very little/barely understood anything, because I learned by doing and adapted my workflow based on the information I found while searching for solutions. Honestly, you can basically skip anything he said, because that is all theory, in the same way as how light works in games. As a modeler you don't really need to know exactly how it works, but you get a feeling and a slight understanding of how and why. He goes way more into technical stuff, something tech artists and programmers deal with. As a senior modeler you touch this, but in the end your job is to do other things: modeling well, texture packing, instances, draw calls, modular design... in some areas and studios this gets mixed together, and I 100% guarantee you that most don't know it, even seniors, but they compensate in other areas.
- Nanite is already useful, and it will become more useful, but within limits. The fact is, Nanite is a constantly active process running every frame, whereas LODs are one-and-done. LODs will never go away, but dependence on them will be reduced, as less and less optimization will be needed to make games run well, because computers are getting better. As I said, Nanite straight up saved a project of ours when it was still in alpha/beta, so for trying out how something looks in game, stress testing a rig pre-optimization, or whatever, it has its place, but it should not be overused and it has limitations. You can't use it on a skeletal/rigged model, for example, as it relies on a consistent poly mesh. Take everything I said with a grain of salt. The video explained everything BEAUTIFULLY, and I now understand things I have unknowingly been doing for years; I never really grasped why, but I knew it worked. I learned things my way, studios teach their way, opinions clash, and in the end nobody really knows what they are doing, only how they feel things should be done, and the final result dictates that.
@xenolit3027 (1 month ago)
As the processing power of GPUs improves, the performance cost becomes worth the labor cost that would have to go into optimization. Authoring efficient handmade LODs and doing retopology is still very expensive. I'd say that with Nanite, devs get to work on the aspects of the game that matter most, leaving the magic of optimization to Nanite, even if it comes at a performance cost. That's how the industry improves: balancing labor against automation and performance cost.
@julianolotero6600 (1 month ago)
I think it is important to explore what he refers to as the current standard methods, but I also get his point: if we let all these old methods atrophy, we lose important techniques that still have a place in modern graphics. Threat Interactive is a lot more anti-Nanite than me, but maybe he has to be a contrarian to get the attention the problem deserves.
@B.M.Skyforest (4 months ago)
What many seem to forget is that DLSS and other upscaling things are meant for games to run on slow hardware. And now it's required to be ON if you want to have nice framerate on your top notch PC at ultra settings. It always makes me laugh and also sad at the same time seeing 2010s level of graphics with barely 30 fps on modern machines. We had better looking and faster games back in the day.
@fndrn8 (1 month ago)
I went from a 1080 Ti to a 3090. I was playing Cyberpunk 2077 and thought double the performance would mean I could play in 4K. It did, but barely; it just does a bit better in 4K. Now I know why. I was about to pull the trigger on a 4080 upgrade, but I'm afraid it just wouldn't be worth it if the devs keep doing this crap. You would think the console devs would want these optimized options, but I get it, it just increases the cost. End rant lol
@Dom-zy1qy (5 months ago)
I am so glad I clicked on this video. I'm a newbie when it comes to graphics programming, but I've learned quite a lot just hearing you talk about things. I didn't know shaders are run on quads; I thought shaders were per-pixel. Maybe the reason behind this relates to the architecture of GPUs? Some kind of SIMD action going on?
@jcm2606 (2 months ago)
It's because of how texture sampling with automatic mipmap selection works. To figure out which mip level you need to sample from, you need to know how far apart the sampling points of adjacent pixels on the screen are. Ideally you want as close to a 1:1 mapping between pixels on the screen and texels sampled from the texture, so that moving one pixel across the screen moves as close as possible to one texel across the texture. Since pixels are shaded in quads, this process is basically free. Pixels in a quad execute in lockstep with each other within the same SM/CU on the GPU (alongside the other threads within their thread group, though that's getting a bit into the weeds), so each pixel can access the data of all other pixels within the same quad, assuming they've taken the same branch through a uniform control-flow region (a fancy way of saying they're in the same section of code). This lets the GPU read the texture coordinates of all four pixels in the quad at the same time, without any additional work, and use them to figure out how pixels on the screen map to texels sampled from the texture, so it can calculate the optimal mip level to sample from.
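The mip math those quad derivatives enable can be sketched numerically. This is a simplified model, not any particular GPU's exact implementation: the function name is invented, and the isotropic max-of-derivatives formula is the textbook baseline (real hardware layers refinements such as anisotropic filtering on top):

```python
import math

def mip_level(uv00, uv10, uv01, tex_size):
    """Approximate the mip level a GPU would pick for a 2x2 quad.
    uv00/uv10/uv01: texture coords (u, v in [0,1]) of a pixel, its
    right neighbour, and its lower neighbour within the quad;
    tex_size: texture width/height in texels (square texture)."""
    # Screen-space derivatives: how many texels the sample point moves
    # when stepping one pixel right (ddx) or one pixel down (ddy).
    ddx = ((uv10[0] - uv00[0]) * tex_size, (uv10[1] - uv00[1]) * tex_size)
    ddy = ((uv01[0] - uv00[0]) * tex_size, (uv01[1] - uv00[1]) * tex_size)
    rho = max(math.hypot(*ddx), math.hypot(*ddy))  # texels per pixel
    return max(0.0, math.log2(rho))  # mip 0 == 1:1 texel-to-pixel

# 1:1 mapping on a 256px texture: one pixel step == one texel step.
print(mip_level((0, 0), (1/256, 0), (0, 1/256), 256))  # → 0.0
# Minified 4x (4 texels per pixel step) -> mip 2.
print(mip_level((0, 0), (4/256, 0), (0, 4/256), 256))  # → 2.0
```

Because all four texture coordinates come from the same quad executing in lockstep, the differences above cost the GPU essentially nothing, which is the point the comment makes.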
@DeeOdzta (5 months ago)
Amazing work and research thank you for sharing this, many devs I know have had similar opinions of Nanite.
@Mattheouw (3 months ago)
thanks for informing people on this subject.
@ADRENELINEDUDE (4 months ago)
THANK YOU! Finally a proper, logical, grounded video! Your channel is amazing.
@GonziHere (5 months ago)
Interesting video, but isn't Nanite using its own software rasterizer for the small triangles exactly because of that issue? I'm pretty sure they said so when they started talking about it; it's in their tech talks, etc. This test feels somewhat fishy to me. Why not compare the normal scene instead of a captured frame, for example? That skips part of the work of one example while using the full pipeline of the other...
@AdamKiraly_3d (5 months ago)
I would love to get my hands on that test scene to give it a spin. I've been working with Nanite in AA and AAA settings for almost 3 years now, and while it was an absolute pain to figure out the quirks and the correct workflows, it has been an overall positive for production. In my experience Nanite REALLY struggles with anything using masked materials or any sort of pixel-depth edits. I've also seen perf issues when most meshes were "low poly" in the sense that the triangles are very large on screen; I vaguely remember the SIGGRAPH talk mentioning that larger triangles can rasterise slower because they use a different raster path. Nanite handling its own instancing and draws moves a lot of the cost off the CPU onto the GPU, so the base cost being higher should surprise no one. It is also very VERY resolution dependent: the higher you go in res, the more steeply the cost of Nanite (and VSM, Lumen) grows, on top of the general cost increase of a larger res. I've grown to accept that the only way forward with these new bits of tech is upscaling. I'm not happy as a dev, but as a gamer I couldn't care less and use it everywhere. VSM has similar issues, since for shadows you effectively re-render the Nanite scene, but there are ways to optimise that with CVars, and generally it's been better performing than traditional shadow maps when used with Nanite. There's a caveat there: if you bring foliage into the mix, Nanite can get silly expensive both in view and in shadows. Collision memory has also been a major concern, since for complex collision UE uses the Nanite fallback mesh by default, so in a completely Nanite scene you can end up with more collision memory than static-mesh memory.
I also feel like having a go at Epic for not maintaining "legacy" feature compatibility indefinitely is a bit unfair. Both VSM and Lumen rely on Nanite to render efficiently and were created as a package. Epic decided that is the direction they want to take the engine, an engine that is as complex as it is large; it is almost expected to lose some things along the way. That being said, I have run into many things that I wish they cared enough to fix (static lighting, for example, has a ton of issues that will never be fixed because "just use dynamic lighting"), but at the same time I won't have a go at them for not supporting my very specific use case that doesn't follow their guidance. No part of this tech is perfect and I get the frustration, but it did unlock a level of quality for everyone to use that was only really available in proprietary engines before. Also, do we really think CDPR would drop their own insane engine for shits and giggles if they didn't think it was a better investment to switch to UE5 than to upgrade their own tech? Same with many other AAA companies, and you can bet your ass they took their sweet time evaluating whether it makes sense (not to mention they will inevitably contribute a TON to the engine that will make it back to Main eventually).
@ThreatInteractive (5 months ago)
re: Why not compare the normal scene, instead of captured frame, for example? You can compare yourself because we already analyzed the "normal" scene, same hardware, same resolution: kzbin.info/www/bejne/ipacqYiEqrdgi5I Watch the rest of our videos as everything we speak about is interconnected.
@satibel (5 months ago)
@@SioxerNikita imo "graphics don't matter" is more of a "yeah, realistic graphics are neat, but consistent graphics are better". I'll take cartoony graphics like Kirby's Epic Yarn over Battlefield graphics. Yes, realism looks good, but stylized does too, and a well-stylized game can be way more efficient and age way better, while still looking as good as or better than wannabe-photorealistic graphics. If we're talking about the PS1, a game like Vib-Ribbon still looks good nowadays, and for a more well-known example, Crash Bandicoot. They look miles better than asset-flip-looking franchise games that have realistic graphics but aren't cohesive and have a meh game underneath.
@lycanthoss (5 months ago)
@@satibelRealistic graphics clearly sell. Just look at Black Myth Wukong.
@satibel (5 months ago)
@@lycanthoss they do, but I'd argue that for BMW it's just as much the gameplay and character design; look at Undawn. Graphics will make people look at a game, but I don't think they make it sell by themselves.
@rimuruslimeball (5 months ago)
These videos are amazing, but to be honest a lot of it flies over my head. What should we, as developers (especially indie), do to ensure we're following good optimization practices? A lot of what your videos discuss seems to require an enormous amount of deep technical understanding of GPUs that I don't think many of us can realistically obtain. I'm very interested, but not very sure where to start nor where to go from there. I'm sure I'm not the only one.
@cadmanfox (5 months ago)
I think it is worth learning how it all works, there are lots of free resources you can use
@marcelenderle4904 (5 months ago)
As an indie developer I feel it's very important to know the problems and limitations of these techs and the concepts behind good practices. That doesn't necessarily mean you have to apply them. Nanite, Lumen, DLSS etc. can be very efficient as a cheap solution. If it speeds up your game a lot and gets you to the result you want, then, for me at least, that's what you should aim for. These critiques of Unreal are great for studios and the industry itself.
@Vysair (5 months ago)
I have a diploma in IT, which is actually just CS, and I don't have a clue what this guy is talking about either.
@JorgetePanete (5 months ago)
A lot*
@Vadymaus (5 months ago)
@@anonymousalexander6005. Bullshit. This is just basic graphical rendering terminology.
@SpookySkeleton738 (5 months ago)
You can also reduce draw calls using bindless techniques, like what they did in idTech 7: they're able to draw the entire scene with just a handful of draw calls.
@mitsuhh (5 months ago)
What's a bindless technique?
@SpookySkeleton738 (4 months ago)
@@mitsuhh with Vulkan (and I believe also OpenGL 4.6), you can bind textures to a sparse descriptor array in your shaders that can be modified after pipeline creation, effectively allowing you to swap textures in and out on the fly. You can then put all your material data in a shader storage buffer and use a vertex attribute to index into that storage buffer, which can in turn index into your texture array. It basically means you don't have to bind new textures when rendering meshes that use different textures, as long as they're on the same shader, which can save a TON of command-buffer recording and submission in your GPU pipeline. "Bindless" is technically a misnomer, since there are obviously still samplers being bound to a descriptor, but it's right insofar as you don't have to "rebind" them unless you are loading new textures in.
@mitsuhh (4 months ago)
@@SpookySkeleton738 Cool
@kitsune0689 (3 months ago)
The main problem is that foliage with traditional LODs looks really bad unless it's stylized like BotW or Genshin or Pokémon etc. It's probably not a performance upgrade, but it solves the biggest immersion-killing problem in games with open worlds/big zones: pop-in. If you're traveling at high speed, foliage looks awful, and looking into the far distance looks awful because of billboards. It's probably not the be-all-end-all solution, but for anything that has leaves I'd say it's worth the cost.
@ArtofWEZ (5 months ago)
I see Nanite like Blueprints. Blueprints run slower than pure C++ and Nanite runs slower than traditional meshes, but both of them are a lot more fun to work with than the traditional ways.
@YaaSalty...0_o (2 months ago)
God, I hope you figure this out. The gaming industry deserves so much better; it's been stagnating under greed for like a decade. Honestly, amazing things should be happening in the gaming industry, and the only thing amazing about it is microtransaction profits. Man...
@grzes848909 (2 months ago)
Greed and laziness. It's always those 2.
@depracated (3 months ago)
Really surprised people weren't more skeptical right off the bat. Knew this was gonna fuck the industry from the start.
@zsigmondforianszabo4698 (5 months ago)
I'd rather think of Nanite as a magic wand for those who don't want to deal with mesh optimization and just want consistent performance across the board without manual optimization. This currently hits us heavily, but as soon as technology evolves and everyone has access to modern hardware that can utilize this system, its ease of use and decent performance will overcome these growing pains. About the development pace: a 4050 compared to a 1060 is about a 70% performance uplift over 7 years, roughly 8% per year compounded across hardware and software development, so in 5 years Nanite may work out really well for fast game development and consistent performance. PS: we need to mandate that gamedevs releasing a trailer give performance statistics for the in-game scene and the upscaling used :DD
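As a sanity check on that comment's arithmetic (taking its 70%-over-7-years figure at face value): the compounded annual rate is closer to 8% than a flat 10%, and extending the same trend another 5 years lands at roughly 2.5x a 1060.

```python
# The comment's arithmetic, checked: a 70% uplift over 7 years is
# about 7.9% per year compounded, not a flat 10%.
rate = 1.70 ** (1 / 7)  # compound annual growth factor
print(round((rate - 1) * 100, 1))  # → 7.9 (percent per year)

# Projecting 5 more years at the same rate, relative to a 1060:
print(round(1.70 * rate ** 5, 2))  # → 2.48
```

Whether ~2.5x the performance of a 1060 absorbs Nanite's base cost is exactly the open question the thread is arguing about; the numbers here only pin down the growth rate implied by the comment.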
@gurujoe75 (5 months ago)
I'm not a programmer, but I see that for ten years there has been a wait for a big graphics paradigm shift. Goodbye classic render pipeline as they teach it in school, goodbye rasterization, goodbye classic extremely inflexible polygons, goodbye endless problems with shadows, LODs, UV mapping, etc. etc. UE is the only big multiplatform engine today, and R&D is extremely expensive. You understand what I'm implying here between the lines.
@MyAmazingUsername (5 months ago)
12:20 Finally someone who doesn't gasm over AMD's FSR upscaling. FSR looks like a pretty basic resizer without good detail. Thank you for bringing up these Nanite issues. I had no idea it was so inefficient.
@forasago (5 months ago)
Yes! So sick of people who notice the flaws with nvidia or Intel and then go full cult mode over AMD. Just because a company is smaller than its direct competitors doesn't make it good, be it in quality or in ethics or anything else. This "cheering for the underdog" is embarrassing.
@DimosasQuest (1 month ago)
I work as a VR dev, and performance is our number-one goal, before visual fidelity. It is sometimes shocking what great-looking games you can make, with a lot of effort, that run really well using good traditional methods. These, however, take a lot of time. We've experimented with Lumen and Nanite and quickly realized what a shit show those two are for VR performance; to boot, the visual artefacts they create can actually contribute to VR nausea. Short of creating our own engine, we've moved back to Unity for now, because it is a much more flexible platform to work with.
@pchris (5 months ago)
I think easy, automatic optimizations that are less effective than manual ones still offer some value. When a studio can dedicate fewer resources to technical things like these, the faster it can make games, even if they look slightly worse than they could if it took absolutely full advantage of the hardware. Every other app on your phone being over 100 MB for what is basically a glorified web page shows how dirty-but-easy optimization and faster hardware mostly just enable faster and cheaper development by letting developers be a little sloppy.
@theultimateevil3430 (5 months ago)
It's great in theory, but in practice development is still expensive as hell and the quality of the products is absolute trash. It's the reason we have a volume control in Windows lagging for a whole second before opening. The same stuff that worked fine on Windows 95 lags now. Dumbasses with cheap technology still make bad products for the same price.
@pchris (5 months ago)
@@theultimateevil3430 when you're looking at large products made by massive publicly traded corporations, you should never expect any cost savings to get passed on to the consumer. I'm mostly talking about indies. The cheaper and easier it is to make something, the lower the bar of entry is, and the more you'll see small groups stepping in and competing with the massive, selfish corps.
@KaRuNaRuGa (3 months ago)
Shout-out to the KJ for bringing this to light 😂😂😂 mans getting shit on for showing the truth
@happydappyman (2 months ago)
We've been spoiled by blazing fast hardware to the point that we're now getting games that look worse AND run worse than their predecessors. "It's fine, everyone will be playing on at least a 3070 anyway".
@baronsengir187 (1 month ago)
What games are you all playing 🤣
@kaimaiiti (1 month ago)
3070 doesn't have enough vram to play Indiana Jones on anything above low settings at 1440p 😂
@happydappyman (1 month ago)
@@kaimaiiti yep, there it is. Minimum specs is now a 3070 lol
@kaimaiiti (1 month ago)
@BlackParade01 3060 has more vram than a 3070... that is the problem: kzbin.info/www/bejne/rpPZqaKeiZVmaKc
@LuckyFortunes-b3q (1 month ago)
you don't need many polygons. You tell the 3D modeler to do retopology then bake the normal maps.
@ThreatInteractive (1 month ago)
Yeah, there is a major deficiency in 3D modeling education and in-engine workflows.
@stephaneduhamel7706 (5 months ago)
The point of nanite was never to increase performance compared to a perfecty optimized mesh with LODs. It is made to allow devs to discard LODs for a reasonably low perfomance hit.
@doltBmB (5 months ago)
if losing 60% fps is "reasonably low" to you
@stephaneduhamel7706 (5 months ago)
@@doltBmB it's a lot less than that in most real life use cases.
@doltBmB (5 months ago)
@@stephaneduhamel7706 Yeah it might be as low as 40%, real great
@Harut.V (5 months ago)
In other words, they are shifting the cost from devs to hardware (consumers, console producers)
@ben.pueschel (5 months ago)
@@Harut.V that's how progress works, genius.
@mikeylewark4567 (1 month ago)
This video explains why Fortnite's performance went to shit; I had been trying to figure out why for a long time.
@thediscreteboys3315 (5 months ago)
You are fighting the good fight man. I just wish more people would sit down and learn about this stuff.
@Crimson_Strider (1 month ago)
A bit worried. My favorite FPS game, Squad, is switching to UE5. It's going to be unplayable.