PLEASE READ FOR UPDATES & RESPONSES: Thank you so much for your support! 1. As always, watch our videos in 4K (in your streaming settings) to see our comparison details through KZbin compression. 2. Please remember to *subscribe* so we can *socially compete* with leading tech influencers who push poor technology onto everyday consumers. Help us spread REAL data, empowering consumers to push back against studios that blame your sufficient hardware!

RESPONSE TO COMMUNITY QUESTIONS:
1. We've repeatedly seen comments attempting to explain how Nanite works, arguing that quad overdraw isn't relevant. That's the whole point of the video: comparing Nanite (which doesn't use quads) to overdraw is the only contextually fair comparison. Many have also claimed that Nanite has a "large but flat and consistent cost". This is utterly false. Nanite can and does suffer from its own form of overdraw (though not quad-related). A major issue people are missing involves Virtual Shadow Maps, which are tied to Nanite. Nanite's shadow method not only re-renders your scenes at massive resolutions, but these maps are also re-drawn under basic scenarios typical in games: moving the CAMERA's position, shifting the SUN/MOON, or having moving objects or characters spread across your scene. Does that SOUND like good performance to you? News flash: it's not. Even Epic Games admitted VSMs performed terribly in Fortnite, but instead of accepting that they weren't fundamentally a good fit, they "bit the bullet" and used them anyway. But they didn't really bite anything... consumers did.
2. To those defending Nanite because it saves development time: we are fully aware of that. We have stated this constantly in previous videos and comments, and we have also said it is a great thing to work towards. What these people fail to grasp is that Nanite is a FORCED alternative, born of a workflow deficiency in legitimate mesh optimization.
3. As we stated in our 'Fake Optimization' video, pro-Nanite users fail to recognize the CONTRADICTION Nanite causes in "visual fidelity". If a technology has such a massive domino effect on performance that you end up needing a blurry, detail-crushing temporal upscaler to fix it, then you smear away all that detail anyway for a distorted presentation. And if you were to explore CHEAP deferred MSAA options instead: all the subpixel detail Nanite makes possible, plus VSM's gross use of soft shadow sampling, promotes temporal aliasing and reliance on flawed TAA/SS.
4. The test shown at 3:36 shows a workflow deficiency rather than an implementation issue. Unreal *does* support per-instance LOD selection, but the engine defaults to ISMs (Instanced Static Meshes), which don't support LODs. UE5's HISMs (Hierarchical Instanced Static Meshes) do, but the developers have not made this as accessible, and have not produced a system that combines all these meshes with precomputed asset-separation culling. Before anyone complains about "duplicated" assets and increased file size, we encourage viewers to research how Spider-Man PS4's open worlds were managed.
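The per-instance LOD idea in point 4 can be sketched as a toy model (plain Python, not Unreal's actual API; the distance cutoffs are made-up numbers, and real engines select LODs by projected screen size rather than raw distance). Each instance picks its own LOD from its distance to the camera, which is what HISM-style rendering allows and a single-LOD ISM batch cannot do:

```python
import math

# Hypothetical LOD thresholds: (max_distance, lod_index). Illustrative only --
# real engines use a projected screen-size metric, not raw distance.
LOD_CUTOFFS = [(10.0, 0), (30.0, 1), (90.0, 2)]
LAST_LOD = 3

def select_lod(instance_pos, camera_pos):
    """Pick an LOD index for one instance from its camera distance."""
    dist = math.dist(instance_pos, camera_pos)
    for max_dist, lod in LOD_CUTOFFS:
        if dist <= max_dist:
            return lod
    return LAST_LOD

camera = (0.0, 0.0, 0.0)
instances = [(5.0, 0.0, 0.0), (20.0, 0.0, 0.0), (50.0, 0.0, 0.0), (200.0, 0.0, 0.0)]

# Per-instance selection: each copy gets its own LOD (HISM-like behavior).
per_instance = [select_lod(p, camera) for p in instances]

# Single-LOD batch: the whole batch renders at the detail of the closest
# copy, so distant copies waste triangles (ISM-like behavior).
whole_batch = [min(per_instance)] * len(instances)

print(per_instance)  # [0, 1, 2, 3]
print(whole_batch)   # [0, 0, 0, 0]
```

The gap between the two lists is the wasted geometry the comment is complaining about: distant instances in the single-LOD batch render at LOD 0 instead of LOD 2 or 3.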
@HANAKINskywoker · 3 months ago
6:16 Wdym, non-existent problem? The point is that DLSS doesn't hurt performance as much as playing the game at a higher resolution / using SSAA, yet the quality is still good. Please think more before saying stuff like that lmao
@QuakeProBro · 3 months ago
@@HANAKINskywoker "...to a problem that never NEEDED TO EXIST." is what he said. We would not need DLSS that badly if fundamental optimization methods were pushed by the industry instead of upscalers.
@QuakeProBro · 3 months ago
@@HANAKINskywoker Yes, it is true that as games get bigger and bigger, optimization gets harder. But this is why things like the proposed AI tools would really shine. As long as Nanite is "You can now render a rock with 10 million tris and the performance is (in the context of our other new features) better than before, but it really isn't, because the base cost is exponentially higher. But hey, here is the magic fix: upscalers!" and not "You can now render 10 million rocks with near-infinite draw distance, and the base cost is the same or even less than before.", it creates a problem that shouldn't exist. There is much more work that needs to be done, and Epic should provide support for those who want a smooth transition, but instead they only really push the features that sound great in marketing. My own project went from 120fps in UE4 to about 40fps in UE5 on a map that only has a fucking cube and a floor. Upscalers are definitely not evil and can be super useful, but they hurt the visuals too much to be the only answer you get when performance is bad.
@V1vil · 3 months ago
The comparison details were still noticeable while watching on an Xbox 360 in 720p. :)
@Navhkrin · 3 months ago
@@RandoKZbinAccount Google -> unreal.ShadowCacheInvalidationBehavior
@pyrus2814 · 2 months ago
DLSS was originally conceived as a means to make real-time raytracing possible. It's sad to see how many games today rely on it for stable frames with mere rasterization.
@gudeandi · 2 months ago
Tbh 99% of all players (especially in single player) won't see a difference... and that's the point. Get 90% of the result by investing 10%. The last 10% costs you sooo much more and often isn't worth it. Imo.
@beetheimmortal · 2 months ago
I absolutely hate the way modern rendering went. They switched from Forward to Deferred, but then broke anti-aliasing completely, introduced TAA which sucks, then introduced DLSS, which is now MANDATORY in any game to run at a sort-of acceptable framerate. Nothing is optimized, and everything is blurry and low-res. Truly pathetic.
@griffin5734 · 2 months ago
No no no, completely wrong. DLSS brought the BEST AA in decades: DLAA. Best tech in the world.
@dawienel1142 · 2 months ago
@@beetheimmortal Agreed. Can't believe that high-end RTX 40-series GPUs really struggle to run the latest games at acceptable settings and performance, especially compared to 2010-2015 games, which all generally looked good and ran well on the hardware of their time. I feel like we are at the point of gaining very slight graphical fidelity for way too much cost these days.
@techjunky9863 · 2 months ago
@@beetheimmortal The only way to play games at proper resolution now is to buy a 4K monitor and render at 4K. Then you get somewhat similar image quality to what we had with forward rendering.
@BunkerSquirrel · 2 months ago
2d artists: worried about ai taking their jobs 3d artists: worried ai will never be able to perform topology optimization
@とふこ · 2 months ago
Me: want a robot to take my job 😂
@GeneralKenobi69420 · 2 months ago
Who let the furry out of the basement 💀
@dimmArtist · 2 months ago
3D artists: worried that hungry 2D artists will become better 3D artists and take their jobs
@mainaccount888 · 2 months ago
@@dimmArtist said no one ever
@roilo8560 · 2 months ago
@@GeneralKenobi69420 bro has 69420 in his name in 2024 💀
@SaltHuman · 3 months ago
Threat Interactive did not kill himself
@ook_3D · 3 months ago
For real, dude presents incredible information without bias; that's gotta piss a lot of AAA companies off.
@bublybublybubly · 3 months ago
@@ook_3D Epic and some twitter nerds may not be happy. I don't think the game companies care 🤷♀ He might just bring them a different solution for saving money on dev time & optimization. Why would they be mad at someone who is this motivated to give them another option in their corporate oppression toolbox, for free?
@168original7 · 3 months ago
Timmy isn’t that bad lol
@mercai · 3 months ago
@@bublybublybubly Cause this someone acts like a raging asshat, makes lots of factually wrong claims, offers no solution and then tries to crowdfund to "fix" something - all from this place of being a literal nobody with zero actual experience or influence? Yeah, it's not maddening, but quite annoying.
@gdog8170 · 2 months ago
I can't lie, this is not funny. Him revealing info like this doesn't mean he will be targeted; hopefully that never happens.
@sgredsch · 2 months ago
I'm a mod/game dev that worked with the Source engine. Looking back at how we optimized for performance manually, with every mesh, texture and shader, and seeing how modern studios deliver the worst garbage that runs like a brick, also makes me angry. Now we start throwing upscaling at games that should run 2x as fast at native resolution to begin with. We are subsidising sloppy or non-existent game optimization by overspending on overbuilt hardware that in return gets choked to death by terrible software, a result of cost-cutting measures / laziness / manipulation. Nvidia is really talented at finding non-issues, bloating them up, and then selling a proprietary solution to a problem that shouldn't be one in the first place. They did it with PhysX, tessellation, GameWorks, ray tracing and upscaling. The best partner in crime is of course the engine vendor with the biggest market share: Epic, with their Unreal Engine. Fun fact: the overdraw bloat issue goes back to when Nvidia forced tessellation in everyone's face. Nvidia made their GPUs explicitly tolerant of sub-pixel geometry spam (starting with Thermi, I mean Fermi), while GCN couldn't handle that abuse. The Witcher 3's tessellation x64 HairWorks sends its regards, absolutely choking the R9 290X. It's a shame what we have come to.
@e.s.r5809 · 2 months ago
I've got tired of seeing "if it's slow, your hardware isn't good enough" for graphics on par with decade-old releases, struggling on the same tech that ran those games like butter. You shouldn't *need* to spend half a year's rent on a gaming PC to compensate for memory leaks and poor optimisation. The only winners are studio execs and tech shareholders. It's convenient to call your customers poor instead of giving your developers the time and resources to ship viable products.
@googIesux · 2 months ago
Underrated comment. This has been the elephant in the room for so long
@T61APL89 · 2 months ago
and yet people still buy the games in record numbers, this is what capitalism rewards. throwing shit at the wall and accepting the shit smeared remnants.
@Online-j8e · 2 months ago
"starting with thermi" lol thats funny
@cptairwolf · 2 months ago
You're not wrong that too many studios are taking the easiest way out and skipping any sort of optimization, but let's not blame new technology for that. I'd take well-optimized Nanite structures or micro-polygon tech over LODs any day. LODs are time-consuming to create, massively increase storage requirements, and just plain look ugly. I'm not sad to see them get phased out.
@sebbbi2 · 2 months ago
Nanite's software raster solves quad overdraw. The problem is that the software raster doesn't have HiZ culling. Nanite must lean purely on cluster culling, and their clusters are over 100 triangles each. This results in significant overdraw to the V-buffer with kitbashed content (such as their own demos). But the V-buffer is just a 64-bit triangle+instance ID, so overdraw doesn't mean shading the pixel many times.

While the V-buffer is fast to write, it's slow to resolve. Each pixel shader invocation needs to load the triangle and run code equivalent to a full vertex shader 3 times. The material resolve pass also needs to calculate analytic derivatives, and material binning has complexities (which manifest as potential performance cliffs). It's definitely possible to beat Nanite with the traditional pipeline if your content doesn't suffer much from overdraw or quad-efficiency issues, and you have good batching techniques for everything you render. However, it's worth noting that GPU-driven rendering doesn't mandate a V-buffer, SW rasterizer or deferred material system like Nanite does. Those techniques have advantages, but they have big performance implications too.

When I was working at Ubisoft (almost 10 years ago) we shipped several games with GPU-driven rendering (and virtual shadow mapping): Assassin's Creed Unity with massive crowds in big city streets, Rainbow Six Siege with fully destructible environments, etc. These techniques were already usable on last-gen consoles (1.8 TFLOP/s GPU). Nanite is quite heavy in comparison, but they are targeting single-pixel triangles. We weren't. I am glad that we are having this conversation. Also, mesh shaders are a perfect fit for a GPU-driven render pipeline. AFAIK Nanite is using mesh shaders (primitive shaders) on consoles at least, unless they use SW raster today for big triangles too. It's been a long time since I last analyzed Nanite (UE5 preview). Back then their PC version was using non-indexed geometry for big triangles, which is slow.
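The V-buffer behavior sebbbi describes can be miniaturized into a toy 1-D software rasterizer (purely illustrative Python; a real visibility buffer packs a 64-bit ID and both passes run on the GPU). The write pass stores only a tiny (depth, triangle ID) record per pixel, so overdraw is cheap; the resolve pass then has to fetch each visible pixel's triangle again to do the expensive shading work:

```python
# Toy 1-D visibility buffer: each "triangle" covers a pixel span at a depth.
triangles = {
    0: {"span": range(0, 6), "depth": 5.0, "material": "rock"},
    1: {"span": range(3, 8), "depth": 2.0, "material": "moss"},
}

WIDTH = 8
vbuffer = [None] * WIDTH   # per pixel: (depth, tri_id) -- tiny record
writes = 0

# Write pass: every covered pixel costs one cheap V-buffer write,
# even when a closer triangle later overwrites it (overdraw).
for tri_id, tri in triangles.items():
    for x in tri["span"]:
        writes += 1
        if vbuffer[x] is None or tri["depth"] < vbuffer[x][0]:
            vbuffer[x] = (tri["depth"], tri_id)

# Resolve pass: each visible pixel loads its triangle again and redoes
# the per-vertex/material work -- the expensive part sebbbi describes.
resolved = []
for entry in vbuffer:
    if entry is None:
        resolved.append(None)
    else:
        _, tri_id = entry
        resolved.append(triangles[tri_id]["material"])

print(writes)    # 11 writes for 8 pixels -> overdraw happened
print(resolved)  # pixels 0-2 keep "rock"; the closer "moss" wins pixels 3-7
```

Note the asymmetry: 11 cheap writes produced only 8 resolved pixels, but every one of those 8 pixels pays the full resolve cost exactly once.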
@user-bl1lh1xv1s · 2 months ago
thanks for the insight
@tubaeseries5705 · 2 months ago
The issue is that quad overdraw is not a big deal; modern GPUs are never limited by the number of triangles they output, it's always shaders. Nanite adds a lot of additional work to the shader pipeline, which is already occupied as hell. For standard graphics with reasonable triangle counts, Nanite just doesn't make any sense: it offers better fidelity than standard methods, but performance is not what it can offer.
@minotaursgamezone · 2 months ago
I am confused 💀💀💀
@torginus · 2 months ago
Not an unreal expert, but from what I know of graphics, quad rasterization is unavoidable, since you need the derivatives for the pixel shader varyings that are needed for things like texture sampling. Honestly it might make sense to move beyond triangles to things like implicit surface rendering (think drawing that NURBS stuff directly) for the stuff nanite tries to accomplish.
@tubaeseries5705 · 2 months ago
@@torginus Rendering NURBS and other non-primitive types always comes down to rendering primitives anyway; CAD software has always processed NURBS into triangle meshes using various methods that produce a lot of overhead. GPUs are not capable of efficiently rendering anything other than primitives; we would need a new hardware standard to render them directly, and that's not really reasonable.
@unrealcreation07 · 2 months ago
10-year Unreal developer here. I've never really been into deep low-level rendering stuff, but I just discovered your channel and I'm glad your videos answered some long-standing questions I had, like "why is this always ugly and/or blurry, no matter the options I select?? (or I drop to 20fps)". Over time, I have developed a kind of 6th sense for which checkbox will visually destroy my game, or which option I should uncheck to "fix" some horrible glitches... but I still don't know why most of the time. In many cases, I just resign myself to choosing which scenario I prefer being ugly, as I can never get a nice result in all situations. And it's kind of frustrating to have to constantly choose between very imperfect solutions or workarounds. I really hope you'll make standards change!
@fleaspoon · 1 month ago
you can also learn what those checkboxes actually do and solve the issues by yourself for your specific needs
@samnwakefield2032 · 7 days ago
He won't be targeted by anyone... he is speaking facts about the trickery gaming platforms use to push us into buying more expensive hardware... keep up the good work, kid... you are doing well... and don't worry about anyone.
@sideswipebl · 3 months ago
No wonder fidelity development has seemed so slow since around 2016. It's getting harder and harder to tell how old a game is just by looking, because we already figured out idealized hyper-realism around 2010 and have just been floundering since.
@MrGamelover23 · 3 months ago
Yeah, imagine the optimization of back then with ray tracing. It might actually be playable then.
@metacob · 3 months ago
GPU performance is still rising exponentially, but I literally can't tell a game made today from one made 5 years ago. As a kid my first console was a SNES, my second one an N64. That was THE generational leap. The closest to that experience we got in the last decade was VR, but that still wasn't quite the same. To be honest though, it's fine. I've heard the word "realism" a few too many times in my life. Now it's time for gameplay and style.
@Fearzzy · 3 months ago
@@metacob If you want to see the potential of today's games, just look at the game Bodycam; you couldn't have made that 5 years ago. But I see your point. RDR2 is still the most beautiful game I've played (other than faces etc.), but that's down to its style and attention to detail rather than "raw graphics".
@arkgaharandan5881 · 3 months ago
@@metacob Well, I was playing Just Cause 3 recently. The lighting has some ambient occlusion and reflections, so it looks bad by modern standards; you can see the foliage LODs appearing in real time, and unless you are running at 4K, its anti-aliasing (SMAA 2x) is not good enough to hide the countless jaggies. I'd say a better comparison is with 2018 games. Also, the 4060 is barely better than the 3060 with more RAM; low-to-mid range needs to improve a lot.
@slyseal2091 · 3 months ago
@@arkgaharandan5881 Just cause 3 is 9 years old. "5 years ago" is 2019 buddy
@doltBmB · 2 months ago
Key insight every optimizer should know: draw calls are not expensive, context-switching is. The more resources that a draw call shares with adjacent draw calls the cheaper it is to switch to it. If all draw calls share the same resources it's free(!). Don't worry about draw calls, worry about the order of the draw calls.
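This point can be made concrete with a toy scheduler (illustrative Python with made-up draw and state names, not any real engine's API): count how many state changes the GPU sees for an arbitrary submission order versus the same draws sorted by a state key.

```python
# Each draw call is tagged with the GPU state it needs (shader, texture).
# A state change (context switch) happens whenever consecutive draws
# need a different (shader, texture) pair.
draws = [
    {"mesh": "crate",  "shader": "pbr",  "texture": "wood"},
    {"mesh": "goblin", "shader": "skin", "texture": "goblin"},
    {"mesh": "barrel", "shader": "pbr",  "texture": "wood"},
    {"mesh": "orc",    "shader": "skin", "texture": "orc"},
    {"mesh": "chest",  "shader": "pbr",  "texture": "wood"},
]

def state_changes(order):
    """Count how often the (shader, texture) pair changes between draws."""
    changes, prev = 0, None
    for d in order:
        key = (d["shader"], d["texture"])
        if key != prev:
            changes += 1
        prev = key
    return changes

unsorted_cost = state_changes(draws)
sorted_cost = state_changes(
    sorted(draws, key=lambda d: (d["shader"], d["texture"]))
)

print(unsorted_cost)  # 5: every draw switches state
print(sorted_cost)    # 3: the three wood/pbr draws share one state setup
```

Same five draw calls either way; only the order changed, which is exactly the comment's point: the cost lives in the switches, not the calls.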
@h0bby23 · 2 months ago
There is still the issue of being CPU-bound when submitting many draw calls with few polygons each, but otherwise, on modern hardware, yes.
@BlueBeam10 · 1 month ago
So what you're saying is, if I spawn 10,000 of the same chair, I don't even need them instanced, or merged into one mesh, because the 10k draw calls will equate to one? I don't know, man, I tend to disbelieve that...
@doltBmB · 1 month ago
@@BlueBeam10 theoretically if each chair can be rendered in a single drawcall, I guess, which would be a very simple chair. instancing is good for more complex renders.
@BlueBeam10 · 1 month ago
@@doltBmB But I thought draw calls happened for each non-instanced mesh regardless of complexity, right? Since that chair isn't instanced, why would the engine assume the same draw call can apply to the other meshes?
@doltBmB · 1 month ago
@@BlueBeam10 the engine doesn't assume anything, most engines are designed with 0 regard for these facts. it is up to the graphics programmer to batch their drawcalls appropriately. most engines implement some static batching at best which is an absolutely ancient way to do batching, requires some complicated preprocessing and eats bandwidth and memory
@MondoMurderface · 2 months ago
Nanite isn't and shouldn't be a game developer tool. It is for movie and TV production. Unreal should be honest about this.
@Rev0verDrive · 2 months ago
Been saying this since day one release w/testing.
@marcinnawrocki1437 · 2 months ago
Most of the new "marketing-buzzword-friendly" stuff in Unreal is not for games. If you, as a game dev, want the average Steam PC to run your game, you will not use those new flashy systems.
@Ruleta_23 · 1 month ago
Games are using Nanite. Don't talk if you don't know the topic; what you say is absurd anyway.
@0osk · 1 month ago
@@Ruleta_23 They didn't say it wasn't being used in games.
@Rev0verDrive · 1 month ago
@@Ruleta_23 Every game I've seen released with it runs like shit on high-end systems. You have to use DLSS/FSR to get 60-70 FPS.
@annekedebruyn7797 · 9 days ago
Not supporting instancing is horrifying to hear. We do that even for traditional renders.
@gandev5285 · 3 months ago
This may or may not be true, but I actually believe Unreal Engine's quad-overdraw viewmode has been broken for a long time (at least beyond 4.21). In a Robo Recall talk they discuss the impact of AA on quad overdraw and show that enabling MSAA makes your quad overdraw significantly worse. Now if you run the exact same test, quad overdraw IMPROVES significantly. So unless Epic magically optimized AA to produce less overdraw, the overdraw viewmode is busted. I tested the same scenes in 5.2 and 4.21, and the overdraw view was much worse in 4.21, actually showing some red from opaque overdraw (all settings the same). I'm not even sure opaque overdraw can get beyond green now. I would suspect the overdraw you show from extremely high-poly meshes should actually be significantly worse, mostly red or white.
@tubaeseries5705 · 3 months ago
MSAA by nature causes more overdraw because it takes multiple coverage samples per pixel, the count depending on the settings. In a quad where only 2 of 4 pixels are covered, 50% of the shading work is wasted; with 4x MSAA the same quad has 16 samples, and if, say, only 6 of them are covered, roughly 60% of the work is wasted, etc.
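The waste being argued about here can be quantified with a toy helper (a simplification: real hardware shades pixels in 2x2 quads with helper invocations, and MSAA coverage rules are more involved than this). Any quad with at least one covered pixel is shaded in full, so partially covered quads waste shading lanes, and MSAA keeps more edge pixels "alive", pushing more quads into that partially covered state:

```python
def quad_waste(covered):
    """covered: a 2x2 pixel quad as 4 booleans. A quad with any coverage is
    shaded as a whole; uncovered lanes are helper invocations (wasted work).
    Returns the wasted fraction of the quad's shading lanes."""
    lit = sum(covered)
    if lit == 0:
        return 0.0          # quad never shaded, nothing wasted
    return (4 - lit) / 4    # fraction of shading lanes thrown away

# Interior quad: fully covered, no waste.
print(quad_waste([True, True, True, True]))    # 0.0
# Edge quad: 2 of 4 pixels covered -> half the quad's work is wasted.
print(quad_waste([True, True, False, False]))  # 0.5
# With MSAA, a pixel whose samples are only partly covered still counts as
# covered, so more edge quads land in this partially covered state.
```

This is also why dense sub-pixel triangles are so expensive on the hardware raster path: every tiny triangle tends to touch only 1 of a quad's 4 pixels, wasting 75% of the lanes.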
@futuremapper_ · 3 months ago
I assume that Epic cheats the values when MSAA is enabled as that will cause overdraw in it's nature
@JorgetePanete · 3 months ago
at least*
@neattricks7678 · 2 months ago
It is true. Unreal sucks and everything about it is fucked
@FourtyFifth · 2 months ago
@@neattricks7678 Sure it does buddy
@hoyteternal · 3 months ago
Nanite is specifically made to render non-optimized scenes with ridiculous polycounts, micro-geometry where each pixel might render a separate triangle, and models that don't have LODs. It has huge initial overhead but far higher scalability in that scenario. Actually Nanite is not a unique technology; it is an implementation of a technique called the visibility buffer.
@florianschmoldt8659 · 3 months ago
The current state of "next-gen visuals" vs fidelity is indeed questionable. Many effects are lower-res, stochastically rendered with low sample counts, dithered, upscaled: compromises on top of compromises, with frame generation in between. Good enough in 4K at 60fps, but if your hardware can't handle it, you'll have to live with a blurry, pixelated, smeary mess. My guess is that Nvidia is fine with it. As much as I like Lumen & Nanite in theory, I'm not willing to pay the price. To be fair, it isn't the worst thing to have next-gen effects available, but there is a huge disconnect between what gamers expect a game to look like, based on trailers, and how it feels to play at 1080p and 20fps. JFZfilms defines himself as a filmmaker and isn't great at communicating that he and his 4090 don't care much about realtime visuals or optimization. Tons of path tracing vs Lumen videos, and a confused game-dev audience when they accidentally learn that both went through the render queue with maxed settings.
@snark567 · 2 months ago
Gamers confuse realism and high fidelity visuals for good graphics. Meanwhile a lot of devs use realism and graphical advancements as a crutch because they lack the imagination to know how to make a game that looks good without these features.
@florianschmoldt8659 · 2 months ago
@@snark567 I'm a graphic artist myself, and as much as I love good art direction, realism definitely has its place. But I get the point; it for sure shouldn't be the only definition of next-gen visuals. It became much easier to gather free photoscanned models than to create your own in an interesting style. Devs rely on Nanite over optimized assets, and even games with mostly static light sources use Lumen over "good old" lightmaps, even if those could look just as good and make a difference of 120 vs 30fps. And Nvidia is like "Here are some new problems... how about a 4090 to solve them?"
@IBMboy · 3 months ago
I think Nanite is innovative technology, but it shouldn't replace the old style of rendering for videogames, as it's still experimental in my opinion.
@matiasinostroza1843 · 3 months ago
Yeah, and it's not made for optimization but for the sake of looks. There is almost no transition between LODs: in normal games, when you get far enough away you can see the mesh change to a lower-poly model, but with Nanite this is almost imperceptible. In that sense it's "better", which is why it's being used for "realistic graphics".
@Cloroqx · 3 months ago
Baseless opinions by non-developers. "Old style of rendering videogames". What is your take on the economy's sharp downturn due to a lack of rate cuts by the FED, opinionated one?
@vendetta1429 · 3 months ago
@@Cloroqx You're coming off as the opinionated one, if you didn't know.
@pizzaman11 · 3 months ago
It also has a lower memory footprint, and you don't need to worry about creating LODs, which is perfect for large games with a large number of unique models.
@inkoalawetrust · 3 months ago
@@Cloroqx What are you even talking about.
@chriszuko · 3 months ago
As much as I think this is a good topic and the results seem genuinely well done, the solution proposed here, spending the time and money on an AI solution for LOD creation and a much faster / seamless optimization workflow, is something companies have been trying to build for years, so I personally don't think it's a good way to move forward. To me, it seems possible that Nanite + Lumen can evolve to become much friendlier to meshes produced in an already-optimized way and rely far less on lower resolutions. I DO think they are probably pushing it a bit too early, and I also agree that DLSS, TSR, and any other upscaling/reconstruction technology is just not good to lean on. But to your point, companies continue to do so because they can say "look, everything runs faster!", so it is hard to envision a future without it. A side note: I don't think your particular attitude is warranted and, in my opinion, it makes it much harder to have a constructive conversation about how to move forward. I'm not perfect in this regard either, but as this gains traction it's probably good to dial that back. In the first post of the mega thread, for example, you show your results and then feel the need to say "Worse without LOD FPS LMAO!". This type of stuff is all over these threads, which just looks bad and makes it harder to take you and this topic seriously.
@wumi2419 · 2 months ago
It ends up being a problem of "who needs to spend money", with choices being developers (on optimizing) or customers (on hardware that can run unoptimized software), and it's obvious which choice saves money for the company. Granted, it might result in lower sales due to higher hardware requirements, but that is a lot harder to prove than lowered dev costs.
@Viscte · 2 months ago
This is a pretty down-to-earth take.
@exoqqen · 1 month ago
I agree on the attitude. I'm amazed by the depth of knowledge in these videos, but the aggressiveness makes me keep a distance and be wary. It's not pleasant or professional.
@Viscte · 29 days ago
@@exoqqen The guy comes across as a bitter asshole for some reason. It's important that people discuss topics like this, but the negativity is definitely off-putting.
@jarrodkober · 15 days ago
Yeah, he's coming in a little hot, and I'm curious if everything is 100% factual. Some of his points seem credible and compelling, but he started to lose me when he said DLSS is overhyped and only compared it to TAA. You can compare native 4K with no AA to DLSS, and it's pretty good. We can all agree TAA is too soft, but he creates a false dichotomy by quickly making that point (however grounded it may be) and then moving on.
@Gurem · 2 months ago
I remember Epic saying this would not replace traditional methods but should be used in tandem with them, as a way to increase productivity. Tbh this video taught me more about optimization than any optimization video, and didn't waste my time. As an indie it did more to reinforce my desire to use Nanite, while also teaching me hands-on techniques that, while requiring more work, may result in better performance, which I can use when I have the free time. I thank you for demystifying the BS, as I really couldn't understand the tech from those other YT videos; they were purely surface-level, quickly churned content.
@GraveUypo · 3 months ago
This is the type of content I wish people would watch, but they don't. I'm tired of hearing that Nanite is magic, that DLSS is "better than native" and whatnot. When I made a scene in Unreal 5, it ran like trash on my PC with a 6700 XT at the time, and there was almost nothing in the scene, even at 50% render scale. It was absurdly heavy for what it was.
@SuperXzm · 3 months ago
Oh no! Think about shareholders! Please support corporations play pretend!
@chengong388 · 3 months ago
DLSS is better than native because it includes anti aliasing and native does not…
@FlippingLuna · 3 months ago
Just return to forward shading, or even the mobile forward renderer. It's kinda good, and a pity that Epic optimized the mobile forward renderer well but still hasn't ported it to the desktop forward path.
@Jakiyyyyy · 3 months ago
DLSS is INDEED sometimes better than native, because it uses its own anti-aliasing rather than the default, poorly implemented TAA that blurs games. If DLSS makes games sharper and better, I would take it over forced TAA. 🤷🏻♀️
@dingickso4098 · 3 months ago
DLSS is better than native, they think, because they tend to compare it to the blurfest of TAA that is sometimes force-enabled.
@hungryhedgehog4201 · 2 months ago
So if I understand correctly: Nanite results in a performance gain if you just drop in high poly 3D sculpts without any touchups, but results in a performance loss if you put in models designed with the industry standard and use industry standard workflow regarding performance optimization?
@AndyE7 · 2 months ago
I swear the point of Nanite was to allow the artists and level designers to just focus on high quality assets and scenes because nanite would do the optimisation for them. It was to stop the need for them to think of optimisation, LODs, how much can the console render in a scene etc because it will handle all of that for you allowing you to just focus on the task at hand. In theory you could even develop with Nanite and then do proper optimisations afterwards.
@hungryhedgehog4201 · 2 months ago
@@AndyE7 tbf it seems to do that but that means that you move from the industry standard to this new approach that works with no other engine pipeline. Which then binds you to the Unreal Engine 5 ecosystem, which then obviously benefits Epic. That's why they push it so hard.
@Noubers · 2 months ago
The industry standard is significantly more labor-intensive and restrictive. It's not a conspiracy to lock people into Epic; it's just a better workflow overall that other engines should adopt, because it is better.
@Armameteus · 2 months ago
@@Noubers Except it's not, because that off-loads the rendering overhead onto the end-user. To render anything in Nanite more quickly than through a traditional rendering pipeline, the end-user needs the hardware to accommodate it. That is simply unacceptable and shouldn't be the case. The end-user should, within reason, be able to expect their product (like a game) to function on their hardware so long as it's within decent generational tolerances, because the developers should have put in the effort to accommodate them. That's part of the selling point of video games: it's _their job_ to make a product worth our purchase. It would be like Epic creating an entirely new type of internal combustion engine, incompatible with every single vehicle on earth, but marketed as "easier" to build. Epic then forces the factories it has contracts with to manufacture this new engine type because it's "easier" for _them_ to produce, even though it's still incompatible with all vehicles, everywhere. Car manufacturers then need to completely upend and rebuild their entire manufacturing process to accommodate this new engine design, which costs them a ton of money. The cost of that transition gets off-loaded onto the regular customer, you and me, trying to buy a new car, because manufacturing these new cars is far more expensive now due to the alien design of their engine, which was the fault of Epic forcing this new engine on all of its manufacturers. And even if you do buy it, it will still run _worse_ than a comparable car with a traditional engine design; it cost you more, and you got a worse product. Epic is forcing Nanite as the default rendering pipeline going forward, meaning *you can't opt out of it!* As a developer, this means the overhead of rendering your game falls to the end-user.

As a result, your game can only be played by users with the absolute latest, bleeding-edge machines, because Nanite only _works_ on those machines, due to its insane overhead. This instantly cuts your potential playerbase to a fraction of its original size, because the cost of a machine capable of rendering your game will be astronomical, not just immediately but increasingly so into the future (as games attempt to render more and more complexity through Nanite, chasing the dragon of "photorealism"). The only parties benefiting from this are Epic (for obvious reasons) and large development companies that have _contracts_ with Epic (for the same reasons). But small studios or indie devs? You're screwed; optimizing under Nanite is currently impossible, and as an indie dev you probably don't have the hardware necessary to even render your _own game._ And the end-user? You're _double-screwed:_ you have to front the cost of the hardware to run the game _and_ the increased cost of developing it, which anyone outside Epic's sphere of partners had to eat! As a result, both games _and_ the hardware needed to render them will cost more (unless you're best buddies with the executives at Epic). Nanite is a complete sham and a waste of effort and resources, built on self-imposed problems that didn't need to exist, with shoddy "solutions" that create _new_ problems without even fixing the _old_ ones! It's Epic's way of pushing out anyone who doesn't have contracts with them, practically requiring corporate nepotism to operate within their market. It exclusively benefits them and their friends, and they know it. InB4 "StOp BeInG PoOr!!1", because that's the _only_ argument against this nonsense.
@TheReferrer72 · 1 month ago
@@Armameteus History is going to prove you dead wrong, and it's a pity so many of you devs just don't get that hardware always catches up. Artists and designers should not be making LODs; that's a kludge. They should not be combining meshes; that's a kludge too, same as baking lights and whatever other tricks have to be done when algorithms can do the work. Artists and designers should be focusing on making their games look and play well.
@happydappyman · 17 days ago
We've been spoiled by blazing fast hardware to the point that we're now getting games that look worse AND run worse than their predecessors. "It's fine, everyone will be playing on at least a 3070 anyway".
@ThiagoVieira91 · 3 months ago
Acquiring the same level of rage this guy has for bad optimization, but for putting effort into my SE career, I can single handedly lead us into the the singularity. MORE! MORE! MORE!
@JorgetePanete · 3 months ago
the the
@DarkSession6208 · 2 months ago
I have 100 other topics around Unreal Engine that make me rage about misinformation, just like he rages about Nanite. I posted about the Nanite issue MULTIPLE times on forums and Reddit to warn people how not to use it and to show how it should be done. I have been researching this topic since version 5.0.2. Nobody cared. Like I said, there are 100 other similar topics (Blueprints, general functions, movement, prediction, etc.) which are simply explained wrong by Epic themselves and then accepted by users. If you google "Does culling really not work with nanite foliage?", the first comment is mine. I've been posting there for almost 3 YEARS, repeating myself, and people still don't listen.
@cenkercanbulut3069 · 2 months ago
Thanks for the video! I appreciate the effort you put into comparing Nanite with traditional optimization methods. However, the full potential of Nanite might not be fully apparent in a test with just a few meshes. Nanite shines when dealing with large-scale environments that have millions of polygons, where it can dynamically optimize the scene in real time. The true strength of Nanite is its ability to manage massive amounts of detail efficiently, which might be less visible in smaller, controlled setups. It would be interesting to see how both approaches perform in a more complex scene with more assets, where Nanite’s real-time optimization could show its advantages. Looking forward to more in-depth comparisons in the future!
@2feetsandamushroom · 13 days ago
I had no clue about the performance side, just how it handles LODs for static meshes and reduces noticeable pop-in. From my experience it reduces it greatly when the game is built around it. Like Hellblade 2: barely any pop-in in sight for the entire game, which was a breath of fresh air to be honest, since LOD shifts are one of the more annoying visual blemishes in games. The removal of tessellation is pure madness though. Why remove functions that we know work and have been optimized for years? Like, give us options!
@TenaciousDilos · 3 months ago
I'm not disputing what is shown here, but I've had cases where Nanite in UE 5.1.1 increased framerates 4x (low 20s fps to mid 90s fps) in my projects, and the only difference was turning Nanite on. "Well, there's more to it than that!" of course there is. But Nanite took hours of optimization work and turned it into enabling Nanite on a few meshes.
@forasago · 3 months ago
@@jcdentonunatco This is not true at all. When they showed the first demo with the pseudo Tomb Raider scene, they explicitly said (paraphrasing) that having that many high-poly meshes in a scene would be impossible without Nanite. And they definitely weren't talking about "what if you just stopped using LODs" as the thing Nanite is competing with. Nanite was supposed to raise the limits on polycount in scenes, full stop. That equates to a claim of improved performance.
@MrSofazocker · 2 months ago
@@forasago "You can now render way denser meshes" does not equal "so less dense meshes now render faster"
@DeltaNovum · 2 months ago
Turn off virtual shadow maps and redo your test.
@lokosstratos7192 · 2 months ago
@@DeltaNovum YEAH!
@user-bl1lh1xv1s · 2 months ago
@forasago "unlimited polycount" in scenes does not equate to a claim of improved performance over non-meshlet approaches. It merely states that poly count is not an issue... Which certainly has implications for asset pipelines, where full-quality meshes _could_ be brought into the scene without prior processing (except, of course, the nanite preprocessing step).
@ProjectFight · 3 months ago
Okay... a few things. This video was way faster than I could follow, but it was sooo interesting. I love seeing the technical aspects of how games are properly optimized, what counts and what doesn't. And I will ALWAYS support those who are willing to go the extra mile to properly research these things. Sooo, new sub :)
@NeverIsALongTime · 2 months ago
XD I know Kevin in real life; in person he speaks way faster! He's chill in his videos. He is brilliant, probably a bit on the spectrum (in a good way), and even more passionate and intense in real life. I have read some of the screenplay for his upcoming game; it is terrifying and edge-of-your-seat exciting!
@XCanG · 3 months ago
I'm not a game developer, but there are some questions I have to ask, plus some personal opinion as a gamer. 1. The main selling point I remember from Nanite's introduction was that you can make models faster by not spending time creating LODs. So for some of your example comparisons: how much time do you need to create the same model with Nanite vs. with LODs? Considering that all fresh games made with a bias toward realism require a lot of detail, I think at scale it will make a difference. 2. Maybe because I'm not in this field, I really haven't heard of anyone rendering models with billions of triangles. The closest examples were Minecraft clones rewritten in C/Rust etc. that tried to achieve large render distances, or offline rendering where one frame can take hours, which is not realtime. I can imagine that you are a senior game developer with at least 6 years of experience, but how many game devs actually know these optimizations? I can't imagine it's more than 25%. 3. Let's assume you can optimize better by hand: does UE5 allow you to handle optimization yourself? I imagine UE4 does, but what about UE5? If the answer is yes, then this is more an argument about better defaults: manual optimization at the cost of your time vs. Nanite's auto-optimization with faster creation time. 4. As a player I want to point out that many recent titles ship with bad optimization, so much so that gamers are starting to hate the studios. My personal struggle was with Cities: Skylines 2. They use Unity, where they definitely have all these optimization abilities, but somehow they released a lagging p* of s* where an air conditioner on a building seen from flying height (far away) had 14k triangles and some pedestrians had 30k triangles.
Considering that they can't optimize properly, I believe that if incompetent devs like them just used Nanite, the game wouldn't be this laggy. For me it's realistic to assume that, by default, a system that handles optimization automatically is far better than a manual one that only a few individuals can use properly.
@XCanG · 2 months ago
@@miskliy1 I see.
@MiniGui98 · 2 months ago
"For me it's realistic to assume that, by default, a system that handles optimization automatically is far better than a manual one that only a few individuals can use properly." Fair point, although the "manual" technique has been used since basically the beginning of fully 3D games and had been mastered not by "a few individuals" but by the whole industry at some point. Throwing everything into the Nanite pipeline just because it's simpler and faster is a false excuse once you know that the manual, traditional techniques will get you better performance with little to no difference in visual fidelity. Even better, the extra performance you free up with manual LODs lets you minimize the need for FSR or DLSS, both of which indisputably degrade image fidelity as well. Performance tests with FSR/DLSS enabled are basically a lie about the game's raw performance. Native resolution with more traditional anti-aliasing techniques should still be the norm, as it always has been. Big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that.
@SioxerNikita · 2 months ago
@@MiniGui98 No, it's never been "the whole industry" at some point. Frankly, good optimizers have been few and far between since the first days of optimization in general (not just 3D). Beyond that, optimizations are not something you just apply every time; if they were, everything would be using them. An optimization that works in one product might not work in another, and might sometimes make it slower. Optimizations are not catch-all solutions, and each project needs different ones, so there isn't a single "manual" technique that has been "mastered". There's another problem too: optimizations can only really be applied when the product is mostly done. If the rendering pipeline keeps being developed until the end, optimization work might be wasted, or worse, create a buggy mess, because the pipeline changes things that don't interact well with the optimizations. And then there's one of the most important factors: time. Development time. If an automated pipeline is, say, 30% better than doing no manual optimization, then all the time you would have spent on manual optimization can be spent on optimizations and improvements elsewhere. LOD models, for example, take time to make, sometimes a SIGNIFICANT amount. You create a model, then a lower-poly model, then an even lower-poly model, and if you ever need to change the model, you have to redo all the lower-poly versions. That is a LOT of time. Sure, manual LODs may perform better, but if you can offload that work to something like Nanite and get acceptable performance, you can focus on optimizing the main model instead. Performance tests with FSR/DLSS are a lie? If the game is intended to be run with FSR/DLSS, then that is the performance test you should do.
Frankly, focusing on manual optimization being better while ignoring ALL the other factors shows me you don't know much about optimization or game development. And saying "games relying on super-sampling are signs of a badly optimized game"... do you even know what super-sampling is? Relying on super-sampling is the opposite of optimizing, because it is one of the more computationally expensive features: it renders a LARGER image than you need. That's something you use either because you've optimized heavily and can afford it, or as an option for people with killer setups. Super-sampling is not an optimization technique at all. DLSS is roughly the opposite of super-sampling: it uses deep learning (or as we'd call it today, "AI") to infer information, getting some of the effect of super-sampling without actually super-sampling. So saying "big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that" means you have basically no clue what you're talking about. You start out saying "fair point" and then proceed to show you don't even understand the point. And relying completely on a single automated optimization technique is always bad, not just in "big games", but almost no games do that. Optimization is genuinely hard; if it weren't, you wouldn't see so many badly optimized games.
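The LOD pipeline being debated in this thread comes down to a very small runtime step: pick one of several precomputed meshes based on roughly how much screen space the object covers. Below is a minimal, self-contained sketch of that idea; the function name, the coverage formula, and the thresholds are all illustrative inventions for this example, not Unreal's actual screen-size metric.

```python
# Illustrative sketch of traditional LOD selection (not engine source code):
# choose one of several precomputed meshes from the object's approximate
# screen coverage. Thresholds here are made up for the example.

def select_lod(distance, sphere_radius, fov_tan=1.0, thresholds=(0.5, 0.2, 0.05)):
    """Return an LOD index (0 = full detail) from approximate screen coverage.

    coverage ~ bounding-sphere radius / (distance * tan(half FOV));
    the bigger the object appears on screen, the finer the LOD.
    """
    if distance <= sphere_radius:
        return 0  # camera inside or touching the bounds: full detail
    coverage = sphere_radius / (distance * fov_tan)
    for lod, cutoff in enumerate(thresholds):
        if coverage >= cutoff:
            return lod
    return len(thresholds)  # coarsest mesh beyond the last threshold

# A 1 m-radius prop: full detail up close, coarsest LOD far away.
print(select_lod(1.5, 1.0))    # -> 0 (covers much of the screen)
print(select_lod(100.0, 1.0))  # -> 3 (tiny on screen)
```

The runtime selection is cheap either way; the whole argument above is about who produces the meshes this function indexes into, artists by hand (the traditional route) or an automatic system like Nanite's cluster hierarchy.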
@Megalomaniakaal · 2 months ago
@@MiniGui98 The majority of games out there were rather unoptimized messes that Nvidia and AMD optimized at the "game ready" driver-release level, back in the DX11 and older days.
@Jiffy360 · 2 months ago
About that final point on CS:2, that was exactly their plan. They were going to use Unity's competitor to Nanite, but then Unity delayed or cancelled it, so they were stuck creating a brand-new LOD system from scratch at the last moment.
@salatwurzel-4388 · 2 months ago
Remember the times when {new technology} was very resource-intensive and dropped your fps, but after a relatively short time it was fine because the hardware became 3x faster? Good times :D
@histhoryk2648 · 9 days ago
Can it run Crysis meme
@iansmith3301 · 1 month ago
The fact that Epic is telling developers to go ahead and use 200GB 3D models without optimization, because Nanite "can" render them in the engine, should raise huge red flags: artists will think it's fine not to optimize. Everyone's HDD is fked as well.
@therealvbw · 2 months ago
Glad people are talking about these things. You hear lots of chatter about fancy UE features and optimisations, while games get slower and look no better.
@yeah7267 · 3 months ago
Finally someone talking about this... I was sick of people claiming Nanite boosts performance when in reality I was losing frames even in the most basic scenes.
@cheater00 · 3 months ago
just another case of tencent timmy being confidently wrong. we've all known unreal engine was way worse than quake 3 back in the day as well, it was ancient by comparison and the performance was abysmal. the fact people kept spinning a yarn that unreal engine was somehow competitive with id tech was always such a funny thing to see.
@HankBaxter · 3 months ago
And it seems nothing's changed.
@cheater00 · 3 months ago
@@randoguy7488 precisely. and in those 25 years, NOTHING has changed. this should make you think.
@ClowdyHowdy · 3 months ago
To be fair, I don't think Nanite is designed to boost performance in simple scenes at all, so anybody saying that is either oversimplifying or just wrong. Instead, it was designed to give artists and designers an easier pipeline for building complex levels without an equivalent increase in performance loss or in time spent developing LODs. It's designed to be a development-time boost. If you don't need Nanite for your game, there's no reason to use it, but I think it's weird to pretend the issue is that it doesn't do something it wasn't designed to do.
@_Romulodovale · 3 months ago
All the latest engine features came to make developers' lives easier while affecting performance negatively. It's not a bad thing: in the future those features will be well polished and will help us developers without hurting performance. All good tech takes time to polish.
@GonziHere · 3 months ago
Interesting video, but isn't Nanite using its own software rasterization path for small triangles exactly because of that issue? I'm pretty sure they said so when they started talking about it; it's in their tech talks, etc. This test feels somewhat fishy to me. Why not compare the normal scene instead of a captured frame, for example? That skips part of the work in one case while using the full pipeline in the other...
@AdamKiraly_3d · 3 months ago
I would love to get my hands on that test scene to give it a spin. I've been working with Nanite in AA and AAA settings for almost 3 years now, and while it was an absolute pain to figure out the quirks and the correct workflows, it has been an overall positive for production. In my experience Nanite REALLY struggles with anything using masked materials or any sort of pixel-depth edits. I've also seen perf issues when most meshes were "low poly" in the sense that the triangles are very large on screen; I vaguely remember the SIGGRAPH talk mentioning that larger triangles can rasterize slower because they take a different raster path. Nanite handling its own instancing and draws moves a lot of the cost off the CPU onto the GPU, so the base cost being higher should surprise no one. It is also very, VERY resolution dependent: the higher you go in res, the steeper the cost of Nanite (and VSM, Lumen) climbs, on top of the general cost increase of a larger res. I've grown to accept that the only way forward with these new bits of tech is upscaling. I'm not happy about that as a dev, but as a gamer I couldn't care less and use it everywhere. VSM has similar issues, since for shadows you effectively re-render the Nanite scene, but there are ways to optimise that with CVars, and it has generally performed better than traditional shadow maps would when used with Nanite. There's a caveat there: if you bring foliage into the mix, Nanite can get silly expensive both in view and in shadows. Collision memory has also been a major concern, since for complex collision UE uses the Nanite fallback mesh by default, so in a completely Nanite scene you can end up with more collision memory than static-mesh memory. I also feel like having a go at Epic for not maintaining "legacy" feature compatibility indefinitely is a bit unfair. Both VSM and Lumen rely on Nanite to render efficiently and were created as a package.
Epic decided that is the direction they want to take the engine, and with an engine as complex as it is large, it is almost expected to lose some things along the way. That being said, I have run into many things I wish they cared enough to fix (static lighting, for example, has a ton of issues that will never be fixed because "just use dynamic lighting"), but at the same time I won't have a go at them for not supporting my very specific use case that doesn't follow their guidance. No part of this tech is perfect and I get the frustration, but it did unlock a level of quality for everyone to use that was only really available in proprietary engines before. Also, do we really think CDPR would drop their own insane engine for shits and giggles if they didn't think switching to UE5 was a better investment than upgrading their own tech? Same with many other AAA companies, and you can bet your ass they took their sweet time evaluating whether it makes sense (not to mention they will inevitably contribute a TON to the engine that will make it back to main eventually).
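For anyone who wants to experiment with the CVar route mentioned above: virtual-shadow-map behaviour can be tuned through console variables, for example in `Engine/Config/ConsoleVariables.ini` under `[Startup]`. The variable names below exist in recent UE 5.x releases, but the values are illustrative starting points rather than recommendations, and names and defaults do shift between engine versions, so check them against your engine build.

```
[Startup]
; Disable virtual shadow maps entirely and fall back to conventional shadow maps
r.Shadow.Virtual.Enable=0

; Or keep VSMs but trade shadow resolution for speed (uncomment to try):
; r.Shadow.Virtual.ResolutionLodBiasDirectional=1.0
; r.Shadow.Virtual.MaxPhysicalPages=2048
```

The same CVars can also be entered at the in-editor console for quick A/B profiling before committing anything to config files.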
@ThreatInteractive · 2 months ago
re: Why not compare the normal scene, instead of captured frame, for example? You can compare yourself because we already analyzed the "normal" scene, same hardware, same resolution: kzbin.info/www/bejne/ipacqYiEqrdgi5I Watch the rest of our videos as everything we speak about is interconnected.
@SioxerNikita · 2 months ago
@@AdamKiraly_3d A thing a lot of non-devs forget is time. Time is the ultimate decider of quality: the more time spent optimizing, the less time is spent on the game itself. CDPR is dropping their own engine because at this point, upgrades that make it more performant or higher fidelity take exponentially longer with each feature, and they don't have one specific kind of game the engine needs to serve. If they only developed FPS games, continuously upgrading a single engine might make sense, but they want to make more than that. Every second spent upgrading the engine or doing tedious per-model graphical optimization is time that could have been spent fixing physics bugs, weird interactions, broken map geometry, and other things that make the game feel better in the end. They can also focus on larger-scale optimizations that might improve performance more overall: instead of making 2-5 LOD models per model, you can focus on getting the poly count of the base model down and the perceived fidelity up. The days of hyper-optimization are over. Hardware keeps getting more complex, and it's becoming infeasible to talk to it directly, since there is SO much different hardware to account for. Games are becoming prettier, and even the gamers who say "graphics don't matter" still won't buy a game that looks like a PS1 title, because it feels shoddy to them. Rising poly counts, rising player expectations, and an ever more competitive market: yeah, devs need tools to automate a lot of the process.
@satibel · 2 months ago
@@SioxerNikita Imo "graphics don't matter" is more of a "yeah, realistic graphics are neat, but consistent graphics are better". I'll take cartoony graphics like Kirby's Epic Yarn over Battlefield graphics. Realism looks good, but so does stylized, and a well-stylized game can be way more efficient and age way better while still looking as good as or better than wannabe-photorealistic graphics. If we're talking PS1, a game like Vib-Ribbon still looks good today, and for a better-known example, Crash Bandicoot. They look miles better than asset-flip-looking franchise games that have realistic graphics but aren't cohesive and have a meh game underneath.
@lycanthoss · 2 months ago
@@satibel Realistic graphics clearly sell. Just look at Black Myth: Wukong.
@GugureSux · 2 months ago
This explains both why I generally despise the look of modern UE games (so much noise and blur) and why UE5 in particular runs like ass even on decent hardware. And since so many devs use lazy upscalers as their "optimization" trick, things only get worse visually.
@kanta32100 · 2 months ago
No visible LOD pop in is nice though.
@bennyboiii1196 · 2 months ago
The blurriness is due to TAA tbf, which is a scourge. This is why I use Godot lmao
@ThreatInteractive · 2 months ago
@@kanta32100 Not true. You will still get pop, and LODs have had various ways of reducing pop visibility (just not in UE5) that don't depend on flawed TAA.
@PabloB888 · 2 months ago
@@bennyboiii1196 The TAA image looks blurry, but a good sharpening mask can help a lot. I use ReShade CAS and luma filters; I used this method for the last couple of years on my old GTX 1080. Now that I've bought an RTX 4080S, I can get much better image quality by using DLSS Balanced and DLDSR 2.25x (80% smoothness) at the same time. I get no performance penalty, and the image quality destroys native TAA. What's more, if the DLSS implementation is very good, I can use DLSS Ultra Performance and still get a sharper image than native TAA, with 2x the framerate.
@cdmpants · 2 months ago
@@bennyboiii1196 You use godot because you don't want to use TAA? You realize that TAA is optional?
@grzes848909 · 4 days ago
Greed and laziness. It's always those 2.
@YoutubePizzer · 3 months ago
Here's the question though: will this save a development team enough resources to let them achieve more with less? If it's not "optimal", that's fine, as long as the dev time it saves is worth it. Ultimately, in an ideal world, we want the technology to improve so development can become easier.
@wumi2419 · 2 months ago
The cost of development doesn't disappear, however; it's just transferred to customers. So they will have to either pay for a better GPU, run the game at lower quality, or just "bounce off" and not purchase the game at all.
@blarghblargh · 2 months ago
@wumi2419 They could just make the game look worse instead. The development style being used now is already burning something, and that something is developer talent, and there isn't an infinite pool of that. You're also ignoring that GPU performance increases over time. So it may be a rough tradeoff now, but the tech will have better and better hardware to run on.
@insentia8424 · 2 months ago
I don't understand why you ask whether this lets development teams be more efficient (achieve more with less) and then say that in an ideal world tech would make things easier. Becoming more efficient and becoming easier are not the same thing. In fact, an increase in efficiency often makes things harder or more complex to do.
@snark567 · 2 months ago
You can always just go for stylized visuals instead of hyper-complex geometry and realism. This performance min-maxing only becomes a concern when your game is very detailed and complex but you still want it to run on a toaster.
@seeibe · 2 months ago
The problem is when newer games look and perform worse than older games while still costing the same. If that saved developer time were passed on to the user as a price reduction, sure. But that's not what happens for most games.
@piroman665 · 3 months ago
It's normal that a new generation of rendering techniques introduces large overheads. It's a good tradeoff, as it streamlines development and enables more dynamic games. The major issue is that developers ignore good practices and then blame the software; part of that is also Epic's fault, as they try to sell it as a magic solution for unlimited triangles, which it is not. Nanite might be slower, but it enables scenarios that would be impossible or very hard to achieve with static lighting. Sure, you can render dense meshes with traditional methods, but imagine lightmapping them, or making large levels with realistic lighting and dynamic scenarios.
@SydGhosh · 3 months ago
Yeah... With all this dude's videos, I find myself technically agreeing, but I don't think he sees the big picture.
@thegreendude2086 · 3 months ago
@@piroman665 I believe Unreal was made to be somewhat artist-friendly: systems you can work with even without a deep technical understanding. Hitting the "enable nanite" checkbox so you have to worry less about polycount seems to fit that idea.
@lau6438 · 2 months ago
@@SydGhosh The bigger picture being deprecating traditional LODs that perform better, to implement a half-baked solution? Wow, what a nice picture.
@DagobertX2 · 2 months ago
@@lau6438 They will make it better with time, just like back in the gamedev stone age they made LODs perform better. Imagine: there was a time when you had to optimize triangle strips for best performance 💀
@Bdhdh-p7h · 1 month ago
@@lau6438 LODs are costly and time-consuming for studios to develop.
@astrea555 · 3 months ago
Incredibly in-depth video once again, really inspiring. I'm only dabbling in game dev, but this explains so much about what we've started to see in recent games!
@badimagerybyjohnromine · 1 month ago
Whenever someone posts a comparison showing Unity has better raytracing and better performance, people lose their minds and mass-dislike the video, and it's been this way for years. I'm glad it's starting to change. I don't really like using Unity's engine; sure, it looks better and performs better, but my hope above all is that Unreal improves.
@QuakeProBro · 3 months ago
Great video, you've talked about many things that really bother me when working with UE5: extreme ghosting, noise, flickering, a very blurry image, and, compared to Unreal Engine 4, much worse performance in practically empty scenes (sometimes up to an 80 fps difference on my 2080 Ti). All this fancy tech, while great for cinematics and film, introduces so many unnecessary problems for games, and Epic seems to simply not care. If they really want us to focus on art instead of optimizing, give us next-gen-worthy, automated optimization tools instead of upscalers and denoisers that destroy the image for a "better" experience. That only battles the symptoms. And don't get me wrong, I find Lumen and Nanite fascinating, but they just don't deliver what was promised (yet). Thanks for talking about this!
@keatonwastaken · 2 months ago
UE4 is still used and is capable; UE5 is more for people who want fancier looks early on.
@AdamTechTips27 · 3 months ago
I've been feeling and saying the same thing for years; I just didn't have the concrete testing yet. This video proves it all. Thank you!
@koctf3846 · 18 days ago
Nanite advocates kitbashing, kitbashing brings worse performance, and worse performance means buying new hardware. That's why we often see triple-A titles with graphics similar to before but 4x the compute requirements.
@Lil.Yahmeaner · 3 months ago
This is exactly how I’ve felt about graphics for years now. Especially at 1080p, all the ghosting, dithering, and shimmering of UE5 gets unbearable at times and everyone is using this engine. It’s like you have to play at 4k to mitigate inherent flaws of the engine but that’s so demanding you have to scale it back down which makes no sense. Especially bad when you’re trying to play at 165hz and most developers are still aiming at barely 30-60fps, now exacerbated by dlss/framegen. Just like all AI, garbage data in, garbage data out, games are too unique and unpredictable to be creating 30+ frames out of thin air. Love the videos, very informative and well spoken. Keep up the good fight!
@nikch1 · 3 months ago
> "all the ghosting, dithering, and shimmering" Instant uninstall from me. Bad experience.
@Khazar321 · 3 months ago
For years now? Yeah I don't think that's UE5 mate. Maybe stop hopping on the misinformation train here...
@s1ndrome117 · 2 months ago
@@Khazar321 you'd understand if you ever use unreal for yourself
@MrSofazocker · 2 months ago
The notion that these defaults cannot be changed, and that this is somehow a systemic issue with Unreal, is insane to me. If you use something you don't even understand, leave everything on default, and press a "make game" button, what kind of optimization do you expect?
@Khazar321 · 2 months ago
@@s1ndrome117 I did, and I have also seen the train wrecks that lazy devs caused with UE4 and engines of the same era: horrible stutters, bad AA options, blurry and grey output (unless you fix it yourself with HDR/ReShade), shader issues, etc. So yeah, tell me how horrible UE5 is and what lazy devs can do wrong with it. I have seen it all in 30 years of gaming.
@legice · 3 months ago
Finally a video that talks about Nanite! Honestly, Nanite saved one of our projects: we had 100 meshes with 20 million polys each, and Nanite made it work on machines that had no right to run it. But... it is in no way a silver bullet, and as a quick day-to-day LOD replacement it doesn't hold up. As a modeler, there are rules to modeling, and if you follow them, you need a day at most to optimise a big mesh, which you know how to do, because you made it! "Quick and dirty" modeling exists, with optimisation down the road, but while you are making the prop you KNOW, or at least understand, what to do and how, for it to be the least destructive. Non-destructive modeling exists too, but it brings different problems (time, approach, workflow), and unless the job requires it you don't use it, as it's a different beast altogether. You can model a gun any way you want, but a trash can, a house, something non-hero, non-changing, with measurements set in stone, you do the old-fashioned way. Texture and prop batching is simple, but being good at it is not. I love Lumen, but it is clearly still in its early stages and needs additional work to be optimized for non-Nanite, hand-optimized workflows. I'm just so happy I wasn't the only one going insane about this.
@XCanG · 3 months ago
I have a comment above with my opinion on this, but I'm not working in game development, so my knowledge is limited. Considering that you are a modeler, I have a few questions for you: 1. How long does it take to prepare a model for Nanite vs. LODs? 2. How many years of experience do you have as a modeler? I ask because pros make stuff way quicker, so the difference between one and the other may vary with experience. 3. How much were you aware of the optimizations mentioned in the video? My guess is that he has at least 6 years of experience and is probably already a senior gamedev, but it's hard to imagine that new gamedevs would have that knowledge. 4. Do you think Nanite is useful right now? Do you think it will be useful in the future (maybe with some polishing and fixes)?
@legice · 2 months ago
@@XCanG Sure, I can answer those. - Nanite is obviously just a click, so nothing to do there really. As I said, when modeling you ALWAYS plan ahead, so when you are doing retopo it's just a matter of how well you preplanned all your steps beforehand. The rules of modeling and good practices have been in place for a long time, and if you follow them you will take a bit longer to make a prop, but the retopo/LODs will take only a fraction of the time. Can't really put a number on this, because it's how you should be modeling regardless, unless you model for visualisation or movies only, where nothing needs to be optimized. - Professional experience, none really, as everything I have done is personal work and game jams, the game dev industry being a bitch to really get into. There are different steps and approaches, but in the end it doesn't matter, as long as you deem it the best approach time-wise, modeling-wise and within budget. Most people share the same workflow, because if you hand your work off to somebody or leave the company, others need to be able to take it and adapt/finish it. - Very little/barely understood anything, because I learned by doing and adapted my workflow based on the information I found while searching for solutions. Honestly, you can basically skip anything he said, because it is all theory, in the same way as how light works in games. As a modeler you don't really need to know exactly how it works, but you get a feeling and a slight understanding of how and why. He goes way deeper into technical stuff, things tech artists and programmers deal with. As a senior modeler you touch on this, but in the end your job is other things: modeling well, texture packing, instances, draw calls, modular design... In some areas and studios this gets mixed together, and I 100% guarantee you that most don't know it, even seniors, but they compensate in other areas.
- Nanite is already useful, and it will become more useful, but within limits. The fact is, Nanite is a constantly active process running on screen, whereas LODs are one-and-done. LODs will never go away, but dependence on them will be reduced, as less and less optimization is needed to make games run well, because computers keep getting better. As I said, Nanite straight-up saved a project of ours when it was still in alpha/beta, so if you use it to try out how something looks in game, to stress-test a rig pre-optimization, or whatever, it has its place, but it should not be overused and it has limitations. You can't use it on a skeletal/rigged model, for example, as it relies on a consistent poly mesh. Take everything I said with a grain of salt. The video explained everything BEAUTIFULLY, and I now understand things I have unknowingly been doing for years but never really grasped why, even though I knew they worked. I learned things my way, studios teach their way, opinions clash, and in the end nobody really knows what they are doing, only how they feel things should be done, and the final result dictates that.
@Dom-zy1qy · 3 months ago
I am so glad I clicked on this video. I'm a noobie when it comes to graphics programming, but I've learned quite a lot just hearing you talk about things. I didn't know pixel shaders are run on quads, I thought they were per-pixel. Maybe the reason behind this relates to the architecture of the GPUs? Some kinda SIMD action going on?
@jcm2606 · 17 days ago
It's because of how texture sampling with automatic mipmap selection works. To figure out which mip level you need to sample from, you need to figure out how far apart the sampling points are between adjacent pixels on the screen. Ideally you want as close to a 1:1 mapping between pixels on the screen and texels sampled from the texture, so that moving one pixel across the screen moves as close to one texel across the texture as possible. Since pixels are run in quads, this process is basically free. Pixels in a quad are executed in lockstep with each other within the same SM/CU on the GPU (alongside the other threads within their thread group, though that's getting a bit into the weeds), and so each pixel can access the data of all other pixels within the same quad, assuming they've taken the same branch through a uniform control flow region (fancy way of saying they're in the same section of code). This lets the GPU access the texture coordinates of all four pixels in the quad at the same time, without any additional work. The GPU can use those texture coordinates to figure out how pixels on the screen map to texels in the texture, and from that calculate the optimal mip level to sample from.
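The derivative trick described above can be sketched outside a shader. The toy function below (a hypothetical helper, plain Python standing in for what the hardware does per quad) differences the UVs of neighbouring pixels in a 2x2 quad and takes log2 of the resulting texel footprint, which is essentially the standard mip-selection formula:

```python
import math

def mip_level(uv00, uv10, uv01, tex_size):
    """Approximate the mip level a GPU would pick for one 2x2 quad.

    uv00, uv10, uv01: UVs of a pixel, its right neighbour, and its lower
    neighbour in the quad; tex_size: texture resolution in texels.
    The GPU gets these per-pixel differences (ddx/ddy) almost for free
    because the quad executes in lockstep.
    """
    # Screen-space derivatives: texels moved per one-pixel step.
    ddx = ((uv10[0] - uv00[0]) * tex_size, (uv10[1] - uv00[1]) * tex_size)
    ddy = ((uv01[0] - uv00[0]) * tex_size, (uv01[1] - uv00[1]) * tex_size)
    # Largest footprint, in texels, of a single pixel step.
    rho = max(math.hypot(*ddx), math.hypot(*ddy))
    # log2 of the footprint picks the mip whose texels are ~1 pixel apart.
    return max(0.0, math.log2(max(rho, 1e-8)))

# UVs stepping 1/256 per pixel on a 1024-texel texture cover 4 texels per
# pixel, so mip 2 is chosen (each mip halves resolution: 1024->512->256).
print(mip_level((0.0, 0.0), (1/256, 0.0), (0.0, 1/256), 1024))  # 2.0
```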
@sandybeach95 · a month ago
I absolutely love these videos. God willing the awareness you've shed on this matter will engender positive change in the industry
@nazar7368 · 3 months ago
Epic Games' marketers killed the fifth version of the engine. Back in 2018 there were no video cards that supported mesh shaders. They took advantage of this and added the illusion of support for old video cards: Lumen and Nanite ran on old cards, but only as software implementations, which can never match hardware mesh shaders and ray tracing. This led to real problems with the engine core, namely that DX12 and Vulkan do not work correctly and run inefficiently because of old code that was written for DX11. I'm not even counting the problems with the engine's layered rendering and compositing algorithms, because everyone already sees the eternal blurring of the picture and the annoying sharpening. This will not be fixed until they add hardware support for mesh shaders and the new version of ray tracing (added literally by the DXR 1.2 library). For example, Nvidia holds many conferences to show its algorithms for improved ReSTIR in Unreal Engine 5, and gives access to them, but the developers stubbornly go their own way and continue to feed the gaming industry the same anthem.
@JoseDiaz-he1nr · 3 months ago
idk man Black Myth Wukong seems like a huge success and it was made in UE5
@manmansgotmans · 3 months ago
The push to mesh shaders could have been Nvidia's work, as their 2018 GPUs supported mesh shaders while AMD's GPUs did not. And Nvidia is known for putting money where it hurts their competitor
@Ronaldo-se3ff · 3 months ago
@@JoseDiaz-he1nr its performance is horrendous, and they reimplemented a lot of tech in-house so that it can at least run.
@topraktunca1829 · 3 months ago
@@JoseDiaz-he1nr yeah, they did a lot of hard internal optimization, and in the end it's running at an "ehh, well enough I guess" level. Not to mention the companies that didn't bother with such optimizations, like Immortals of Aveum and Remnant 2. Those games are unplayable on anything other than a 4090 or 4080, and even then they still have problems
@nazar7368 · 3 months ago
@@JoseDiaz-he1nr A really great success: a 2010-era picture at 30 fps on a 4090, and blurry at that
@bits360wastaken · 2 months ago
10:49, AI isn't some magic silver bullet; an actually good LOD algorithm is needed.
@michaelbuckers · 2 months ago
He's talking about a hypothetical AI-based LODder with presumably better performance than algorithmic LODders. Which is a fair guess, considering that autoencoding is the bread and butter of generative AI. I hazard a guess that you could adapt a conventional image-making AI to inpaint an original mesh with a lower-poly, color-coded mesh from various angles, and use those pictures to reconstruct a LOD mesh.
@yeahbOOOIIÌIIII · 2 months ago
What he is suggesting is amazing. I have to jump through hoops to get programs like reality capture (which is amazing) to simplify photogrammetry meshes in a smart fashion. They often destroy detail. This is an example where machine learning could shine, making micro-optimizations to the LOD that lead to better quality at higher performance. It's a great idea.
@MarcABrown-tt1fp · a month ago
@@yeahbOOOIIÌIIII Of course the meshes would still need to be hand-made. Generative AI doesn't seem to work well with constraints like quad topology imposed on it.
@bionic_batman · 20 days ago
AI (machine learning) is just a way to approximate an algorithm you don't know. In this particular case it would be better than Nanite, because it does not need to be baked into the engine and can be replaced with manual or algorithmically produced LODs when needed
@jcm2606 · 17 days ago
It's just that, though, an idea. He doesn't actually provide any information on how _exactly_ a neural network could be trained to do this, he's basically just saying "just throw AI at it!" and pointing to RTX Remix using AI to convert flat albedo to material data, which is a different problem to solve (and we already have traditional solutions for this, though they're pretty bad). Just throwing an arbitrary neural network at a problem isn't good enough to deride Nanite and claim that an alternative exists, especially if you don't provide anything to back that claim up.
@almighty151986 · 2 months ago
Nanite is designed for geometry way beyond what we currently have in games. So until games reach the point where their geometry is dense enough to require Nanite, Nanite will be slower than traditional methods. Maybe next generation will get there.
@youtubehandlesux · a month ago
With the current performance increase of hardware it'll take like 20 years to reach that point.
@korcommander · a month ago
We are at the point of diminishing returns of poly count. Like how many more polygons do we need to render 2Bs buttcheeks?
@rimuruslimeball · 3 months ago
These videos are amazing but to be honest, a lot of it flies over my head. What should we, as developers (especially indie), do to ensure we're doing good optimization practices? Alot of what your videos discuss seem to require an enormous amount of deep-level technical understanding of GPUs that I don't think many of us can realistically obtain. I'm very interested but not very sure where to start nor where to go from there. I'm sure I'm not the only one.
@cadmanfox · 3 months ago
I think it is worth learning how it all works, there are lots of free resources you can use
@marcelenderle4904 · 3 months ago
As an indie developer I feel it's very important to know the problems and limitations of these techs, and the concepts behind good practices. That doesn't necessarily mean you have to apply them. Nanite, Lumen, DLSS etc. can be very efficient as a cheap solution. If it speeds up your development by a lot and gets you the result you want, then, for me at least, that's what you should aim for. These critiques of Unreal are great for studios and the industry itself.
@Vysair · 3 months ago
I have a diploma in IT which is actually just CS and I dont have a clue on what this guy is talking about as well
@JorgetePanete · 3 months ago
A lot*
@Vadymaus · 3 months ago
@@anonymousalexander6005. Bullshit. This is just basic graphical rendering terminology.
@sorakagami · 2 months ago
Thanks for doing an in-depth analysis on Nanite vs LOD. As a self-taught solo UE game dev, nanite allowed me to focus on making my game without needing to optimise for LODs. Personally, I find Nanite to be a fantastic tool / feature and Epic has been doing an awesome job making everything more and more efficient and easier to use over time. Nanite also allowed us devs to use assets like those from MegaScans without some form of decimation done in advance. Just drag'n'drop. As you demonstrated in your video, LOD optimisations will still be more performant and sometimes by quite a bit. However that requires extra time and known-how to implement and optimise for. Time solo devs and small teams can put towards getting a game out faster or towards features/bug-fixes. Your analysis shows that devs with the expertise and experience can make a huge difference to game optimisation. Keep up the great work.
@MaxineOnGasoline · 3 months ago
Quality video. Thanks for clarifying this.
@Bitshift1125 · a month ago
9:20 This dithering is so, so common nowadays and it looks TERRIBLE! Hair especially just looks like a noisy mess in most new games, and there is no way to turn your settings up enough to make them look good due to dithering. Then everyone just tells you that TAA and upscaling fix it. I don't want either of those technologies active, because they ruin the final image no matter what you do. It drives me nuts that people say "Oh, there's a visual issue? Turn on a bigger one to fix it"
@devonjuvinall5409 · 2 months ago
Great watch! I would also recommend Embark's example-based texture synthesis video. They get into photogrammetry and their testing of the software for 3D props. It's just rocks using displacement maps, but I think the whole video could be relevant to this situation. I don't know enough to be confident though haha, still learning.
@hoyteternal · 2 months ago
Nanite is an implementation of a rendering technique called the visibility buffer, which was specifically created to overcome quad-utilisation issues. Once triangle density shrinks toward a single pixel, the better quad utilisation of visibility-buffer (Nanite) rendering greatly outweighs the additional cost of interpolating vertex attributes and analytically calculating partial derivatives. You can search for an article called "Visibility Buffer Rendering with Material Graphs" on the Filmic Worlds website; it is a good read, with lots of testing and illustrations
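For readers unfamiliar with the technique being described, here is a minimal sketch of the core idea of a visibility buffer: during rasterization each pixel stores only a small packed ID instead of full shading attributes, and materials are resolved in a later full-screen pass. The bit layout below is illustrative only, not Nanite's or the article's actual encoding:

```python
# Toy visibility-buffer ID scheme: one 32-bit word per pixel packs a
# cluster/instance ID together with a triangle ID. The material pass
# later unpacks it, refetches the triangle, and interpolates attributes
# itself instead of reading them from a fat G-buffer.

TRI_BITS = 7                      # e.g. up to 128 triangles per cluster
TRI_MASK = (1 << TRI_BITS) - 1

def pack(cluster_id: int, tri_id: int) -> int:
    """Pack a (cluster, triangle) pair into a single buffer word."""
    return (cluster_id << TRI_BITS) | (tri_id & TRI_MASK)

def unpack(word: int) -> tuple:
    """Recover the (cluster, triangle) pair from a buffer word."""
    return (word >> TRI_BITS, word & TRI_MASK)

# The raster pass would write pack(...) per covered pixel; the shading
# pass reads it back to know exactly which triangle covers each pixel.
w = pack(42, 5)
print(unpack(w))  # (42, 5)
```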
@ThreatInteractive · 2 months ago
In the original paper on visibility buffers, the main focus was on bandwidth-related performance. Visibility buffers might not be completely out of consideration. We've spoken with some graphics programmers who have stated their implementation can speed up opaque objects, but we are still in the process of exploring the options here. While Nanite is a solution to real issues, it's a poor solution regardless, because the cons outweigh the pros. We've seen the paper you mentioned, and we also showed other papers by Filmic Worlds in our first video (which discussed more issues with Nanite).
@thediscreteboys3315 · 3 months ago
You are fighting the good fight man. I just wish more people would sit down and learn about this stuff.
@ThatTechGuy123 · 2 months ago
Thanks for the information. I'll keep these things in mind as I develop.
@Billy-bc8pk · 2 months ago
Man, I'm so glad I came across this video. I've been talking about the downsides of Nanite since it was touted in the tech demos and fake scenes set up to sell the snake oil to developers. Everyone goes on and on about Nanite... IF they have never used it to optimise their game. Most games only use it in linear, closed-in scenarios because of the optimisation problems pointed out in the video, and even then the games oftentimes run like crap without looking all that much better than games that came out ten years ago. The micro-stuttering and load delays due to buffering and poor optimisation are so apparent these days, but everyone keeps ignoring it and just pushing forward in what we now consider to be the era of corposlop.
@DeeOdzta · 3 months ago
Amazing work and research thank you for sharing this, many devs I know have had similar opinions of Nanite.
@gameworkerty · 3 months ago
I would kill for an overdraw view in Unity like Unreal has, especially because there is a ton of robust mesh instancing support via Unity plugins that Unreal doesn't have.
@597das · a month ago
great video! if anyone is incentivized and capable of making free topology optimization tooling, it's AMD. had no idea why AAA graphics optimization had become so stagnant until today!
@mike64_t · 2 months ago
Good video, but I disagree that you can currently train an AI model to reduce overdraw. There is no architecture right now that can really take a AAA model as input.
@ThreatInteractive · 2 months ago
We are more detached from the idea of utilizing AI for max-surface-area topology than most people give us credit for. We just need faster systems for LODs. One of the biggest problems with the LOD workflow in UE vs. Nanite is that LOD calculation is extremely slow, while Nanite's is near instant even with millions of triangles. We also need a polished system that bakes micro detail into normal/depth maps faster. The way we see it, it's always going to be an algorithm. Most AIs that get trained enough revert to one anyway.
@mike64_t · 2 months ago
@@ThreatInteractive "most AIs that get trained enough revert to one anyway" mhhh... I wouldn't say so. Yes, in a sense it's an algorithm, but the compactness, discreteness and optimality that you picture when you hear the word "algorithm" are not present in a whole bunch of matrix multiplications that softly guide the input to its output. Just because it is meaningful computation doesn't make it deserving of the word. The LTT video isn't really accurate and makes some dangerous oversimplifications. I agree that tooling needs to become better; I too would love for there to be a magic architecture you could just PPO-minimize render time with, and have it invent all of topology theory from that, but that is a long way off... Transformers are constrained by sequence length and have a bias towards discrete tokens, not ideal for continuous vertex data. For now it seems like you need to bite the bullet and write actual mesh rebuilding algorithms.
@Mattheouw · a month ago
thanks for informing people on this subject.
@MrSofazocker · 2 months ago
What this fails to capture is that Nanite uses a software rasterizer, which doesn't suffer from quad overdraw at all. Small enough clusters of vertices are offloaded to a software rasterizer and geometry assembler. The performance degradation most likely comes from not using GPU Resizable BAR, or from other fudging of the measurements. Epic could do a better job of providing information on incompatible project settings etc. But what Nanite does is allow you to put way more polygons on your screen; that's still true. Bringing in an 8th-gen game and enabling Nanite won't give you more performance, and could even give you worse performance. Also, even today, if you want to start a game project, use 4.26 like they tell you.
@ThreatInteractive · 2 months ago
Read our pinned comment.
@ADRENELINEDUDE · 2 months ago
THANK YOU! Finally a proper, logical, grounded video! Your channel is amazing.
@average_ms-dos_enjoyer · 3 months ago
It would be interesting to see similar breakdowns/criticisms of the other big 3D engines' approaches to visual optimization (Unity, CryEngine, maybe even Godot at this point)
@Anewbis1 · 2 months ago
Great content, thank you for this! One question that comes to mind: the industry has decades of experience with traditional rendering methods, while Nanite is only a few years old. Do you think that is a factor to take into consideration when comparing? Do you see Nanite being way more efficient in 5-10 years?
@ThreatInteractive · 2 months ago
Great question. The only way for Nanite to improve would be to change it so drastically that it would be a bit of a cheat to still call it "Nanite". Our video is talking about the same algorithm that has received 5 iterations. Nanite is an implementation of a couple of different concepts, such as visibility buffers, mesh compression and cluster culling, but the industry could meet (and already has met) those same goals with better performance results, for instance deferred texturing in the Decima Engine. How well Nanite runs on whatever hardware comes later doesn't matter. The industry has been given its target hardware (9th gen), and the results we're getting from UE5 Nanite-enabled games are a joke in terms of visuals and reasonable potential. If we waste potential now, next gen (10th) and so on will lose value as well.
@Madlion · 3 months ago
Nanite is different because it saves development cost by saving the time to create LODs; it's a simple plug-and-play system that just works
@RootsterAnon · 2 months ago
Very good video. I was not aware of this!
@emmanuelgonzalezcaseira9141 · 3 months ago
I'm impressed with how easily you teach this subject. So much so that I, as a layman, can clearly understand the problem at hand. If I wanted to learn graphics programming, where should I start? I don't even know how to search for it, and ChatGPT isn't much help.
@LabiaLicker · 3 months ago
look through doom and quake source code
@Clockwork0nions · 3 months ago
He can teach it easily because he’s dead wrong lol.
@mitsuhh · 2 months ago
@@Clockwork0nions elaborate?
@TechArtAid · 2 months ago
But wait - isn't Nanite adding a fixed cost to the frame too? In that sense, a simple mesh doesn't cost 1.28 ms, because adding another mesh wouldn't give you a 100% cost increase. Also, I think its advantage is not making things faster (though it's sadly advertised as such - infinite geo). Isn't it, as with all virtualized techniques, to make the cost more predictable and uniform? About mesh topology vs AI - it's one of the hardest problems in the graphics field, an obvious goal for many talented people but one that still eludes a stable solution. But I'm all with you against the fake expectations and YouTubers' misleading "tests". Thanks for giving it a proper profiling treatment & explaining the basics
@ThreatInteractive · 2 months ago
"But wait - isn't Nanite adding a fixed cost to the frame too?" No, read our pinned comment.
@furiousfurfur · 2 months ago
I said it on your other video and I will say it again: you really need to look at your script writing. The structure and presentation of your thesis is better here, but it can still use some work. The biggest problem is the attitude you present; your tone and script are both doing you no favors in convincing people. For example, "a measly 11%": not only is "measly" unnecessary there, you also said it with a tone that makes you come across as an edgy teen, or as arguing from an emotional point of view. You present good arguments. But you also come across as either a conspiracy theorist or as egotistical, claiming secret knowledge the rest of the industry doesn't have or is lying about. You could win your arguments better by changing your tone and format, and by putting your money where your mouth is with actual examples. I think getting into the trenches would change your point of view on some of these aspects. You have good points; I just don't think they are a comprehensive, complete or holistic view of the industry.
@B.M.Skyforest · 2 months ago
What many seem to forget is that DLSS and the other upscaling tech were meant to let games run on slow hardware. And now they're required to be ON if you want a nice framerate on your top-notch PC at ultra settings. It always makes me laugh, and sad at the same time, to see 2010s-level graphics at barely 30 fps on modern machines. We had better-looking and faster games back in the day.
@imphonic · 3 months ago
This kind of thing has been bugging me for such a long time and it's good to know that I'm not completely insane. I'm currently using Unreal Engine 5 to prototype my game, using it as a temporary game engine (like how Capcom used Unity to build RE7 before the RE Engine was finished). My game will be finished on a custom game engine, which I will open-source when I'm finished. I don't want my debut to be ruined by awful performance and questionable graphics quality. I currently target 120 FPS on PS5/XSX, not sure what resolution yet, but all I know is that we're in 2024 and a smeared & jittery 30 FPS is simply unacceptable. I'm not trying to compete with Unreal/Unity/Godot, but I am interested in implementing a lot of old-school techniques which were very effective without destroying performance, while also exploring automated optimization tools rather than pushing the load onto the customers that make my career possible. The neural network LOD system is intriguing, and it might not be perfect, but it might still be a net improvement, so I'll keep that one in mind. Edit: I might not be able to finish the game on the custom engine, and might just have to bite the bullet and ship on UE5. Game engine development is simply super expensive and I don't have graphics programming experience. That doesn't mean it won't happen - it's still possible if I can, say, find a graphics programmer - but I'm no longer comfortable guaranteeing that statement. However, I will be shipping future games on my custom engine to avoid the UE5 problem from plaguing future ones.
@4.0.4 · 3 months ago
You're either a genius or insane. Much luck and caffeine to you regardless (we need more rule breakers).
@MrSofazocker · 2 months ago
Starting a game project in UE5... good luck with that. They even tell you to stay on UE 4.26. Seeing you playing more Minecraft in your recent video uploads, I call cap. If you just want an editor, you can still use Unreal Engine without the engine part and write your own render pipeline. There's no point in rewriting your own graphical editor, asset system, sound system, tools and packaging for PS and XSX... especially getting it all to work. I imagine you don't even have a dev console for any of those, so good luck getting one as an indie dev.
@moncefbkb9353 · 3 months ago
I'm not smart enough to understand even half of what you said in the video, but I know that unreal engine 5 has many problems and barely anybody talks about them, so thanks for your work.
@2strokedesign · 3 months ago
Very interesting tests, thank you for sharing! I was wondering about memory consumption though: I would think a single LOD0 Nanite mesh is lighter than, let's say, LOD0-LOD7 combined.
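On the memory question, a classic back-of-envelope check: a full LOD chain costs less than it sounds, because each level typically keeps only a fraction of the previous level's triangles, so the whole chain converges toward a geometric-series limit, about 4/3 of LOD0 at the classic quarter-per-level ratio. (The 0.25 ratio here is an assumption for illustration; real reduction settings vary.)

```python
def lod_chain_size(lod0_tris: int, levels: int, ratio: float = 0.25) -> float:
    """Total triangles across a LOD chain where each level keeps `ratio`
    of the previous level's triangles (0.25 = quarter per level)."""
    return sum(lod0_tris * ratio**i for i in range(levels))

lod0 = 1_000_000
total = lod_chain_size(lod0, levels=8)
# The entire LOD0-LOD7 chain is only ~33% bigger than LOD0 alone,
# since 1 + 1/4 + 1/16 + ... converges to 4/3.
print(total / lod0)  # ~1.333
```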
@Neosin1 · a month ago
This is exactly what I was telling people, but no one believed me! I taught 3D modelling at an Australian university 15 years ago, and back then we modelled everything by hand, which meant our models were very low-poly and optimised! Nowadays, devs just scan in hundred-million-polygon models with 1 click, call it a day, and expect the software to do all the optimisation! This is why UE5 games run like garbage!
@nVinter · 3 months ago
I've never been interested in graphics engineering before watching your videos. You're a great communicator, and I hope to see you succeed on YouTube, as channels like this are becoming rare.
@StCost · 3 months ago
Thanks, this really helps me understand how to optimize my game better. If you want the best performance, you've got to make everything perfect manually
@riaayo5321 · 2 months ago
"It would have to be free" - and thus the unsustainable, artificially low cost of entry for AI products that don't actually generate profit marches on. I do appreciate the in-depth look at Nanite's problems; I don't mean to sound sour on that. But AI is in a huge bubble. Companies are going in on these models at an already artificially low cost of entry and are *still* losing money. It's just not sustainable, and I'm not sure "make these tools exist so our less powerful GPUs have more value because games are more optimized" is a good enough return on investment for AMD or Intel. I'm not saying I know the answer, other than that Epic just needs to stop sunsetting older, working methods of optimization.
@sylphianna · a month ago
dog heard the word AI and got scared. This is the kind of thing AI is actually supposed to be used for: making an approximation of something we already have, which is the literal main driving force behind LODs. Especially because this isn't meant to be a replacement for, or a better version of, what we have, but a deliberately lower-quality one. That's the folly of what companies are trying to do with generative AI: making a shittier version of things while thinking it will be better or more convenient, when they mostly just create more work turning whatever the generative AI spat out into something usable, when it would be easier in the end for a human to just make the original asset and get a better outcome from the start. Meanwhile, what's suggested here is, in essence, the opposite of all that nonsense
@sylphianna · a month ago
tbh the fact we haven't already developed AI that could make LODs before all this gen AI stuff made people scared of the word is a tragedy
@jcm2606 · 17 days ago
@@sylphianna I don't see where riaayo said he was scared? He just said releasing a model that can do this for free would be unsustainable, which it is. AI-driven LOD generation is still an open problem trying to be solved, so any attempts to generate LODs with AI currently would be basically lighting money on fire, since the results would be unusable until a breakthrough happens. If you don't even have a revenue stream to feed that fire, then you don't have a model because you can't afford to build, train and deploy one.
@sylphianna · 17 days ago
@@jcm2606 it is not generative AI so the cost is not so high
@jcm2606 · 17 days ago
@@sylphianna ... deep learning in general is a money pit, not just generative AI. Maybe educate yourself on how people actually design and train deep neural networks before spouting shit like this.
@activemotionpictures · 2 months ago
Thank you! I knew I wasn't going crazy when I started seeing dithered shadows instead of sharp shadows as the default in 5.4. Fantastic technical review, mate!
@plamen5358 · 3 months ago
Ordered dithering looks and feels terrible, can we please not have that 😓 Awesome video otherwise 🙏
@vitordelima · 3 months ago
Subdivision surfaces and MIP mapped displacement maps.
@MrSongib · 21 days ago
Thank god people like you exist. I've wondered about the same stuff before, but as we can see, everyone just praises the Nanite stuff without testing it whatsoever. xd
@johnsmith8112 · 3 months ago
Excellent explanation of nanite! The comparisons with overdraw were immensely helpful for my understanding as a layman. Looking forward to see how your studio develops. I'm glad to see some knowledgeable pushback against developers relying on DLSS to hide lazy implementations, I never knew how much heavy lifting the TAA solutions were doing to try and hide these issues.
@defeatSpace · 2 months ago
Life would be so difficult without people like you.
@samiraperi467 · a month ago
2:51 That is NOT how you write a quarter. What you wrote is a quarter OF A PERCENT.
@crownless236 · 3 months ago
your videos are amazing. just dont forget to spam a bunch of reels with different parts of this video that way your channel gets in the algorithm
@ButtercupVisuals · 3 months ago
These are some valuable and quite important insights on the entire LOD vs. Nanite talk. Thank you so much for that. And more than the UE dev team preaching Nanite over everything else, I think it's those so-called Unreal content creators who are diverting attention away from actually helping people create effective LOD systems and other optimization techniques, and towards fancy features, regardless of how badly those things scale over large production workloads.
@zsigmondforianszabo4698 · 3 months ago
I'd rather think of Nanite as a magic wand for those who don't want to deal with mesh optimization and just want consistent performance across the board without manual work. That hits us heavily right now, but as soon as the technology evolves and everyone has access to modern hardware that can utilize this system, the ease of use and the decent performance will overcome these rough edges. About the development: a 4050 has a 70% performance uplift over a 1060 after 7 years. Call it roughly 10% every year across hardware and software development, and in 5 years Nanite will work out really well for fast game development and consistent performance. PS: we need to mandate that gamedevs give performance statistics for the in-game scene, and the upscaling used, when releasing a trailer :DD
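One small correction to the arithmetic above: performance uplift compounds, so a 70% gain over 7 years works out to about 7.9% per year, not 10% (10% compounded for 7 years would be a 95% gain). A quick check:

```python
# Compound annual growth rate: the 7th root of the total uplift,
# using the 4050-vs-1060 figure quoted in the comment above.
total_uplift = 1.70
years = 7
annual = total_uplift ** (1 / years) - 1
print(f"{annual:.1%}")  # 7.9%

# For comparison, 10%/year compounded over 7 years:
print(f"{1.10 ** years - 1:.0%}")  # 95%
```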
@--waffle- 3 months ago
great vid. more please
@ziadrrr_ 3 months ago
Thanks for bringing this to light. I bought a whole new PC and didn't feel like I upgraded, with how horrible optimization in games is these days.
@stephaneduhamel7706 3 months ago
The point of Nanite was never to increase performance compared to a perfectly optimized mesh with LODs. It is made to allow devs to discard LODs for a reasonably low performance hit.
@doltBmB 2 months ago
if losing 60% fps is "reasonably low" to you
@stephaneduhamel7706 2 months ago
@@doltBmB it's a lot less than that in most real life use cases.
@doltBmB 2 months ago
@@stephaneduhamel7706 Yeah it might be as low as 40%, real great
@Harut.V 2 months ago
In other words, they are shifting the cost from devs to hardware (consumers, console producers)
@ben.pueschel 2 months ago
@@Harut.V that's how progress works, genius.
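(Editor's note: the fps-loss percentages traded in the thread above are easier to reason about as frame-time budgets. A minimal sketch; the 40% and 60% figures are the commenters' claims and the 60 fps baseline is an illustrative assumption, not a measurement.)

```python
# Convert a percentage fps loss into the extra GPU frame time it represents.
# Baseline of 60 fps and the 40%/60% loss figures come from the thread above.

def frame_time_ms(fps: float) -> float:
    """Milliseconds spent per frame at a given frame rate."""
    return 1000.0 / fps

def cost_of_loss(base_fps: float, loss_fraction: float) -> float:
    """Extra milliseconds per frame after losing `loss_fraction` of the fps."""
    new_fps = base_fps * (1.0 - loss_fraction)
    return frame_time_ms(new_fps) - frame_time_ms(base_fps)

for loss in (0.40, 0.60):
    extra = cost_of_loss(60.0, loss)
    print(f"{loss:.0%} fps loss at 60 fps baseline -> +{extra:.1f} ms per frame")
```

At a 60 fps baseline (16.7 ms per frame), a 40% fps loss costs about 11 extra ms per frame and a 60% loss about 25 extra ms, which is why the same percentage figure can sound either tolerable or severe depending on the baseline.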
@eugene3675 3 months ago
Keep up the good fight! Great video
@akkihole_ 3 months ago
I just want to extend my appreciation for your work, and especially this video. I'm a 3D artist, and I feel it's important for me to stay informed on these topics, but it has always been hard to digest the information online because it's so technical in its language. This video helped me understand exactly what I've been wanting to grasp for a long, long time now, and I share the sentiment that things like DLSS and frame gen have been getting a bit over-relied on. It's nice to finally get a summarized view of Nanite and its misunderstandings, explained in a way that's easy to understand. Thank you so much for your hard work.
@FVMods 2 months ago
Minimalist, quality, straight-to-the-point narration, tidy video editing with relevant information, engaging content. Great channel so far!
@pchris 3 months ago
I think easy, automatic optimizations that are less effective than manual ones still offer some value. The fewer resources a studio has to dedicate to technical things like this, the faster they can make games, even if those games look slightly worse than they could if the studio took absolutely full advantage of the hardware. Every other app on your phone being over 100 MB for what is basically a glorified web page shows how dirty-but-easy optimization and faster hardware mostly just enable faster and cheaper development by letting developers be a little sloppy.
@theultimateevil3430 2 months ago
It's great in theory, but in practice development is still expensive as hell and the quality of the products is absolute trash. It's the reason the volume control in Windows lags for a whole second before opening up. The same stuff that worked fine on Windows 95 lags now. Dumbasses with cheap technology still make bad products for the same price.
@pchris 2 months ago
@@theultimateevil3430 When you're looking at large products made by massive publicly traded corporations, you should never expect any cost savings to get passed on to the consumer. I'm mostly talking about indies. The cheaper and easier it is to make something, the lower the barrier to entry, and the more you'll see small groups stepping in and competing with the massive selfish corps.
@azrielsatan8693 2 months ago
As a certified UE5 hater since the first few games that released with it, I'm happy to see more discussion about it. The VRAM usage also needs to be talked about. Games requiring a minimum of 8 GB of VRAM for 60 fps is ridiculous (though Nvidia skimping on VRAM in their cards is also to blame).
3 months ago
Finally someone speaking up about these issues! Thanks for making this video, always detailed and in-depth but still easy to follow, coming from a guy with only basic graphics and rendering knowledge.
@gurujoe75 2 months ago
I'm not a programmer, but I can see that for ten years there has been a wait for a big new graphics paradigm. Goodbye classic render pipeline as they teach it in school, goodbye rasterization, goodbye classic extremely inflexible polygons, goodbye endless problems with shadows, LOD, UV mapping, etc. UE is the only big multiplatform engine today. And R&D is extremely expensive. You understand what I'm implying here between the lines.
@matjakobs8612 17 days ago
All UE5 games run like dog💩. Now I know why 😂😂
@ThreatInteractive 17 days ago
Not necessarily. I would watch our SH2R video; you'll find it's things specific to that game that kill performance.
@contra763 15 hours ago
Also, not to mention most UE4/UE5 games feel very fake. Like, the environment feels very fake compared to many other engines.
@contra763 15 hours ago
Also, the textures look very plastic in UE4/UE5 games.
@AshrindyAnimates 22 days ago
THANK YOU, I'VE BEEN ARGUING WITH MY FRIENDS ABOUT THIS FOR MORE THAN A YEAR