[What] Do We Need to Render BILLIONS of Polygons in Real-Time - The ULTIMATE Guide to Nanite

48,157 views

MARKitekta

1 day ago

Comments: 411
@markitekta2766 1 month ago
IMPORTANT NOTICE - One question that popped up is whether Nanite's software rasterizer runs on the CPU or the GPU (around the 24:35 mark). The answer: Nanite's software rasterizer is GPU-BASED. 🎮 I got my facts wrong, I apologize. It uses compute shaders to handle small triangles (clusters with edges smaller than 32 pixels) and dynamically bypasses the traditional hardware rasterization pipeline. This GPU-based approach ensures performance scalability and keeps the rendering pipeline efficient, even in highly detailed scenes. It minimizes CPU-GPU overhead and leverages the massive parallelism of modern GPUs, which is a core aspect of Unreal Engine 5's real-time rendering magic. Thanks again for the support, and keep the feedback coming! If you've got more questions, feel free to ask and I'll dig up the answers. 😊
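The path selection described in the pinned comment can be sketched roughly like this. This is an illustrative stand-alone sketch, not Epic's code: the 32-pixel threshold comes from the comment above, and the function and constant names are made up.

```python
# Illustrative sketch only (not Epic's code): picking a raster path per cluster.
# The 32-pixel threshold comes from the pinned comment; names are made up.
SOFTWARE_RASTER_MAX_EDGE_PX = 32

def choose_rasterizer(cluster_edge_px: float) -> str:
    """Small clusters go to the compute-shader (software) rasterizer,
    which still runs on the GPU; large ones use the hardware pipeline."""
    if cluster_edge_px < SOFTWARE_RASTER_MAX_EDGE_PX:
        return "software"  # GPU compute shader
    return "hardware"      # fixed-function raster pipeline

print(choose_rasterizer(8.0))   # software
print(choose_rasterizer(90.0))  # hardware
```

Both branches stay on the GPU; "software" here only means bypassing the fixed-function raster units, which is the distinction the correction is about.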
@miranteazi 29 days ago
please do more!
@AvalancheGameArt 29 days ago
Of course it is GPU, graphics acceleration was born to do that... Bro, stop liking every opinion and study. If the Nanite rasterizer ran on the CPU, it wouldn't finish on the scale of seconds... maybe days lmao
@markitekta2766 28 days ago
@@AvalancheGameArt Thank you for your comment. It seems I have a lot more to study, but that is OK 😃 During my exploration I stumbled upon one of the earlier implementations of a software rasterizer that, as I understood it, ran on the CPU, and hence made the connection that this is the case here. I agree that getting the facts right is important, but with a community of people politely correcting one another, I think there is nothing we can't get right 😃
@AvalancheGameArt 28 days ago
​@@markitekta2766 Of course in the beginning it was the CPU... then they had the problem of parallelization: plotting billions of pixels in less than 16 ms was kind of impossible, still is, so the GPU was born. Now we have a "swap buffer" architecture. If the CPU is fast enough, it does all its work and waits for the GPU to hand back the data; otherwise, once it finishes it grabs the buffer and displays it. Now everything can be done in a short amount of time, even though it created a new overhead for synchronizing the CPU and GPU. In order to have fully ray-traced stuff we need these companies to work on a new rendering approach, or rather a PURE approach. Mixing rasterization and ray marching, which is super expensive, is a bad idea and adds overhead per se. Then you add bad practices from developers and you get 360p-native-resolution games running at 2 fps with a maximum texture filtering multisample of 8. A few years ago it was 16!!!
@tirkentube 28 days ago
@@AvalancheGameArt You're like a drunk guy who wandered into class: you hear the smartest guy in the room make one little mistake during his presentation, and you start shouting dumb sht you've heard someone else say before.
@mariprrum 24 days ago
this went from "I have no idea what he is talking about" to "oh yeah, now makes sense" really quick
@markitekta2766 24 days ago
🤣😂😆 That's a good one, I was going for that response 😃 Thanks
@3.2213 20 days ago
Well said. I repeated this video like 30 times and I'm still not confident I understand it.
@markitekta2766 20 days ago
@@3.2213 Thank you for the response. If "Well said" is referring to the comment by mariprrum, perhaps I can offer some more context. During the initial presentation of Nanite by Brian Karis, one of the comments was: this went from "oh yeah, makes sense" to "I have no idea what he is talking about" really quick. I believe mariprrum was referencing that event, but applied to this video, where things went the other way and started to make sense. If you still find certain aspects difficult, let me or the community know and we can see how to help you out. Perhaps you can also check out the videos (I placed the titles and creators at the bottom line) for the topics that seem difficult 😃
@markitekta2766 20 days ago
@@3.2213 Also, I agree with you: I made the video, and I'm not confident I understand everything completely. For example, I got the CPU software rasterizer claim wrong, which is explained in the pinned comment.
@AsmageddonPrince 10 days ago
This is by far the best video describing what Nanite is and how it works that I've seen so far. It really makes it seem like an incredible technical feat, but damn if its only valid use case isn't salvaging the performance penalty of meaninglessly bloated mesh complexity.
@markitekta2766 10 days ago
Thank you for the words of support 😃 I think your comment really summed up the entire situation nicely. On one hand, it is a ton of previous experience and results joined together to remove the manual work people do when dealing with high-poly assets. On the other hand, given that we are living in fast-paced times, people usually want instant results, so giving artists or users the ability to instantly use high-poly assets can be extremely beneficial for forming a general depiction of the initial idea. But as we have mentioned throughout the comments, Nanite should not be the only tool in the toolbox 😃
@kilonaliosI 1 month ago
I chose to go with voxels + vertex colors + auto LODs for Godot; I use a quad per pixel with no UVs and no textures. From what I've seen in benchmarks, shaders and textures are actually much more expensive than poly counts. So I will see in practice how well that works.
@markitekta2766 1 month ago
Looking forward to seeing it in action. I've never used voxelized assets, apart from parametric modeling for 3D printing, and I haven't researched the use of materials with voxelized geometry. Is it really that much of an issue?
@fudjinator 29 days ago
24:30 Doesn't it use a compute shader instead of the traditional pipeline? I don't think the rasterization is done on the CPU. I'm pretty sure that's what you meant, but it's a big distinction.
@markitekta2766 29 days ago
You are right, I got my facts wrong and misunderstood that the software rasterizer runs on the GPU instead of the CPU. I pinned a comment so people will hopefully pay attention, but I'm glad I have so many committed community members ready to share in the knowledge. Thank you for that 😃
@hongwuhuai 24 days ago
I am a game developer, but not a graphics programmer. This video is very, very well explained; it satisfied my curious mind indeed. I will probably never switch to graphics programming, but I like to understand (from a high level) how cool tech works regardless. THANK YOU.
@markitekta2766 24 days ago
Not to worry, I am an architect and a designer, so I'm not connected with games or graphics programming at all 😃 However, since architecture overlaps with a lot of fields, I think it is important to understand what is happening under the hood, especially when you want to use different tools and technologies for real-time rendering, in this case for interior and exterior scenes in architecture and urban planning. Thank you for the support 😉
@cod3ddy74 12 days ago
I don't understand any of this, but I know I'm gonna need this knowledge one day. Thanks for the wisdom shared.
@markitekta2766 11 days ago
I'm glad I could help, thanks for the support. I know how you feel, and this is how I start with any project, but the more I practice, the more it all starts to make sense. I gave a lecture, Why Study Architecture, where I briefly talked about the studying process. If you want, maybe you can get some insight into how to retain the information better, and also encode it more efficiently 😃
@LucydDev 19 days ago
Due to potentially worse overdraw and the overhead, it's basically impractical to use Nanite unless you're trying to get cinematic-level assets into your scene. Nanite generally runs worse on my hardware than traditional LODs, and it makes targeting lower-end hardware difficult. The problem I (and many others) have had is with Epic thinking it's the end-all-be-all replacement for tessellation, which it is not. Unreal Engine 5 ditched tessellation, and now if you want to make things that made great use of tessellation (water, or realistically displacing snow) you have to either use Nanite tessellation on the landscape (for snow displacement), or start with a high-poly mesh and use Nanite for LOD (mainly for water). These solutions run a lot slower on my system than simple tessellation, and they don't offer as much flexibility. Nanite =/= tessellation imo, and it would be great to have both. Nanite isn't really the end-all for optimization like people are touting; it's really just for allowing high-poly meshes in games. It's an amazing tech, but gutting tessellation was a bad decision imo.
@markitekta2766 19 days ago
Nicely said: in the quest to solve some problems, you create others; there is usually some give and take, or basically compromises to be made. Thanks for sharing your thoughts, they are greatly appreciated 😉
@zoeherriot 12 days ago
I don't think you know what Epic are thinking... at all. One of the main purposes of Nanite was to simplify the process of art creation. Games are taking so long to make now because, as the demand for higher detail has increased, there haven't been any real advancements in the process of making assets. We are still using high-res models to bake details into lower-res versions of the asset, which takes more and more time to do. When I started in the industry, a 5,000-poly character was unheard of. Now we are making assets orders of magnitude larger, while the overall game worlds are also correspondingly larger. Nanite is a tool designed to alleviate some of that workload while maintaining high-resolution visuals. This is, after all, (one of) the purposes of a game engine. Now, people seem to be under the impression that all games coming out on Unreal are using Nanite. This is BS. Nanite is barely even finished as a feature; just because it's in UE does not mean it's complete or recommended for use. This has always been the case with UE. The features have to come early, because game dev cycles are so long that the bet is hardware will be ready by the time the game is completed. And it's up to the devs to investigate whether it makes sense to use the new features (knowing the risk of bugs and performance issues) given the target release date. When it was first introduced, it was intended for future hardware with high-performance SSDs and high GPU bandwidth. Fortnite gets away with it because, let's face it, the game is not particularly high-poly. But yeah, it's going to run worse on your hardware, because PCs haven't caught up to the hardware performance profiles that Nanite is intended for. That's why (contrary to apparently everyone's opinion on the internet) it's not being used in every UE game. Nanite was released as an early-access feature in 2021...
Many of the games that began development during 2021 are STILL IN DEVELOPMENT. Most studios that started development in 2021 would have thought very hard indeed about using a tech that is not guaranteed to be complete or fully stable by the time their game releases.
@LucydDev 12 days ago
@@zoeherriot You are assuming a lot of stuff that I did not say in my comment. I understand it's not finished and is a new tech. I understand it's meant to make the artist workflow easier. And yeah, not everyone uses it or needs to use it. My issue is completely gutting tessellation in favor of Nanite, which FORCES me to use it, because I planned on having features that rely on the more edge-case uses of tessellation (real-time snow displacement). They are trying to bridge the gaps that Nanite didn't cover (and tessellation did) at release, but it's just not there yet, and I am stuck in a place where using Nanite for these cases tanks my performance. I want to target/support older hardware; games like RDR2 had real-time displacing snow that ran performantly on last gen. Nanite is an amazing tech, it's just that 1. it's not quite there yet, and 2. gutting fully stable features for experimental ones doesn't make much sense to me. They could have kept tessellation until Nanite was finished and actually covered all of the use cases. I also stated that OTHER people seem to think Nanite is a performance solution or a solution to LODs. I understand it was never meant to be, since both still exist; it was instead meant to simplify workflows and allow high-poly meshes to be used. I was a bit heated about tessellation being gutted, so I might not have communicated that properly in my original comment, but I am not one of those people. Tell me: if Nanite was experimental and not meant for production on release of UE5, why did they get rid of tessellation on release of UE5? It makes no sense to me. I understand Epic are making decisions that they think will benefit the artists and developers that use their engine, and that Nanite will do so. But gutting tessellation, while affecting only a small set of edge-case users who don't just use tessellation to get small details on meshes, still cuts down the tools developers have and hurts those edge-case users.
@zoeherriot 12 days ago
@@LucydDev The usual approach is to use an older version of UE that has the features you need. I'm currently still shipping a game written on 4.24. I know a lot of indies like to use UE, but it's really designed for big studios. We have the luxury of writing components or entire features, or, if there is an issue, I can go have a beer with the engineers from Epic because I live near them. Or, you know... use the incredibly expensive UDN. However, I usually work on games where we make our own engine, and people say "why do you write your own engine, when UE exists?" Well... this is why. Ironically, we end up having to build many of the tools that exist in UE because artists demand them, but at least we can control the direction of the engine. There is no straightforward answer, unfortunately. It is entirely possible UE is just not right for your use case.
@LucydDev 12 days ago
​@@zoeherriot I understand. At this point it's too late for me, as I started when UE5 was fairly new and thought the features would work well. I was under the impression that I could work around or solve these issues, and continued with my project. I have a few ideas to pivot away from Nanite tessellation, which would give me even better flexibility at the cost of visual fidelity/believability (using parallax occlusion mapping; flexibility because Nanite landscape tessellation is a pain in my engine version, but also in general from what I hear about other versions). It's sad, because I now wish I had started with UE4, not only because of the lack of tessellation, but because of other features that were deprecated and removed (mainly GI options: UE5 only has SS (experimental), Lumen, or baked, while UE4 has much more). It would be nice to test how those other GI options would work with my project, as I often get artifacts with Lumen, as well as some performance loss.
@ThadeousM 1 month ago
Thank you for your time spent here, friend! You both taught me some new things, like the clockwise winding logic used in screen space, and helped reinforce information I had already digested.
@markitekta2766 1 month ago
I'm glad you found value in it. Was there anything about screen-space reflections here, or was it part of one of the other techniques?
@ThadeousM 1 month ago
​@@markitekta2766 It was the comment around 19:41, during the "Backface Culling" section, explaining that we can check the direction of a face by indexing its vertices in a clockwise fashion:
- if they appear clockwise, we're looking at the front;
- if they're counter-clockwise, it's the back.
This is how it works in screen space.
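The winding test described above fits in a few lines. This is a stand-alone illustrative sketch (not engine code), assuming screen coordinates with the y axis pointing down, so a positive 2D cross product means the vertices wind clockwise on screen:

```python
# Sketch of a winding-order backface test in screen space (illustrative only).
# Assumes y grows downward, as in typical raster coordinates, so a positive
# cross product of the edge vectors means clockwise winding (= front face).
def is_front_facing(p0, p1, p2):
    cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return cross > 0  # clockwise on screen => front face

print(is_front_facing((0, 0), (1, 0), (0, 1)))  # True: clockwise, front
print(is_front_facing((0, 0), (0, 1), (1, 0)))  # False: counter-clockwise, back
```

Which winding counts as "front" is just a convention (APIs let you flip it); the point is that the test only needs the projected 2D vertices.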
@markitekta2766 1 month ago
@@ThadeousM Got it, thank you for the clarification and for letting me in on a new thing I can explore, screen-space reflections, when I embark upon the journey of exploring light, materials and shadows 😃
@tbird-z1r 22 days ago
Museums should have multiple different LODs depending on how far away the sculpture is.
@markitekta2766 21 days ago
😃 That's an interesting suggestion. Luckily, the sculpture's appearance changes based on how close we are to it, so we actually create the different LODs in our heads based on proximity. Each receptor in our retina can only pick up one piece of information, similar to how Nanite aims for one triangle per pixel, I guess.
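The traditional distance-based LOD switch that the museum joke alludes to can be sketched like this; the thresholds are made up for illustration:

```python
# Toy distance-based LOD selection (illustrative; thresholds are made up).
LOD_SWITCH_DISTANCES = [10.0, 30.0, 100.0]  # metres at which detail drops

def pick_lod(distance: float) -> int:
    """0 is the most detailed mesh; higher numbers are coarser."""
    for lod, limit in enumerate(LOD_SWITCH_DISTANCES):
        if distance < limit:
            return lod
    return len(LOD_SWITCH_DISTANCES)  # coarsest LOD beyond the last threshold

print(pick_lod(5.0), pick_lod(50.0), pick_lod(500.0))  # 0 2 3
```

The hard switch between whole meshes is exactly what causes the popping that Nanite's per-cluster refinement avoids.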
@realdragon 18 days ago
Don't worry, I have a built-in system to blur out details at longer distances
@nameinvalid69 19 days ago
My 2 brain cells honestly cannot comprehend this crazy alien tech, but I appreciate that it exists. I'm no pro, I just do some random 3D stuff, and the ability to just put things down and have them JUST WORK without all the boring steps is amazing. More time for creativity, less time dealing with the soul-crushing stuff.
@markitekta2766 19 days ago
I understand what you are saying. 3D artists usually just want to deal with the creative stuff and not worry about how the tool works. But sometimes, knowing what happens under the hood allows artists to create something that is both aesthetically pleasing and cheap performance-wise. Either way, thanks for commenting and participating in the discussion 😃
@ManthosLappas 8 days ago
As an artist and indie dev myself, who creates a large portion of these assets, I have to relate and agree. However, it would be great for us to understand more about how things work under the hood. Similarly, I have technical issues and limited understanding when trying to grasp all of that myself 😂 (especially being an artist primarily).
@markitekta2766 8 days ago
Thanks for sharing, I'm glad we have someone who can attest to it from a practical-application background 😃 When they released a very short video explaining what Nanite is, they said nothing about the technical parts, only that they wanted to give artists more freedom in handling scene assets. As a result, when artists use it they get empirical results, based on trial-and-error experience; wanting to understand what happens under the hood is what drove me to make this lecture, to get a better grasp of both. 😃
@mushudamaschin2608 1 month ago
Thanks a lot for the overview! What's your opinion on using Nanite to render those 30-million-poly statues, versus rendering them with traditional LODs, view-space culling and GPU instancing? Another question: how do you control how dense the "merged" mesh is that Nanite uses for the one geometry draw call? Is there a theoretical maximum depending on the rendering resolution, and is it guaranteed to be handled by the GPU?
@donovan6320 1 month ago
@@mushudamaschin2608 Realistically, that one contrived example would probably run just fine on modern GPUs. Do note that instancing and LODs are mutually exclusive: LODs create different geometry, while instancing only works for rendering the same geometry over and over. Realistically, though, you could instance the models based on a couple of LODs, or batch the models together, as they share the same material and would presumably be static, especially if you were aggressive with the LODs, because you don't even need a million-polygon statue close up. That kind of detail you should probably bake into the model; it makes the model much cheaper to render, and baking the data into texture maps isn't going to decrease quality all that much. In any case, it would probably run fine even on a traditional renderer. That isn't to say Nanite is worthless; however, I'm pretty sure it is better suited to VFX and cutscenes than actual games, where the high performance penalty isn't a big downside when the alternative is a 10x performance drop compared to Nanite. As for the merged mesh, there's probably a bias somewhere. Finally, yes, it should be handled by the GPU, since any GPU that couldn't handle that calculation by necessity also could not handle Nanite. Nanite, from what I understand, relies on compute shaders, which is also how it's handled on the GPU.
@markitekta2766 1 month ago
Thank you, mushudamaschin2608, for the clarification. I think that instancing helps lower the complexity of the scene when it comes to locating the clusters in the tree or the DAG. For example, all of those 33-million-polygon statues were used as instances and placed at the exit in the "Nanite in the Land of Lumen" demo, as was discussed by the creators. But as stated, when you have large occluders with a huge number of polygons, like photogrammetry or movie-set-quality assets, sometimes you just want to use them without manual prep work like normal baking, and that's where Nanite should shine. Someone raised the question of whether there are any games that use Nanite, apart from Fortnite?
29 days ago
There is a video from Threat Interactive that shows some downsides of carelessly using Nanite instead of traditional LODs and other methods. It mainly focuses on why most games using Nanite today have absolutely abysmal performance, but it also demonstrates quite well how and when to use Nanite, and when not to. kzbin.info/www/bejne/g2GTdXqgdrVgo7c
@OverJumpRally 1 month ago
Great video! I wish you talked a bit about VSM and how the elements outside the frustum can still cast a visible shadow in the scene.
@markitekta2766 1 month ago
I wanted to tackle the aspect of lighting and shadows in a different video, but it is a great topic and I can't wait to research and share what I've come up with. Thank you for the suggestion and for watching :D
@wedusk 1 month ago
Thank you for the very nice video. One minor point to note: you mentioned software rasterizers being used in Nanite for culling microtriangles running on the CPU. Although these software rasterizers could run on the CPU, in practice they are almost always run as regular compute shaders on the GPU, bypassing the fixed-function raster pipeline.
@markitekta2766 1 month ago
Thank you for pointing it out. Yeah, I noticed I got my facts wrong, so I pinned a comment with the correction. I appreciate you bringing this to the attention of the community here 😃
@SmellsLikeRacing 20 days ago
Awesome video. I knew some of the stuff already, so it was very satisfying each time you delivered a topic I was hoping the video would cover.
@markitekta2766 20 days ago
Thanks for the support, glad you enjoyed it 😃 When I started exploring the topic, I didn't know any of this, but during the studying process I ran into the satisfying portion you mentioned. It is really nice to hear something you are already familiar with explained in a different way, and then you get a much broader perspective and deeper understanding. Sort of how the community members share their experiences and help us grasp the current state of using these tools 😉
@Mega-wt9do 1 month ago
Wow, this is all really well explained
@markitekta2766 1 month ago
Thank you very much, there is a saying - If you can't explain it simply, you don't understand it well enough. Even though it lasts for 28 minutes, hopefully it is understandable 😄
@rabellogp 1 month ago
Nice overview, man! From time to time I rewatch that SIGGRAPH talk trying to understand it a bit better... It's a really complex talk that requires a lot of background knowledge to grasp... Just one observation: the software rasterizer happens on the GPU, not the CPU. It differs from the hardware rasterizer by utilizing compute shaders instead of the dedicated raster part of the graphics pipeline.
@markitekta2766 1 month ago
Thank you for providing feedback. Yeah, I came back to the lecture every once in a while, hoping that 1x speed and really committed listening would help me understand, but I was lacking the terminology and the experiences I picked up later on. Even now I can watch the lecture and not get everything, there are so many nuances. As for the software rasterizer being on the GPU instead of the CPU, it seems I got my facts wrong, or implied to myself that if this one is hardware, software must go over there, based on prior information that software is tied to the CPU. Thank you very much for pointing it out, much appreciated.
@JanPatatoes 10 days ago
We are using a similar tech at work, and our TDs say that everything using our "nanite" is cost-effective when it has a fair amount of geometry in it, nothing crazy though, just enough to generate good clustering. So I wonder, why in your opinion should we not use it everywhere we can (so mostly static geometry, crafted to avoid overdraw as much as possible)? You seem to think it is not worth it on models under millions of polygons. But regardless of potential overdraw issues, this has eliminated the need for LOD creation (which was mostly automated, but subject to retakes in some cases, or more carefully crafted in others), and that is quite a win, especially for the popping issues.
@markitekta2766 10 days ago
Thanks for sharing your thoughts, it is much appreciated 😃 I agree with you: using Nanite avoids having to make the LODs manually, which can take a lot of time, and it also improves performance when you have a lot of polygons in a scene. Based on the cases I saw, using Nanite in low-poly scenes makes little to no difference compared to the traditional pipeline. If I understood correctly, the process of generating the LODs for a low number of triangles, and then basically displaying a similar number, does not increase performance and can even diminish it. I am in the midst of finishing a comparison video that should show this in action, and I would like to hear everyone's opinion on it 😃
@mohammaddh8655 4 days ago
Bro, your videos are just on another level of knowledge, thanks for sharing
@markitekta2766 4 days ago
I'm glad you found it useful. I wanted to follow a problem-and-solution approach, asking why we need something instead of just reciting facts. That's usually how it goes in life, and why it is relatable 😃
@onedeadsaint 1 month ago
I guarantee that this will be used in classes. If you're one of those students, hey! You better be reading this _after_ the video is over.
@markitekta2766 1 month ago
I made it to be seen and used for learning in classes, so the editing is not all it could be, with all the bullet points taking up a huge portion of the screen. I read the comment, and hopefully other people will as well :D
@migueldetailleur7637 14 days ago
Your content is absolutely fascinating and incredibly well done! I really appreciate how you explain complex topics in simple, everyday language, making them accessible to everyone. The use of clear visuals to break things down is especially effective! A couple of suggestions to make it even better:
- Consider uploading your videos in 2K or 4K for an even more polished viewing experience.
- Slowing down your speaking pace just a bit and incorporating more short pauses between sentences could enhance the viewer experience. It gives us more time to absorb the information as we listen, watch, and process everything.
Keep up the amazing work, you're doing a fantastic job! 💪💪💪
@markitekta2766 14 days ago
Wow, thank you for the kind words of support and your advice 😃 I'll be sure to take it into consideration moving forward. While we're on the topic: is there a way to record a 1920x1080 screen and get it at a larger resolution? I understand doing this with a camera, but I'm not sure how to go about it during screen recording. The advice about the talking rate is great, but it can impact the duration of the video, which may deter some people from watching if it gets too long. But if it makes the video more comprehensible, which I agree is important, then I'll try to implement it 😃
@croopercrat 6 days ago
24:37 Don't you mean the GPU here? The software rasterizer makes use of GPGPU compute support? Edit: my bad, missed the pinned comment 😅
@markitekta2766 6 days ago
Yeah, sorry for the mistake, and thank you for bringing it to my attention 😃
@kuzetsa 29 days ago
Thank you for this. The hardware side of this has come a long way since the '90s, when even hardware transform and lighting on the GPU was considered innovative.
@markitekta2766 29 days ago
I'm glad you found it useful. Yeah, the more I research this topic, the more I realize how much I don't know. So there is still a long way to go to learn everything that has happened since the '90s, and who knows what they will strive towards next 😃
@josiahmos5880 1 month ago
Very interesting. I use Nanite for foliage as well. It removes pop-in, and in the case of very large numbers of trees it performs better than non-Nanite for some reason. The only problem is that when trees shrink to about the size of a pixel, it looks like watching an old TV with no signal, just a bunch of noise. Somehow, distant adjacent objects need to be combined for smooth distance rendering.
@markitekta2766 1 month ago
Thank you for sharing the info, this is really useful. I saw the "preserve area" feature that should help with that. Also, Nanite has evolved to allow tessellation now, even though it was not possible back in 2021. So, yeah, I guess they will handle these things as well. Can you share some snippets of the performance when using Nanite on trees?
@Paratzi 1 month ago
I originally watched this video to see how our reality could potentially be polygons... but ended up learning about things I'll never use 😅
@markitekta2766 1 month ago
Yeah, sometimes the things we think are useless help us gain an interesting understanding of our surroundings later on ;-)
@dkoerner 28 days ago
Awesome video! Thanks for making it. It really helped fill some of the gaps I had with the original presentation. What is still missing for me is the detail of how the visible set of clusters is determined. I understand that this happens on the GPU, which means there is some DAG traversal done on the GPU? How would that work? Further, if clusters are deemed visible but are not on the GPU yet, how is it communicated to the CPU to upload the missing clusters and update the DAG on the GPU?
@markitekta2766 28 days ago
That is a great question, and one I would also appreciate someone helping us answer. I had the same question while researching this topic, but only found that they strive to create clusters, both the initial ones and the subsequent ones, that have a minimal boundary. I think there is some mention of this in the presentation PDF at advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf, and here is the quote: "QuickVDR saw that the goals of grouping could be optimized for directly. In terms of building the binary tree they realized graph partitioning could optimize for minimal shared edges. But funny enough for deciding dependent nodes, which was their solution to locking boundaries, they did a greedy optimization with a priority queue disconnected and after building the binary tree through graph partitioning." Thank you for the support, I'm glad it was a nice addition to the lecture, but there are a lot of knowledge gaps to fill before I can completely understand that lecture 😃
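One commonly cited intuition for how the cut can be found without an explicit tree walk: if each cluster stores its own simplification error and its parent group's error, the "should I be drawn?" test becomes purely local, so every cluster can be evaluated in parallel on the GPU. The sketch below is only a guess at that idea with made-up numbers and names, not Nanite's actual code.

```python
# Hedged sketch of an error-driven cluster cut (illustrative, not Nanite code).
# Each cluster knows its own error and its parent's; the test is local, so all
# clusters can be checked in parallel instead of traversing the DAG.
def projected_error(world_error: float, distance: float) -> float:
    # Crude stand-in for projecting a world-space error to screen pixels.
    return world_error / max(distance, 1e-6)

def draw_cluster(own_error, parent_error, distance, threshold_px=1.0):
    """Draw this cluster when it is fine enough but its parent is not;
    for a given distance, exactly one refinement level passes the test."""
    own_ok = projected_error(own_error, distance) <= threshold_px
    parent_ok = projected_error(parent_error, distance) <= threshold_px
    return own_ok and not parent_ok

print(draw_cluster(0.5, 2.0, distance=1.0))  # True: right level of detail
print(draw_cluster(0.5, 0.8, distance=1.0))  # False: parent already fine enough
```

Because parents are built to have strictly larger error than their children, the two comparisons can never select overlapping levels, which is what makes the parallel formulation safe.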
@MCSteve_
@MCSteve_ Ай бұрын
Amazing presentation! It really sets the stage with information on standard techniques along with the details of Nanite, very lovely. I don't use Unreal so I have a question, but I am curious about the technology. Does Nanite allow for tweaking of the number of polygons per cluster, or how "deep" the tree of clusters goes? Or is the technology more so static in the editor? Just curious how it would impact performance. Glad you share use cases, again great work.
@markitekta2766
@markitekta2766 Ай бұрын
Thank you very much, I'm glad you found it insightful, even as someone who, as you said, is not a UE user. I think you can set the numbers for the error and based on that the clusters will form, but I believe they always strive for those 128 triangles, following the logic of 128-pixel pages for virtual textures; in this case Nanite wants triangles to be about the size of pixels, I guess
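That "triangles about the size of pixels" goal comes down to projecting a cluster's geometric error into pixels. A minimal sketch of that projection, assuming a standard perspective camera (the formula and numbers here are illustrative, not taken from Nanite's source):

```python
# Hypothetical sketch: projecting a cluster's geometric error (in world
# units) to pixels -- the kind of test used to decide when a coarser
# cluster is "good enough", e.g. when its error stays under ~1 pixel.
import math

def projected_error_px(geometric_error, distance, fov_y_deg, screen_h_px):
    # With perspective projection, world units per pixel grow linearly
    # with distance, so the on-screen error shrinks as 1/distance.
    pixels_per_unit = screen_h_px / (2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0))
    return geometric_error * pixels_per_unit

# A 2 cm simplification error, 1080p screen, 60 degree vertical FOV:
print(round(projected_error_px(0.02, 5.0, 60.0, 1080), 2))    # 3.74 px: visible up close
print(round(projected_error_px(0.02, 200.0, 60.0, 1080), 2))  # 0.09 px: subpixel far away
```

The same mesh that needs its finest clusters at 5 m can switch to a much coarser cut at 200 m without any visible change, which is the whole point of the error-driven LOD cut.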
@SkyMatter00
@SkyMatter00 Ай бұрын
Thanks for the work you put in this !
@markitekta2766
@markitekta2766 Ай бұрын
You are welcome, I'm glad you found it useful :D
@grantkimbell6797
@grantkimbell6797 29 күн бұрын
watched the whole thing, great explanation!
@markitekta2766
@markitekta2766 29 күн бұрын
I'm glad you found use in it, that was my main goal 😃
@aoeuable
@aoeuable Ай бұрын
5:10 Now that Blender version is a blast from the past
@markitekta2766
@markitekta2766 Ай бұрын
Yeah, that slide does not showcase what I am talking about properly, but it does look older, thanks for pointing it out :D
@iamth3r00
@iamth3r00 Ай бұрын
Excellent explanation of such a complex topic. Kudos to you, got a new subscriber!
@markitekta2766
@markitekta2766 Ай бұрын
Very much appreciated, I tried to find simple examples but most of all, putting them into a coherent narrative that people can track and engage with. Thnx 😉
@simonmeszaros2770
@simonmeszaros2770 20 күн бұрын
The most striking fact about how humans perceive detail is that it's limited to the focused area. In CGI we are fighting with something the human brain solved by filtering out 90 percent of the information and processing detail only piece by piece, one at a time. In generated content we have the luxury of using depth of field, which seems natural and has especially become part of film language. Level of detail is similar to depth of field; the two can blend. The difference is that, depending on the form of presentation, level of detail needs to be interactive or partially interactive, as it doesn't follow a script. Or could it? Maybe there is another way to see it. What may be blurred can still hold the scene in context without too much detail.
@markitekta2766
@markitekta2766 20 күн бұрын
Great observation and analysis, thank you for sharing. 😃 A quick follow up - I think foveated rendering is close to incorporating the human aspect of perceiving the world around us in portions of sharp and blurred details, if I understood your comment correctly. If anyone is not familiar - foveated rendering is a technique that, basically, uses eye tracking technology, in a VR headset mostly, to diminish the rendering workload by reducing the image quality in the peripheral vision.
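The core of foveated rendering can be sketched in a few lines: pick a coarser shading rate as a pixel gets farther from the tracked gaze point. This is a hypothetical illustration; the radii and rates below are made-up values, not from any shipping headset.

```python
# Hypothetical sketch of foveated rendering: full shading quality only
# near the gaze point, progressively coarser rates in the periphery.
import math

def shading_rate(px, py, gaze_x, gaze_y, inner_r=200, outer_r=500):
    """Return how many pixels share one shaded sample (1 = full detail)."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= inner_r:
        return 1   # fovea: shade every pixel
    if dist <= outer_r:
        return 4   # mid periphery: one sample per 2x2 block
    return 16      # far periphery: one sample per 4x4 block

# Gaze at screen center of a 1920x1080 display:
print(shading_rate(960, 540, 960, 540))   # 1
print(shading_rate(1300, 540, 960, 540))  # 4
print(shading_rate(1900, 100, 960, 540))  # 16
```

Real implementations map this to hardware variable-rate shading tiles rather than per-pixel branches, but the falloff-from-gaze idea is the same.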
@chrissachs7713
@chrissachs7713 29 күн бұрын
Excellent video! Very well paced. You covered a ton of ground clearly and concisely-not an easy feat, especially for topics like this. I can't wait to see your video about lumen. Subscribed.
@markitekta2766
@markitekta2766 29 күн бұрын
Yeah, I was striving towards that and honestly, learning about all these things was not the hardest part, but coming up with a structure that has adequate pacing, when you provide the reasons for it, present the problems and ideas instead of just facts. I'm glad you noticed all these things and I can't wait to dive deep into Lumen. Thnx for the support 😃
@aderitosilvachannel
@aderitosilvachannel 19 күн бұрын
Maybe somewhere in the mid-range future, eye-tracking tech could be used for high-performance rendering in games. For example, something that renders at high detail only the areas the user is looking at, and renders everything else blurry. This would not be good for recording gaming videos, but could be great for players.
@markitekta2766
@markitekta2766 19 күн бұрын
I think someone commented on that a day or two ago, and I mentioned foveated rendering, which is, I believe, what you were referencing? As you said, great for performance, but it requires a beefier setup and is not so great if you want to record the video :D
@aderitosilvachannel
@aderitosilvachannel 19 күн бұрын
@@markitekta2766 Indeed. I didn't read the previous comments, but it's nice to see that the video is creating this flow of ideas. A few months ago, I don't remember where, I saw a documentary about vision, where a researcher was demonstrating an experiment that uses eye tracking and blurs an image at the area the viewer is looking at. The nice thing was that, for the person, it seemed like the whole image was blurry, although most of the image was detailed. The opposite approach would be nice for rendering, I believe, and I think it is already feasible today. However, most people don't have eye-tracking-capable devices yet, although I believe it could be a huge thing in the future for making highly dynamic UIs and very cool things in gaming.
@markitekta2766
@markitekta2766 19 күн бұрын
@@aderitosilvachannel Very astute observation and since the peripheral vision is, let's say, blurry, it is only logical that if you blur the central vision as well, everything looks off. But finding an optimal solution is a marathon and not a sprint so it will definitely take some time to show 😃
@mariovelez578
@mariovelez578 23 күн бұрын
This was a really great video
@markitekta2766
@markitekta2766 23 күн бұрын
I appreciate the comment, thank you for the support 😃
@mk2k10
@mk2k10 17 күн бұрын
Excellent work!
@markitekta2766
@markitekta2766 17 күн бұрын
I'm glad you found it useful 😃 Thank you
@liquos
@liquos Ай бұрын
I still don’t quite understand the occlusion culling bit - specifically: The Z buffer of the previous frame is used to do a first culling pass, makes sense.. but then you end up rendering the Z buffer of the current frame to do a second pass of culling.. aren’t you just rendering the scene at that point? Like to generate the Z buffer of the current frame, wouldn’t you basically have to push every single triangle to the screen? Only to then cull a bunch of them and AGAIN render all those triangles?
@donovan6320
@donovan6320 Ай бұрын
@@liquos I can actually explain that. So on the GPU you aren't required to use a full pixel shader and the GPU has a fast path for depth assuming you don't do anything funky with the pixels. (Like discarding them or manually setting depth). It's usually referred to as Early Z. Because of this, much less state gets changed, if any at all. The actual drawing to a framebuffer is extremely fast.
@donovan6320
@donovan6320 Ай бұрын
It's shading, blending, post-processing, texturing, GPU state changes. All of those eat up performance. A depth pre pass does none of those. It is only for opaque geometry, alternatively geometry that has transparency enabled for say grass billboards. (But this actually oftentimes permanently disables early Z optimization until the buffers get swapped, so you would do this in a separate pass afterwards as it's less efficient.)
@markitekta2766
@markitekta2766 Ай бұрын
If I understood correctly, you need a certain amount of time to create the Z buffer and a certain amount of cache memory for it. If you wait for the GPU to create the Z buffer you are creating overhead, meaning there is idle time between the CPU and the GPU. So, in order to solve this, we take the previous frame's Z buffer and use that to test all the bounding volumes of the meshes that were determined to be the occluders. Since the occlusion query about what is an occluder can take a lot of time if you test everything, this cuts down on that portion. When you take the previous frame's Z buffer at a lower resolution and compare it against the current bounding volumes, you can create a rough estimate of the current frame's Z buffer, and then just check the new meshes, or the meshes that became visible in the current frame, and test their bounding boxes for occlusion. There is a short text here, medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501 and here www.nickdarnell.com/hierarchical-z-buffer-occlusion-culling/ I think I got a good understanding of it, but would like it if someone could clarify or correct me if I am wrong. :D Sorry for the longer post
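The hierarchical-Z test those articles describe can be sketched in miniature. A hypothetical illustration with a 4x4 depth buffer, using the convention that larger depth means farther from the camera:

```python
# Hypothetical sketch of a hierarchical-Z (HZB) occlusion test: reduce a
# depth buffer with a 2x2 max filter, then reject an object only if the
# NEAREST point of its bounding rect is still behind the FARTHEST
# occluder depth stored over that rect -- a conservative test that can
# never cull something visible.

def downsample_max(depth, levels):
    """Build a small HZB mip chain by 2x2 max-reduction."""
    mips = [depth]
    for _ in range(levels):
        d = mips[-1]
        h, w = len(d) // 2, len(d[0]) // 2
        mips.append([[max(d[2*y][2*x], d[2*y][2*x+1],
                          d[2*y+1][2*x], d[2*y+1][2*x+1])
                      for x in range(w)] for y in range(h)])
    return mips

def is_occluded(mips, level, rect, nearest_depth):
    x0, y0, x1, y1 = rect  # inclusive texel rect at the chosen mip level
    mip = mips[level]
    farthest_occluder = max(mip[y][x] for y in range(y0, y1 + 1)
                                      for x in range(x0, x1 + 1))
    return nearest_depth >= farthest_occluder

# 4x4 depth buffer: a near wall (0.2) covers the left half, right half is open.
depth = [[0.2, 0.2, 1.0, 1.0] for _ in range(4)]
mips = downsample_max(depth, 1)  # mip 1 is 2x2

print(is_occluded(mips, 1, (0, 0, 0, 1), 0.5))  # behind the wall -> True
print(is_occluded(mips, 1, (1, 0, 1, 1), 0.5))  # over the open half -> False
```

Picking a coarse mip whose texel footprint covers the object's screen rect keeps the per-object test to a handful of reads, which is why the whole scene can be retested cheaply against the previous frame's buffer.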
@liquos
@liquos Ай бұрын
@@donovan6320 thank you very much for the explanation guys, very insightful
@michioyukihyou1403
@michioyukihyou1403 25 күн бұрын
fantastic video! thank you for your hard work !
@markitekta2766
@markitekta2766 25 күн бұрын
I'm glad you enjoyed it, it took a lot of time and effort to put together, so thank you for watching. 😃
@Meltyhead
@Meltyhead 13 күн бұрын
We also don’t perceive things well in shadows. They can also be “crushed” a bit.
@markitekta2766
@markitekta2766 13 күн бұрын
Thnx for sharing that. 😃 I also found information that people cannot judge reflections accurately, especially when they are smudged, so there is a possibility to blur or fake things there as well
@Rosenio
@Rosenio Ай бұрын
Awesome work man, thanks for sharing the knowledge.
@markitekta2766
@markitekta2766 Ай бұрын
Thank you for watching, I'm glad you found value in it :D
@gabrielkwong1878
@gabrielkwong1878 Ай бұрын
Amazing explaination, I can understand it, thank you!
@markitekta2766
@markitekta2766 Ай бұрын
I'm glad you found value in it. In trying to prepare the lecture I sometimes spent many hours on a single slide, trying to understand all the nuances behind it and verify it from several sources. But hearing that other people can follow my train of thought really makes it worthwhile 😃 Thank you
@adarshwhynot
@adarshwhynot Ай бұрын
Your explanation is fantastic and I am eagerly awaiting your future explainer videos
@markitekta2766
@markitekta2766 Ай бұрын
Thank you for the support. The next logical one would be about lighting, but it will definitely take some time to explore and prepare. Until then, maybe you can check out the video about virtual texturing, which was basically the concept behind virtual geometry like Nanite, or the one about optimizing the pipeline?
@mouloudagaoua
@mouloudagaoua 29 күн бұрын
good job this video is a gem!
@markitekta2766
@markitekta2766 29 күн бұрын
Thank you, much appreciated 😃
@Meltyhead
@Meltyhead 13 күн бұрын
Congrats, you created vector Pointlism.
@markitekta2766
@markitekta2766 13 күн бұрын
That's an interesting take on it. In some cases it can look like that, if you go back to the very basics, but it is still fairly different 😃 You can always elaborate on what you meant, since I'm not sure how you combine the two?
@BogusMogus123
@BogusMogus123 Ай бұрын
Imagine showing this tech to Michelangelo
@markitekta2766
@markitekta2766 Ай бұрын
He'd probably say, OK so you can digitize sculptures now, let's do that, I'm better with a chisel than with a mouse 😀
@TomLis-u3o
@TomLis-u3o Ай бұрын
Michael Angelo :D
@NicolaFerruzzi
@NicolaFerruzzi Ай бұрын
I hate to be that guy .. but it's spelled Michelangelo Buonarroti (thanks for fixing it)
@BogusMogus123
@BogusMogus123 Ай бұрын
@NicolaFerruzzi It's way different in my country. His name is "Michał Anioł" in polish, which translates to Michael Angel, hence the mistake.
@timmygilbert4102
@timmygilbert4102 Ай бұрын
​@@markitekta2766I think he would be amazed because it's a moving image. The legend says he angrily smashed his hammer, saying "now move", after finishing his Moses. He wasn't just trying to make a lifelike sculpt, he was trying to make life; he was pissed that even achieving perfection wouldn't bring the sculpture to life
@SuperRed126
@SuperRed126 Ай бұрын
But Nanite performs much slower; wasn't that proven by Threat Interactive? Performance-wise, Nanite is useless
@markitekta2766
@markitekta2766 Ай бұрын
If I understood correctly, it depends on when you use it. Under a specific polygon count number, it has diminishing returns, meaning you lose more time to prepare everything and render, when compared to the traditional pipeline. The entire Valley of The Ancient Demo wouldn't work with a traditional pipeline, where you had a room with 500 sculptures, each one having 33 million polygons. But for a simple architectural building or urban site with little details, Nanite would not produce better results. If you have additional resources, please share them here, I'd like to find out more about the topic :D
@FUnzzies1
@FUnzzies1 Ай бұрын
Don't listen to that moron. Threat Interactive is the farthest thing from an authoritative source on the subject.
@Igivearatsass7
@Igivearatsass7 Ай бұрын
@@markitekta2766 It's not about polycount, it's about overdraw. Threat Interactive showed a 6 million poly mesh running 2x faster without LODs or Nanite. It gets to the point where all the Nanite details just become noise which needs TAA...which just blurs the detail anyway. Nanite is just a tool for devs who don't want to bake details.
@markitekta2766
@markitekta2766 Ай бұрын
@@Igivearatsass7 Thanks for the clarification. The things I gave here are based on theory; if there are practical examples that show otherwise, we should just apply a scientific approach, perhaps. But 16 billion polygons cannot go through a classical pipeline, right? Also, overdraw happens if you kitbash a scene or use aggregate geometry, otherwise the culling process removes the unwanted clusters?
@126sivgucsivanshgupta2
@126sivgucsivanshgupta2 Ай бұрын
Threat Interactive is not a good source for technical details; he has no background in computer graphics and has been trying to do 1-1 comparisons when things aren't 1-1 comparable. I know this because I am very active in a graphics programming Discord server and we have seen him beg for any numbers that may support his claims. Nanite overall scales better than non-Nanite rendering; if you try to render a few hundred thousand meshes, Nanite won't perform as well. Nanite shines when you want to render millions of meshes.
@TeamDman
@TeamDman 21 күн бұрын
Very nice explanation, thank you!
@markitekta2766
@markitekta2766 21 күн бұрын
Glad you found it helpful, I'm always trying to present it in an easy-to-digest way. 😃
@TeamDman
@TeamDman 21 күн бұрын
@markitekta2766 very commendable information density and clarity, adding to my list of reference videos 🧑‍💻
@markitekta2766
@markitekta2766 21 күн бұрын
@@TeamDman I appreciate that, thnx
@MarkJohnsonMJ-i8i
@MarkJohnsonMJ-i8i Ай бұрын
Thank you for another great video, much clearer now after introducing all the major concepts 😉. One question though - is the occlusion volume used for object-based occlusion and the bounding volume used for image-based occlusion, like the Z buffer?
@markitekta2766
@markitekta2766 Ай бұрын
Great observation. That is my understanding as well. In one case, we use an occlusion volume, which is snugly fit inside the boundaries of our mesh so we can always be sure that we are not occluding more than necessary. In the other case, when we are using bounding volumes, we want to check the entire mesh to know if even one part is visible or occluded, so we can overlap it with the Z buffer. Thank you for bringing it up 😀
@kthxbye1132
@kthxbye1132 Ай бұрын
real good explanation.
@markitekta2766
@markitekta2766 Ай бұрын
Thank you, that means a lot, I was looking for a way to make something that was complex to me, simple and relatable, glad it came through 🙂
@jacekb4057
@jacekb4057 Ай бұрын
Great video, thank you! A lot of science involved behind a game engine.
@markitekta2766
@markitekta2766 Ай бұрын
Sure is, kinda like not knowing what happens under the hood of a car, yet it still goes if you know how to use it :D
@MissPiggyM976
@MissPiggyM976 24 күн бұрын
Great, thanks!
@markitekta2766
@markitekta2766 24 күн бұрын
You're welcome and thank you for checking it out.
@saturn7_dev
@saturn7_dev Ай бұрын
Was a very good summary video. There is also far-plane clipping (culling), not done by Unreal, with exclusions for permanent objects. I'm trying to get this exact thing to work myself in another engine.
@markitekta2766
@markitekta2766 Ай бұрын
Yeah, there was also portal culling which is really an interesting thing, even though it can require manual setup. I found a lot of understanding in the series from thebennybox here kzbin.info/www/bejne/bqnKk2CQmL-Jb9Usi=yWWAG79ulUeRl8ry
@billgates3699
@billgates3699 28 күн бұрын
‼️Good job. That’s a lot of words.
@markitekta2766
@markitekta2766 28 күн бұрын
Sure is, but somehow it always ends between 25 to 30 minutes of non-stop talking 😃 Thank you
@willianfrantz8009
@willianfrantz8009 21 күн бұрын
great video explanation
@markitekta2766
@markitekta2766 21 күн бұрын
Thanks for the support, I'm glad you found it useful! 😃
@BulletForceKngz
@BulletForceKngz 7 күн бұрын
It is too soon to be using film-grade tech. The game consoles can barely keep up 30 fps, and on top of that we are also heading a bit backwards in terms of graphics, since we are sacrificing textures for GI, Nanite and whatever other stuff we shouldn't have.
@markitekta2766
@markitekta2766 7 күн бұрын
I understand what you are saying and thank you for sharing 😃 I think you are right, making games should definitely be optimized in a much greater scope than just giving artists the tools like Nanite and Lumen, something perhaps they do not fully understand how to use. However, if you want to use photogrammetric models of interior or exterior of buildings, the combination of Nanite and Lumen, with its GI is something that can contribute to a better experience. So in a sense, it is a tool that is not to be applied at its best in all situations.
@macilvoy
@macilvoy 11 күн бұрын
This is gold
@markitekta2766
@markitekta2766 11 күн бұрын
I'm glad you found it useful 😃 Thnx
@mrshodz
@mrshodz Ай бұрын
Such a great explanation.
@markitekta2766
@markitekta2766 Ай бұрын
Thank you, I really tried to explain it to myself, and I'm very picky about all the nuances... At least up to a certain point :D
@Epicdudebro
@Epicdudebro 6 күн бұрын
Great video
@markitekta2766
@markitekta2766 6 күн бұрын
Thnx for the support, glad you found it useful 😉
@KynesisCoding
@KynesisCoding Ай бұрын
Enjoyed the vid, watched the whole thing even tho I don't render complex stuff
@markitekta2766
@markitekta2766 Ай бұрын
I'm glad you found it interesting, I also find myself listening and learning about things I don't use generally in life. But everything connects in the end, I guess 😉
@Mezurashii5
@Mezurashii5 3 күн бұрын
So what I'm hearing is Nanite is pretty niche and more of a beta version of what might actually become useful to the majority of devs in the far future.
@markitekta2766
@markitekta2766 3 күн бұрын
Thnx for sharing your insight. 😃 I think it is very interesting how all the community members hear something different about Nanite or have their own interpretations. Even though that can lead to a lot of polarized opinions and harsh debates, I think that is the greatest thing we have when it comes to expanding our perspective and takes on different things. I think you have a point: Nanite seems to be niche in the sense that it has certain limitations for its use and that it can solve a certain set of problems, but not all of them, so it's important to use it according to what the requirements and specs are, I guess 😃
@Mezurashii5
@Mezurashii5 3 күн бұрын
@@markitekta2766 Well that's the thing, the limitations of Nanite seem to be set in stone. You can always get some extra performance with extra effort with traditional techniques, but as I understand it, Nanite just kinda has overdraw that you have to live with unless you're willing to completely alter your scene (which isn't a solution imo)
@markitekta2766
@markitekta2766 2 күн бұрын
@@Mezurashii5 I understand your point of view and based on my research I'm not sure that Nanite inherently has the overdraw issue, just when the geometry is such that it overlaps, talking about opaque materials. And yeah, changing the way scenes are created in order to have these benefits, imo, does not enrich the already fruitful basket of different tools and approaches that are present within the artistic and developmental side of the computer graphics community 😃
@bboysil
@bboysil 20 күн бұрын
excelent video
@markitekta2766
@markitekta2766 20 күн бұрын
I'm glad you liked it, thanks for the support 😃
@anuplonkar2198
@anuplonkar2198 Ай бұрын
That information is really important. Thanks for making it
@markitekta2766
@markitekta2766 Ай бұрын
Thank you for pointing it out 😄 I also believe that these things are important, not only because you gain the knowledge, but because you also gain a perspective on where it all came from and where it can possibly go.
@TommyLikeTom
@TommyLikeTom Ай бұрын
I don't understand why people obsess over David when the Egyptians were sculpting figures 5 times bigger thousands of years before. There were statues called Colossi, some were made of bronze. It seems to be some strange bias towards modern European culture. Do you really think ancient people were incapable of sculpting or observing veins and muscles? It's so silly.
@markitekta2766
@markitekta2766 Ай бұрын
Thank you for pointing it out and sharing your opinion :D I just needed an example for the intro and I believe that any example would have sufficed. I found this more relateable to me and the audience I thought I have, but if you have some examples of ancient sculptures, please share them, I'd like to learn more about it :D
@sadfrug
@sadfrug Ай бұрын
They're big, yea. But that's about all we can see. They're so damaged from time that any detail they could've had is gone. Ancient greek statues have stayed in much better condition and so are more appealing for most people
@khoavo5758
@khoavo5758 29 күн бұрын
What exactly is the problem? Did anyone say ancient people couldn’t observe veins and muscles?
@nelathan
@nelathan 12 күн бұрын
Your script is very high quality, just your reading is quite forced and repetitive.
@markitekta2766
@markitekta2766 11 күн бұрын
Thanks for your feedback. Yeah, having a script helps a lot; you can always check the first video in this series - What is Extended Reality - which lasted 28 minutes without a script, versus all the rest that were around 14 minutes when scripted. I agree that talking when you know the subject feels more natural; however, it takes a huge amount of time, especially when covering a couple of advanced topics that all serve one purpose. I'll try to improve in the future 😃
@Ryder7223
@Ryder7223 27 күн бұрын
Would it be more optimised to render shapes using an equation that represents them instead of many smaller triangles? Like the equation for an ellipse, but in 3D, for the piggy bank's general body shape. I don't know if it can be rendered this way, but in my mind it should allow virtually infinite smoothness 6:11
@markitekta2766
@markitekta2766 27 күн бұрын
I think this is a great observation. To answer part of it - there is a type of 3D model based on NURBS, Non-Uniform Rational B-Splines, basically curves that are mathematically defined to be smooth (not to get into the geometrical aspect of it). However, I believe the calculation aspect would be problematic here, because you need a finite or discrete model to do the calculations instead of an infinitely smooth one (think of meshes like raster images with pixels and NURBS like vector graphics - one gets jagged edges as you zoom in, while vector shapes always stay crisp). I'm thinking that is the main reason, but I'm happy to hear other opinions 😃
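To make the raster-vs-vector analogy concrete: even the simplest smooth parametric curve has to be sampled into a finite polyline (just as a smooth surface has to be tessellated into triangles) before a rasterizer can draw it. A minimal sketch with a quadratic Bezier, the simplest relative of a NURBS curve (the control points here are arbitrary):

```python
# Hypothetical sketch: an analytically smooth curve still has to be
# discretized into straight segments before rasterization -- the finer
# the sampling, the closer the polyline hugs the true curve.

def bezier2(p0, p1, p2, t):
    """Exact point on a quadratic Bezier at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
            u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1])

def tessellate(p0, p1, p2, segments):
    """Approximate the smooth curve with `segments` straight edges."""
    return [bezier2(p0, p1, p2, i / segments) for i in range(segments + 1)]

p0, p1, p2 = (0, 0), (1, 2), (2, 0)
coarse = tessellate(p0, p1, p2, 2)   # 3 points: visibly faceted
fine = tessellate(p0, p1, p2, 64)    # 65 points: looks smooth on screen
print(coarse)  # [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
```

The curve itself never changes; only the sampling density does, which is exactly the trade-off LOD systems exploit: sample coarsely when the error would be subpixel anyway.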
@wydua
@wydua Ай бұрын
The only good thing about Nanite is that it saves time. But it should not be like that. Also, for stuff like terrain you really could just develop a retopology tool that remeshes it once and creates a full quad mesh that is later easy to adjust, either automatically or by hand.
@wydua
@wydua Ай бұрын
Hard to tell. I am a 3d artist not a game dev. What you try to do automatically, I just do by hand.
@markitekta2766
@markitekta2766 Ай бұрын
@@wydua Thanks for sharing your insights. As they say - good things take time, so if manual labor and adjustment with tedious tweaking is what gets great performance down the line then it is worth it, right 🙂
@wydua
@wydua Ай бұрын
@@markitekta2766Yeah. It honestly really bugs me that these days games are just made as fast as possible because it's cheaper. It seems they forgot that you can't rush art.
@markitekta2766
@markitekta2766 Ай бұрын
@@wydua Yeah, we live in times where everything is needed as soon as possible, but when you take your time, you can produce something wonderful :D
@wydua
@wydua Ай бұрын
@@markitekta2766 :D
@perfectionbox
@perfectionbox Ай бұрын
i wonder if four dimensional beings consider our 3D geometry to be flat textures
@markitekta2766
@markitekta2766 Ай бұрын
That is a great observation. There is a book called Flatland: A Romance of Many Dimensions by the English schoolmaster Edwin Abbott Abbott, who explores this very notion of lower-dimensional creatures being visited by a higher-dimensional creature. It is understandable and relatable, because it is a square getting to know a sphere. You can read about it or watch a short video on KZbin regarding this topic; it kinda puts everything into a different scope of thinking.
@peter486
@peter486 Ай бұрын
The key is how to instance objects.
@markitekta2766
@markitekta2766 Ай бұрын
I think so too; the more instances you have, the fewer items in the tree, but the connections can become more complex?
@miranteazi
@miranteazi Ай бұрын
Yes, now I see, it is magic!
@markitekta2766
@markitekta2766 Ай бұрын
As Arthur C. Clarke said - Magic's just science that we don't understand yet. And we have more science to implement if we want to understand this magic 😂
@cube2fox
@cube2fox Ай бұрын
Interesting. I'm just not sure why all this occlusion culling still can't properly avoid overdraw.
@donovan6320
@donovan6320 Ай бұрын
@@cube2fox because occlusion culling is hard and it's not perfect. The hidden surface determination problem is a bit of a rough problem to solve.
@markitekta2766
@markitekta2766 Ай бұрын
If my understanding is correct, when you get significantly far from the overlapping geometry, the distance between the triangles or clusters becomes so small that it can produce artifacts like those that occur when two triangles overlap in any general setting during modeling. Nonetheless, I'd like to hear other opinions regarding this question :D
@ThePlayerOfGames
@ThePlayerOfGames Ай бұрын
​@@markitekta2766 this seems like the best way of explaining it; you cram enough triangles into the scene that you can either select relaxed culling and get significant overdraw, or strict culling and get artifacting as triangles are removed to save time - there are so *many* triangles that you eventually delete important ones through the culling. Nanite makes sense on paper to sell GPUs but ultimately is just a nerf overall.
@markitekta2766
@markitekta2766 Ай бұрын
@@ThePlayerOfGames Got it, thank you making it clearer :D
@Chisureme12
@Chisureme12 Ай бұрын
0:35 Wener
@markitekta2766
@markitekta2766 22 күн бұрын
Not sure what you mean, so if you want you can elaborate 😉
@logical_mania
@logical_mania Ай бұрын
Moses was not that swole lol was mana actually whey
@markitekta2766
@markitekta2766 Ай бұрын
😂 I guess that's why art is subjective, his own interpretation was like this
@mohammaddh8655
@mohammaddh8655 4 күн бұрын
the more i learn the more i want
@markitekta2766
@markitekta2766 4 күн бұрын
That's usually how it goes, and somewhere along the way you tend to realise that knowledge is power in science. Following this philosophy in life, you tend to realise the opposite, that ignorance is bliss. 😃
@CodyDBentley
@CodyDBentley Ай бұрын
good stuff!
@markitekta2766
@markitekta2766 Ай бұрын
Thnx, appreciate it 😉
@gibbaltaccountples9895
@gibbaltaccountples9895 Ай бұрын
What's going on in blender at kzbin.info/www/bejne/iIOudKSjmNmrgtU ? I've spent weeks trying to create a similar dynamic LOD system in blender and would love to see how somebody else has implemented it.
@markitekta2766
@markitekta2766 Ай бұрын
I just found the video to prove a point for what I was saying, but I'd like to see if anyone can offer more information about this
@Wittbore
@Wittbore Ай бұрын
You could use merge by distance to generate the LODs and a switch node to change LODs at certain distances, checking with your camera (self camera object) in geo nodes. Someone could probably do it with less manual work 😅
@markitekta2766
@markitekta2766 Ай бұрын
Yeah, I saw that CLOD - continuous level of detail - uses edge collapse to generate fewer vertices as you move away, but it seems to cause issues as well, i.e. it does not solve all the problems
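The "merge by distance" idea above can be sketched as a grid-snapping weld (a hypothetical simplification for illustration; Blender's actual operator compares pairwise distances rather than grid cells, but the effect on vertex count is similar):

```python
# Hypothetical sketch of distance-based vertex welding for LOD
# generation: snap vertices that fall into the same grid cell together,
# which collapses short edges and reduces the vertex count as the cell
# size (i.e. the intended viewing distance) grows.

def merge_by_distance(vertices, cell):
    """Weld vertices whose positions land in the same `cell`-sized bucket."""
    remap, merged, index = {}, [], []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in remap:
            remap[key] = len(merged)
            merged.append((x, y, z))
        index.append(remap[key])  # old vertex index -> new vertex index
    return merged, index

verts = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 0.0, 0.0), (1.02, 0.0, 0.0)]
near, _ = merge_by_distance(verts, 0.001)  # tiny cells: nothing merges
far, _ = merge_by_distance(verts, 0.1)     # larger cells: close pairs weld
print(len(near), len(far))  # 4 2
```

The returned index map is what lets the triangle list be rewritten to reference the welded vertices; triangles whose corners collapse onto one another degenerate and can be dropped, which is where the polygon reduction actually comes from.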
@FrancescoDeo_
@FrancescoDeo_ Ай бұрын
00:50 ~ T H E D A V I D ~
@markitekta2766
@markitekta2766 22 күн бұрын
Not sure what you mean with the comment 😃, but got me thinking if David would be better captured digitally by using photogrammetry or digital sculpting.🤔
@netron66
@netron66 20 күн бұрын
Rather, we also need game developers to go back to basics and see how to reduce unnecessary meshes (like in a racing game, or the high buildings in an RPG: we can literally reduce background objects like trees and buildings to a simple mesh with a high-resolution texture skin)
@markitekta2766
@markitekta2766 19 күн бұрын
I think the concept of "impostors" comes to mind when you talk about that - where you use a simple quad mesh and apply an image on it (like of a tree) and hence remove the abundant vertex count, but also increase the memory needed for the texture. I guess there is always give and take; you have to pay one way or the other 😃
@xymaryai8283
@xymaryai8283 20 күн бұрын
tangential comment, i know why normal maps are called that, but we really should have come up with a better name, like vector textures, or Vecstures or Vextures
@markitekta2766
@markitekta2766 19 күн бұрын
That's an interesting observation. I guess it goes back to the beginning of computer graphics and it stuck. Like when you have a normal vector, that's what exists in geometry as well. It is only logical to call any texture or mapping procedure associated with it a normal map. Normals are very specific vectors, so using just vec or vex would perhaps be misleading, especially in coming up with a new name; not sure what others think 🤔
@Gundir_Cap
@Gundir_Cap Ай бұрын
Good content. Perhaps just one wish: that more optimization would be shown, before/after
@markitekta2766
@markitekta2766 Ай бұрын
If I understood correctly, you are looking for specific numbers before and after using it. That is a good suggestion, I'll try to post something soon enough ;-)
@Gundir_Cap
@Gundir_Cap Ай бұрын
@@markitekta2766 Well, maybe you could try it on a small project. I'm making my own game, and it's a hard choice how to proceed: use Nanite or LODs. If the geometry is simple and flat, there seems to be no point converting it to Nanite. But as practice has shown, foliage is worth doing with Nanite; it gives a good performance boost. In general, what I want to say is that it's unclear how to do things right, there is so much to take into account; without examples it's sometimes just intuition :)
@DJBLVZD
@DJBLVZD 18 күн бұрын
Nanite runs at like 15fps for me
@markitekta2766
@markitekta2766 17 күн бұрын
Thank you for sharing the info. It would be nice to get more information about your setup and the scene complexity so we can have more insight into why this happens. Of course, I'm not a part of the Epic team gathering data, but I am curious whether this tool performs as it should in the situations where it should 😃
@nowonmetube
@nowonmetube Ай бұрын
You placed the video recording of yourself in the wrong spot 😩 What's the point of a presentation when you overlap important information on it?
@markitekta2766
@markitekta2766 Ай бұрын
You noticed that right, and it frustrated the hell out of me too, since the presentation and the recording of me were recorded together, so I couldn't edit it out later. You can see that there is a place at the bottom where I placed my camera feed, but for some reason the program moved it up there 😒 Sorry for the mistake
@nowonmetube
@nowonmetube Ай бұрын
@markitekta2766 yes I noticed the big gray rectangle in the bottom right 😩 thought there's a perfect spot for the camera feed, why not put it there? 😂
@markitekta2766
@markitekta2766 Ай бұрын
When I started the recording, I placed my camera feed there and the screen was recorded like that. However, when I finished recording, I noticed that the bottom right was blocked like that and the camera feed was glued to the upper right corner, so I had a double fail :/ I guess I'll have better luck next time, but I agree, it is annoying. In the previous video, I placed the camera feed in the bottom right and it overlapped with text from the image, so I had to manually place text over it. But you don't know what you don't know, and hence you learn by doing, I guess 😂
@nowonmetube
@nowonmetube Ай бұрын
@markitekta2766 I see. Well I don't know which software you use, but with obs for example you get separate files for the recordings 😊
@markitekta2766
@markitekta2766 Ай бұрын
@@nowonmetube Great suggestion, I'll be sure to check it out and improve upon this, much appreciated for drawing my attention to this 😃😃
@Maro_Experience
@Maro_Experience Ай бұрын
Did you ever breathe while doing this video?? I haven't seen you pause once, holy 😂😂😂
@markitekta2766
@markitekta2766 Ай бұрын
😂 I sighed a lot, actually, because there is so much to say at appropriate times, but I edited it out, thnx for bringing my attention to it :D
@Kimeters
@Kimeters Ай бұрын
Imagine what Michelangelo would do with zbrush.
@markitekta2766
@markitekta2766 Ай бұрын
Someone asked the same thing and I said, he would probably ask for people to do photogrammetry and digitize what he knows to do best 😀
@nawnaw4709
@nawnaw4709 18 күн бұрын
This is an example of a good technology used by devs to hide incompetence
@markitekta2766
@markitekta2766 17 күн бұрын
That is an interesting way of phrasing it, thanks for sharing your thoughts. I've been responding to all the comments here, and I think we are all in agreement that there is no end-all-be-all tool, just a lack of knowledge about which tools to use, which is why experience and knowledge need to work together, I think 😃
@Cinnamon1080
@Cinnamon1080 15 күн бұрын
That's an odd way to describe it considering the problems it is attempting to solve. Developers spend huge amounts of time making these high poly assets they use to bake into normal maps and then they just.... never use those high poly assets. They just post them on ArtStation at the end of development. How is cutting down on the time spent "hiding incompetence"?
@markitekta2766
@markitekta2766 14 күн бұрын
@@Cinnamon1080 Thanks for sharing your thoughts. It seems that high-poly, movie-quality and photogrammetric assets became important to implement in real time at some point, and hence generated a necessity for a technology or approach that can accommodate that. But since compromises have to be made in any approach, there is always give and take. "Hiding incompetence" could perhaps mean not addressing all the possible approaches to find the optimal solution; you just pick the solution to the set of criteria you want to satisfy, and time is always the most important one of them 😃
@AvalancheGameArt
@AvalancheGameArt Ай бұрын
With a proper 2014-style workflow with projection and baking, you can get all those minute details into a TANGENT-SPACE normal map. You don't need Nanite and drawing subpixel triangles. Drawing every triangle by itself takes more time lmao. """"optimization""""
@markitekta2766
@markitekta2766 Ай бұрын
I think there was a talk, perhaps about Lumen, when they said that since one pixel can only display one triangle, the exploration of subpixel triangles is not the way to go. But I agree that if you take the time to optimize your scene with manual work and not automated approaches, you can gain more than letting the system handle it for you ;-)
@AvalancheGameArt
@AvalancheGameArt Ай бұрын
@@markitekta2766 Also, because PCs today are a lot faster, you can get away with overdraw most of the time. People should work on mid-poly nowadays with a global GI and PBR, and you would see 100% resolution rendering with some multisampled anti-aliasing in a crystal-clear picture on your screen. Instead, let's put boxes with billions of triangles just because... let's lower the resolution because somehow my game runs at 2 fps, upscale the frame with AI, and "fix" all the artifacts with some vaseline, I mean TAA (which is also a cancer on performance; > 1 ms is still a lot, you know how many things you can do with that "infinite" amount of time?)
@AvalancheGameArt
@AvalancheGameArt Ай бұрын
(rhetorical question, you seem to understand this pretty well, brother)
@markitekta2766
@markitekta2766 Ай бұрын
@@AvalancheGameArt Valid points, thank you for putting it out there ;-)
@roklaca3138
@roklaca3138 Ай бұрын
Just buy a NASA supercomputer to run it. Such a disconnect, with devs owning $10k workstations expecting everyone who wants to run this to have the same PC specs... Good luck selling this to the very small market of high-end PC owners
@markitekta2766
@markitekta2766 Ай бұрын
You might be on to something there... People do need a capable enough graphics card to process all of this, RTX can handle the job but it can get expensive. But maybe we don't need a supercomputer, if we have the RTX? 😆
@roklaca3138
@roklaca3138 Ай бұрын
@markitekta2766 i guess if you own at least 3080 and up then yes, 3060 and 4060 on the other hand not so much
@markitekta2766
@markitekta2766 Ай бұрын
@@roklaca3138 I like that you provided first-hand experience, that helps out a lot. Yeah, expensive equipment can be limiting, but even the "supercomputers" of half a century ago performed worse than a smartphone, if I am not mistaken. The point is, in time, perhaps all of this will become affordable ;-)
@roklaca3138
@roklaca3138 29 күн бұрын
@@markitekta2766 I see my GPU struggle with these over-the-top demanding features that, like I said, are useless for us on cheaper GPUs, but what do I know, it's a rich PC gamer's hobby
@YayDragons
@YayDragons Ай бұрын
I read the title as "why"
@markitekta2766
@markitekta2766 Ай бұрын
:DD I guess that would have gotten even more comments here. Some would say why not, others would go for "because of realistic depictions", while some would say we don't, we can use less performance-heavy approaches. Diversity of opinions based on reason and fact is always welcome to help development prosper :D
@HedgehogGolf
@HedgehogGolf Ай бұрын
Are you planning on crediting SimonDev for literally all of the animations?
@markitekta2766
@markitekta2766 Ай бұрын
Thank you for pointing this out. I agree that giving credit is an important part of creating an overview presentation, which is why I gave credit in the video under each of the animations or visualizations as best I could. However, if you believe I should do it in any other way, I am open to suggestions. As for the remark about all the animations, I think I used his great examples in the portion of the video regarding culling and optimization, from which I learned a lot, from around the 19-minute mark to about the 23-minute mark. This is not written as an excuse, just a way to get the facts straight. 😃
@HedgehogGolf
@HedgehogGolf Ай бұрын
@@markitekta2766 I suppose the part I take issue with is that basically every graphic you show is just from someone else. And everything you say seems to just be paraphrased from the source material. Which isn't necessarily a bad thing when done in moderation, but it just seems like you do this to excess. even ignoring that, I feel like you could at least use the papers themselves as sources. You can also put citations in the description or compile them in a Google Doc and link to that in the description. That way people don't have to search up the video titles or go digging through the Blender Stack Exchange to find the original discussion that the GIFs were made for. Or if you made your own then you wouldn't have to deal with a mishmash of vastly different UI versions and bad resolutions. For example, at 5:01 you have a citation for the high-poly/low-poly GIF, but the article is not trivial to find. At the very least you should put the website it's from (Treehouse), and if you actually link to all your sources you eschew the whole issue entirely.
@STANNco
@STANNco Ай бұрын
I'm not deep into Unreal, but it's cool to see comments about Threat Interactive. Let's all work together to find the best solutions
@markitekta2766
@markitekta2766 Ай бұрын
I agree, threat interactive seems to have stirred the pot with this Nanite application issue and bringing everyone to discuss it is definitely a win, regardless of what the outcome is
@themeangene
@themeangene Ай бұрын
I'm working on a project where the depth buffer is used to find polygons at a z distance greater than x, generate a very simple LOD for multiple objects at once (to reduce draw calls), and then combine the materials into a "grouped" mip map. Imagine a scene with a castle in the foreground and trees & mountains in the background. My code, when done, would combine the mountains and trees into a reduced LOD and then merge the materials into a single material. I've been struggling with this second part. I've been thinking about some of these questions for years. On personal projects I have gone back to UE4 because UE5 has been terrible for optimization.
@OverJumpRally
@OverJumpRally Ай бұрын
So... HLOD?
@markitekta2766
@markitekta2766 Ай бұрын
Thanks, themeangene, for sharing. Yeah, I thought of HLOD right off the bat, which creates atlas materials. I was asking the same thing about Nanite: if it combines clusters and triangles, why not materials? Especially if we have virtual textures that can help show only the visible parts. But perhaps preparing this atlas and the memory cost would impact performance, which is why they stuck with the traditional pipeline for these.
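The grouping step discussed in this thread (distant objects merged into one reduced-LOD batch with a shared material, near objects left alone) can be sketched in a few lines. This is a toy illustration in plain Python, not engine code; the data layout and threshold are my assumptions:

```python
import math

def split_by_distance(objects, camera, z_threshold):
    """Partition objects into near (drawn individually) and far
    (candidates to merge into one combined LOD + atlas material)."""
    near, far = [], []
    for name, position in objects:
        dist = math.dist(camera, position)
        (far if dist > z_threshold else near).append(name)
    return near, far

scene = [("castle", (0, 0, 10)),
         ("tree", (5, 0, 200)),
         ("mountain", (0, 0, 900))]
near, merge_batch = split_by_distance(scene, (0, 0, 0), z_threshold=100)
# near -> ["castle"]; merge_batch -> ["tree", "mountain"]
```

In a real implementation the distance test would come from the depth buffer as described above, and the hard part remains baking the merged objects' materials into a single atlas.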
@elchippe
@elchippe 9 күн бұрын
That is why humans invented normal and bump maps.
@markitekta2766
@markitekta2766 8 күн бұрын
Thanks for sharing your thoughts. 😃 Humans are great, I believe, when it comes to solving problems. When they wanted more detail in their computer-generated scenes but did not have enough time, memory, or computational power, they invented normal and bump maps. I think humans similarly invented Nanite, to solve the issues that bump or normal maps have, like manual work, memory consumption and the like. I like to know what tools there are so I know which ones to apply on a per-case basis 😃
@elchippe
@elchippe 8 күн бұрын
​@@markitekta2766 Really? I have been playing games for quite a while and have even worked on some as a hobby, but normal mapping feels quite inexpensive for GPUs compared to modern tessellation techniques like Nanite. NM and BM, IMO, feel like mature techniques that don't require a lot of added geometry to make something look detailed. Of course, making good-looking normal maps is time-consuming, and I understand the attraction of tessellation techniques like Nanite because they generate a lot of geometry on the fly, but that added geometry, I think, could be taxing on GPUs. I don't know, maybe I am wrong. It could be that it's modern lighting techniques, and not Nanite, that are causing the performance issues in modern games.
@markitekta2766
@markitekta2766 8 күн бұрын
@@elchippe Thank you for the great response. I haven't been working on any games, but have been using bump and displacement maps in architectural visualizations and I agree that they are an elegant approach to achieving greater detail, diminishing time necessary for modeling something in great detail. I guess, times are different today and since everything is available and needed as soon as possible, people opt for most affordable approaches, not the best suited. I start by thinking I might be wrong as well, and I think that is a great approach in research and in life, because it leaves room to get more information. As they say - if you only do what you do, you will never be more than you are. 😃
@attractivegd9531
@attractivegd9531 28 күн бұрын
9:05 it's spelled wrong: trivially.
@markitekta2766
@markitekta2766 28 күн бұрын
Thanks for the comment, I did not notice that. That image is a snapshot from the original lecture, and even though proper grammar is important, in the end if we understand each other, that counts as something? 😃
@fishnpotatoes
@fishnpotatoes Ай бұрын
Would you be able to credit SimonDev for your use of their graphics in the culling section? It appears you took some visualizations from this video of theirs: kzbin.info/www/bejne/eXm8qZ2mjsqjla8
@markitekta2766
@markitekta2766 Ай бұрын
Thank you for pointing it out. This was brought to my attention in another comment as well. I agree that giving credit is important which is why I gave a reference under each of the visualizations and animations that I used in this overview. If you think I should do this in a better way, since I appreciate SimonDev's work and what I have learned from his videos, I'm open to suggestions.
@goob8945
@goob8945 Ай бұрын
⁠@@markitekta2766I would put a link to their video in the description too or as a pinned comment
@DefleMask
@DefleMask Ай бұрын
We need full raytracing. GPUs with hundreds of thousands of RT cores
@panjak323
@panjak323 Ай бұрын
Number of RT cores is hardly the problem.
@markitekta2766
@markitekta2766 Ай бұрын
That is an interesting observation. Currently we have, if I'm not mistaken, over 10,000 cores in a GPU chip, each capable of running 3 billion operations every second. But still, if the pipeline is not optimized, a simple scene can cause latency or display issues. I always return to the Jeff Goldblum quote from Jurassic Park: your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
@DefleMask
@DefleMask Ай бұрын
@@markitekta2766 I mean, for a large scene (10+ billion triangles), raytracing with 100 million rays should be much faster than rasterizing the entire scene.
@markitekta2766
@markitekta2766 Ай бұрын
@@DefleMask I can see the logic in this, but I guess in the end we still have to rasterize it for display purposes, so perhaps they are trying to kill two birds with one stone, even though it can be slow at times?
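For a back-of-envelope sense of the budget mentioned earlier in this thread, take the quoted figures as rough assumptions (around 10,000 cores, 3 billion operations per second each) and divide by a 60 fps frame:

```python
cores = 10_000          # assumed core count (from the comment above)
ops_per_core = 3e9      # assumed operations per core per second
fps = 60                # target frame rate, ~16.7 ms per frame

ops_per_frame = cores * ops_per_core / fps      # 5e11 ops per frame
pixels_1080p = 1920 * 1080
ops_per_pixel = ops_per_frame / pixels_1080p    # roughly 240,000 ops per pixel
```

Even that enormous theoretical headroom evaporates quickly in practice, because memory bandwidth, synchronization, and pipeline stalls, not raw arithmetic, are usually the limiting factor.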
@zoltanrobertlovas4672
@zoltanrobertlovas4672 Ай бұрын
Except both nanite and lumen look horrid in production with numerous artefacts, jittering and smeared images on movements. I'm not saying it's not a great tech and great progress, but without their myriad of issues fixed they are pretty much pre-alpha and should not be used in production. Source: every single production use-case of lumen and nanite out there.
@markitekta2766
@markitekta2766 Ай бұрын
Thank you for sharing, I was not aware of that. When I saw the Nanite video 3-4 years ago, I had no idea how it worked or why. I think I have a better understanding now, but not about its use in practice. If anyone has any examples, it would bring a new note to the discussion section, perhaps?
@alexanderdouble
@alexanderdouble Ай бұрын
​sadly, Nanite isn't faster than using basic LODs, but it was promoted as such in many of Epic's videos. It produces overdraw that wastes GPU computational resources. @markitekta2766
@K3Techs
@K3Techs 20 күн бұрын
I was aware of Lumen causing ghosting and after-effects in lighting as it accumulates samples over time, but Nanite? I thought the issues you describe were strictly linked to temporal image reconstruction and denoising techniques, not Nanite. The only issue I knew of was severe quad overdraw that tends to sap performance on poorly optimized meshes (which admittedly are the #1 most common use case for Nanite)
@GenericInternetter
@GenericInternetter 11 күн бұрын
Dude it sounds like you're just reading a lot of text generated by AI
@markitekta2766
@markitekta2766 10 күн бұрын
Thanks for sharing your thoughts 😃 About the reading: given that there is a lot of material to cover, I opted for scripting the material and reading it so I can get through it more quickly, and if necessary, people can rewind to the section they really need to hear again. An hour-long lecture would be too much to handle in one sitting and could lose its potential to deliver information effectively. As for the AI-generated-content portion of the comment, sorry to disappoint, but AI was never of any help in coming up with the text for any of the videos. It cannot produce anything original, it can barely follow instructions on what to write and how, and it can provide false information, which is why you always have to fact-check everything. In some cases I use it for proofreading, which I also have to verify against the entire script. Sorry if the content sounds disingenuous 🤔
@realdragon
@realdragon 18 күн бұрын
Me watching how to optimize graphics for no reason
@markitekta2766
@markitekta2766 18 күн бұрын
Who knows why that can be useful later down the line 😃 Whatever the case may be, thnx for watching
@Okabe_RintaroIF
@Okabe_RintaroIF 10 күн бұрын
No. There, saved you all 28 minutes
@markitekta2766
@markitekta2766 10 күн бұрын
😃😃 Great response, even though it may ruin my CTR 😃 I agree it is a bit long, but I'll create a shorter one just focusing on performance issues
@126sivgucsivanshgupta2
@126sivgucsivanshgupta2 Ай бұрын
Some comments on this video are really making me mad. People really don't know what they are talking about; they don't know how the tech works and where it is useful. Guys, if you're a game developer who cares about making the most performant real-time game ever (as needed with esports titles), you of course wouldn't have billions of polygons in your scene, and hence wouldn't benefit from Nanite. Nanite really isn't made for that; it is made for higher detail while still keeping the game real-time (around 16 ms per frame). Nanite really is a big thing in graphics programming, don't discredit it because some devs don't use it properly. (In fact, I would say UE5 is really not well optimized in the first place; it compiles 1000+ pipelines for no reason. If you really, really care about performance, you would be writing your own custom renderer.)
@markitekta2766
@markitekta2766 Ай бұрын
Thanks for sharing, I really appreciate your opinion. As a friendly suggestion, perhaps you shouldn't take other people's opinions as something that should be corrected or heavily debated. They have a different perspective on the topic, which helps all of us broaden our horizons. We can only show the paths we think are correct; each person, among many options, chooses the ones to take. Hopefully they all reach the same place in the end. Having such a great comment section really brings a smile to my face, as I see so many perspectives I never thought of, like the one you are making ;-)
@moravianlion3108
@moravianlion3108 Ай бұрын
Ok, now show us UE5 game that runs well and doesn't look like Fortnite.
@markitekta2766
@markitekta2766 Ай бұрын
If you are asking me specifically, I'll see what I can do, but if you are asking the community, I'd like to hear about it as well.
@OverJumpRally
@OverJumpRally Ай бұрын
My game, for example.
@mitsuhh
@mitsuhh Ай бұрын
Silent Hill 2 runs well apart from the occlusion culling bug.
@blackface-b1v
@blackface-b1v Ай бұрын
Black Myth: Wukong. It looks really good, so it justifies being demanding
@Capewearer
@Capewearer Ай бұрын
The Talos Principle 2. Unlike the infamous Wukong, it wasn't noted for such catastrophic performance troubles.