I'd been struggling with this for a while; you helped majorly, dude. Thanks!
@ArchitRege 2 months ago
Thanks a lot for the in-depth walkthrough
@jmr2008jan a month ago
It would be pretty neat to have a reference library of these available online through a web 3D app.
@carlossuarez9272 3 months ago
I have seen a lot of content around this topic today. It is a technology with great potential. I have tried KIRI Engine, Luma AI, and Postshot, and by far the latter gives me the best results in Unreal Engine; the model renders better, I suppose because I have more control locally. I did notice that the model lost quality when used in Unreal Engine, but I didn't know why until I heard your explanation of the limitations of Niagara. At the moment I'm training a model of a castle, based on a 360° aerial video I found on YouTube, for my game. Once I have the final result, I'll share it here. Thanks for the whole breakdown.
@AntonRuabicev 7 days ago
Did you succeed?
@TheWingEmpire 4 months ago
this is amazing man!! good job
@levelupvfx 4 months ago
🙏
@RogueBeatsARG 2 months ago
Damn, the 944 is so good looking
@Dartheomus a month ago
This software is absolutely amazing, and I think it will only get better as AI progresses. I've found it really doesn't like it when you miss an angle. You'd assume it will know how to render something like this car if you walk around it and then point down at the top; but if you then try to look at the car from a low angle, the entire model breaks up. Also, and more frustrating, there is a huge resolution hit. You can feed it really high-quality video, and what you get back looks like a tenth of the resolution, if that. I'm hoping that can be addressed soon. Finally, I really wish there were a streamlined way to rebuild these splats into 3D models. It would be really useful to couple this technology with 3D printing, but it's not very easy at the moment.
@nbms950 a month ago
Hey, thanks for the tutorial, really concise. Do you happen to know if you can then export the PLY out of Unreal as an FBX or other 3D mesh file?
@Densmode3dp 12 days ago
If you listen, he says he exported it in the .ply format
@zerosaturn416 3 months ago
Thank you so much for this tutorial. For months I'd been trying to find a simple program to train Gaussian splats locally, but none of them ever seemed to work; they were either too advanced or I would get errors.
@levelupvfx 3 months ago
Of course! Happy to help, that’s exactly why I wanted to make this tutorial!
@TheBadBone23 a month ago
Can you somehow use this as a 3D mesh? Something like replacing 3D scanning with this method: scan an object, then 3D-model something around it?
@Strawberry_ZA 2 months ago
Awesome Porsche!
@anoopak4928 3 months ago
that Mamukkoya Meme lol 😄
@hxnoon_3131 a month ago
ikrrr
@yvann.mp4 3 months ago
thanks a lot
@gaussiansplatsss 3 months ago
Is there a limit on how many photos you can load into Postshot?
@levelupvfx 3 months ago
There is a suggested limit in their documentation of 100 to 300, but since everything runs locally, you're not actually uploading anything, so there's no hard limit on how many images you can use. For example, I've run splats using 1,500 images, and I've run ones using a few hundred. In general, more images will help, but there's definitely a sharp falloff where adding more images doesn't add any more detail; it just slows the training down.
@sdsfa8337 2 months ago
Been using this program for a while and I love using it with UE5. By the way, do you know how to import splats into Blender with the color attribute? I don't see a color attribute export setting in Postshot :(
@levelupvfx 2 months ago
Sadly, I gave up pretty quickly when it came to Gaussian splats in Blender, so I only tested it with Blender before I started using Postshot. I think the color data should be in the PLY file, but if not, I'm sure there's a way to get it out separately.
@cedimogotes8662 a month ago
@levelupvfx How do you get the color data into Blender?
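For anyone else hitting this: in the common 3DGS-style PLY layout (which Postshot's export appears to follow, though you should verify against your own file), per-splat color is stored as zeroth-order spherical-harmonics coefficients (`f_dc_0`, `f_dc_1`, `f_dc_2`) rather than as a plain RGB attribute, which would explain why Blender shows no color attribute. A rough sketch of converting those coefficients to displayable RGB:

```python
# Convert 3DGS spherical-harmonics DC coefficients to RGB in [0, 1].
# Assumes the common 3DGS PLY layout (f_dc_0..2 per splat) — check your
# own export before relying on this.

SH_C0 = 0.28209479177387814  # zeroth-order SH basis constant, 1/(2*sqrt(pi))

def sh_dc_to_rgb(f_dc):
    """Map one splat's (f_dc_0, f_dc_1, f_dc_2) to a clamped RGB tuple."""
    return tuple(min(max(0.5 + SH_C0 * c, 0.0), 1.0) for c in f_dc)

# A DC coefficient of 0 maps to mid gray.
print(sh_dc_to_rgb((0.0, 0.0, 0.0)))  # (0.5, 0.5, 0.5)
```

With something like the `plyfile` package you could read those three vertex properties, run them through this, and write the result into a Blender color attribute via the Python API.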
@Utsab_Giri 3 months ago
When you say that it runs locally, does that mean it doesn't need to be connected to the internet? Thanks!
@levelupvfx 3 months ago
Yes! Nothing you make is processed online; everything happens on your machine. I think you may need to be connected when you first start up, because they have you log in with your account, but after that you're good.
@deniaq1843 4 months ago
Thumbs up! :)
@PGANANDHAKRISHNAN a month ago
There he is, our Mamukoya!
@ElliottK 4 months ago
Still no spherical harmonics in LUMA AI :(
@levelupvfx 4 months ago
I know! I'm hoping they find a way to get them working with Niagara, but it might be an engine limitation.
@korujaa a month ago
There is NO application, just showing off
@AlexTuduran 2 months ago
Of course they can cast shadows. It's just not coded yet.
@levelupvfx 2 months ago
Definitely let me know if you have a way to get shadows working! Currently the Luma AI plugin documentation states that "Shadows are not supported in Gaussian Splatting scenes." I figured it was a limitation of them using sprites in their Niagara system, which would make it rather difficult to compute an accurate shadow. But if there's a simple coding fix or something that makes it possible, that would be awesome.
@AlexTuduran 2 months ago
@levelupvfx It's not a simple coding fix. You'd have to capture the depth buffer from the light's perspective; then, in the shader that does the actual splat rendering, compute the fragment's position in light space, compare its distance to the light against the depth buffer, and decide whether the fragment is lit or not. And that's just the basic approach; since the splats are puffy, additional shadow-filtering techniques would have to be employed to produce a smooth shadow. Or implement volumetric light scattering, where the splats could be interpreted as cloud density and also self-shadow. There are multiple ways, but it's definitely possible. I was kind of expecting that, since Unreal supports lit particles, it would kind of work automatically.
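The shadow-map idea described above boils down to a per-fragment depth comparison plus some filtering for soft edges. A minimal CPU-side sketch in Python, purely illustrative (a real implementation would live in the splat shader, and all names here are made up):

```python
# Shadow-map test as described: compare the fragment's depth as seen from
# the light against the nearest-occluder depth stored in the shadow map.

def shadow_test(frag_depth_in_light, shadow_map_depth, bias=0.005):
    """Return True if the fragment is lit.

    frag_depth_in_light: fragment's depth from the light's point of view
    shadow_map_depth:    occluder depth stored in the shadow map at the
                         fragment's projected texel
    bias:                small offset to avoid self-shadowing ("acne")
    """
    # If something nearer to the light was recorded in the map, we're occluded.
    return frag_depth_in_light - bias <= shadow_map_depth

# PCF-style filtering for "puffy" splats: average the binary test over
# neighboring shadow-map texels to get a smooth 0..1 lighting factor.
def soft_shadow(frag_depth_in_light, neighbor_depths, bias=0.005):
    hits = [shadow_test(frag_depth_in_light, d, bias) for d in neighbor_depths]
    return sum(hits) / len(hits)  # 0.0 fully shadowed .. 1.0 fully lit
```

The filtering step is why this isn't free: hard binary tests on semi-transparent, overlapping splats would produce very noisy edges, so some averaging (or a volumetric interpretation) is needed for acceptable results.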
@Patheticbutharmless 2 months ago
To be honest, I don't see the benefit, for me, over photogrammetry. The wireframe is likely still a big mess; there is not much you can do with it professionally, yet. Since the method cannot understand what kinds of surfaces it is capturing, everything has this very bland, very uniform, self-illuminated look. How do you give areas different roughness or metallic values, etc.? It isn't possible. Separating parts of the mesh will look awful, with lots and lots of jagged edges, and smoothing those out will take about forever. Trying to force any kind of remeshing will distort everything beyond recognition, I imagine, unless the face count is 50 million and up.

At least for simulated environments, you can't really mix photogrammetry (or this) well with modeled 3D objects, because they will not "mesh" (pun by accident); it's either fully modeled or fully captured. (OK, I have to correct this: in a brightly lit outdoor environment they can look OK, but only because you don't have to de-light them.) Personally, I have always needed to retexture objects, using the captured diffuse as a starting point; without corrections it just doesn't hold up, it always looks way out of place.

There is so much more to an object than its mere shape and basic color value. We get a lot of information about something from the types of reflections and refractions off an object, which HAVE to be simulated via the information a model's surface provides to the renderer. In a few years, when some AI understands, after the capture process, what the object is and what color area corresponds to what type of surface (basic example: a painted car hood with rusted patches on it, or a worn-out leather jacket, etc.), I will look at this again.