Gaussian Splatting Is Awesome!

95,394 views

Gamefromscratch

8 months ago

Introduced in a graphics paper at SIGGRAPH 2023, Gaussian Splatting is taking the graphics world by storm. Like NeRF, it recreates scanned real-world objects from video or photos with stunning results, but the major difference with Gaussian Splatting is a big deal for game developers... it can work in real-time!
There are already Gaussian Splatting implementations in Unreal Engine (paid) and Unity (free! ;) ). So, we check out the free version in this video.
Links
------------
gamefromscratch.com/gaussian-...
-----------------------------------------------------------------------------------------------------------
GFS Patreon : / gamefromscratch
GameDev News : gamefromscratch.com
GameDev Tutorials : devga.me
Discord : / discord
Twitter : / gamefromscratch
-----------------------------------------------------------------------------------------------------------

Comments: 307
@nicolasdiolez · 8 months ago
Very proud that you used my Arc de Triomphe model as the example for photogrammetry 😅 Cool video, I need to learn Gaussian splatting, it seems crazy good!
@nicolasdiolez · 8 months ago
@JohnDavid888 Thank you for the insight! It seems that, for now, it's not suitable for professional use, but I suppose it's going to evolve.
@Jeal0usJelly · 8 months ago
What a time to be alive!
@Clawthorne · 8 months ago
What a time to be alive!
@kcfresh53 · 8 months ago
Get your papers fellow scholars
@brodriguez11000 · 8 months ago
The industrial revolution was indeed a good time.
@FuZZbaLLbee · 8 months ago
Holding my papers tightly
@Eichro · 8 months ago
Imagine where we'll be two more papers down the line
@johnny2552 · 8 months ago
Bro your channel is unlike anything else, I appreciate you moving like a madman on getting these videos put out for game devs and artists. Thanks!
@theaninova · 8 months ago
I think what stands out most to me is that no matter what angle you pick, it just doesn't look like a bad render. It looks like a blurry or smeared photo, or maybe a painting with a particular style. I guess the big question is going to be: can we apply lighting to it in real time in some form, and can we compose multiple of them together? It seems to be really, really good at rendering trees; they just look so fuzzy and detailed from a distance even when they're just a few blobs. I'd be interested to see a racing game with a full scan of the track, using Gaussian splatting for the more distant environment and traditional rendering for the road and car.
@s4shrish · 8 months ago
I feel like this renders stuff closer to the way we as humans perceive things. Like how a point of light, when defocused, is stretched into a circular shape. Basically a bunch of dots that bleed into each other more or less based on focus. Which is kind of what this is as well.
@marsimplodation · 8 months ago
No way you are talking about this paper right now, I was about to read it tomorrow for college-related stuff xD I have a course on computer graphics dealing with the latest research at bachelor level, and this is one of the possible papers to work with.
@gamefromscratch · 8 months ago
Start with the Aras rundown before jumping into the paper, it's a really elegant TL;DR summary.
@marsimplodation · 8 months ago
@gamefromscratch Thanks, I will. I'll need to read the paper either way, though.
@augustday9483 · 8 months ago
So far it seems like these scenes are basically one big thing that you import into your project. To make it usable for a proper game, I'm imagining a future where you have individual models composed of splats (for example, a bike or house) which can then be imported into a larger scene. However, the problem with that is that these splats seem to have their lighting baked in. If you moved the bike into a different scene with different lighting, it would look really out of place. I find it hard to imagine that this would ever take over and replace polygonal rendering.
@olwiz · 8 months ago
Oh, but it's possible. Just not yet. You're forgetting this has just been released without optimization, which the authors themselves acknowledge, and there's no hardware tuned to this yet. Polygon rendering was created around the 60s; it took a few decades until we had hardware tuned for polygons, and the tuning never stopped. The first GPU tuned for this will likely be 3x current performance (the first step always has the biggest gains). There may be some algorithm that makes the blobs shift with lighting, but even if not, remember that software development, and games in particular, have a history of using tricks. For example, they might use 'invisible polygons' (simple and untextured) for collision, and someone could also come up with a way to use those polygons to inform the lighting. The first iteration would be horrible (splats AND polygons, heavy), but then the methods get updated, shortcuts get found between splat data and polygons, simpler polygons suffice, GPUs catch up... In the past few decades we've had what, four different named anti-aliasing methods, plus lighting methods and so on; not only did each approach improve, so did the GPUs tuned for the tricks used, and then the supporting software around them (DirectX, Vulkan, etc.).
And all of the above would be possible without adding AI into the mix. Now with AI? The same way AI is being trained for upscaling and frame generation, AI on the GPU, maybe even with a dedicated chipset, could be trained to deduce and recreate light and shadows from splats on the fly. And we're talking today's tech only; who knows what new breakthroughs we'll have in hardware or neural networks. All the fast pace we've been seeing has involved zero new milestone improvements hardware-wise. Just the other day I read that Intel is tinkering with glass for chipmaking that could break a physical barrier for computing.
I 'predicted' current-gen AI like a decade ago when I first read about neural networks at uni, which at the time was far, far away from anything usable. I'm no seer, and I'm far from the only one. You just need a bit more imagination to extrapolate the likely path of current-gen tech; the entire industry does it. The only unknown is how long it will take and exactly how, but close approximations are easy. And I don't think this will take a decade to come up. It may, we can never know, but besides the current pace with AI and GPU tuning via AI, remember that NeRFs and splat tech like this are already around a decade old... Heck, I just realized the kind of trick Nanite did for polygons could come up for splats too: something between algorithms, LODs, and AI around splat density, so it could have higher resolution (more points) than the examples in the video for things seen up close, but dynamically lower the density in the background based on distance to the camera. The more I think about it, the more possible approaches come up. Just wait: academia will be all over this with students trying different stuff, and as soon as the first GPU or drivers are tuned for it, you can bet game devs will give it a spin too, the folks at Unreal and Unity definitely. Heck, the way Nvidia is, the moment they saw this, some calls were made for a new team to toy with it.
@Teodosin · 8 months ago
Surely that lighting problem can be solved. It just needs time for people to figure it out.
@jensenraylight8011 · 8 months ago
Hacky solutions always lead to a ton of soul-crushing cleanup afterwards. Professionals found this out the hard and painful way.
@Teodosin · 8 months ago
Wow, such pessimism
@jensenraylight8011 · 8 months ago
@Teodosin Not pessimistic, just realist. It's easy to be overly optimistic if you're an amateur with zero knowledge of how things work. Also, people who use this kind of hacky technique don't give a damn about art direction; that kind of thing is just something that gets in their way and should be eliminated. Which will result in a generic game. At this rate, you should just write a prompt to make a full game for you; why bother creating a model or writing a single line of code? You already generated the model, why stop halfway? Go generate the whole dang game.
@kyoai · 8 months ago
I could see this being used in parallel with traditional methods: use this method to render specific static mesh models that are high in detail, while other parts of the game world, especially dynamic parts that are animated, stay as-is with polygons + textures.
@vitordelima · 8 months ago
It can be animated by transforming the particles the same way it's done to vertices in regular models, for example.
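For the curious, a minimal sketch of that idea in Python: standard linear-blend skinning applied to splat centers. The array layout and function are hypothetical, not from any shipping splat renderer, and a real implementation would also rotate each splat's covariance by the same bone transform; this only moves the centers.

```python
import numpy as np

def skin_splat_centers(centers, bone_matrices, bone_ids, bone_weights):
    """Deform Gaussian splat centers with linear-blend skinning,
    exactly as you would deform mesh vertices.

    centers:       (N, 3) splat positions in the rest pose
    bone_matrices: (B, 4, 4) current bone transforms (rest -> posed)
    bone_ids:      (N, 4) indices of the bones influencing each splat
    bone_weights:  (N, 4) per-bone weights, each row summing to 1
    """
    homo = np.concatenate([centers, np.ones((len(centers), 1))], axis=1)
    out = np.zeros_like(centers)
    for k in range(4):  # accumulate the 4 weighted bone transforms
        m = bone_matrices[bone_ids[:, k]]                  # (N, 4, 4)
        moved = np.einsum('nij,nj->ni', m, homo)[:, :3]    # transformed centers
        out += bone_weights[:, k:k+1] * moved
    return out
```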
@jlewwis1995 · 8 months ago
@vitordelima Yeah, I don't see why you couldn't at least do simple animations on the models. Though considering the high point density that's probably required, maybe it would be best to only use simple N64-style animation (with the different parts of the model being separate) and not full-on skeletal animation for now.
@UltimatePerfection · 8 months ago
@uusfiyeyh I'm sure it's a hurdle that we'll eventually overcome; just like in early 3D games all shadows and depth were baked into the diffuse texture, and only later did we get stuff like bump and normal maps. But yeah, for now it is more of an archvis/survey tool than actually useful for gamedev.
@morgan0 · 8 months ago
And it could be useful for turning a complex, highly detailed raytraced scene into something that could be played on much more normal hardware, maybe with some level of reactivity added in so the stuff that can move can interact with it.
@Kumodot · 8 months ago
Amazing breakdown of this tech. One thing that I want to see, and it will probably happen real soon, is a combination of many Gaussian splatting scenes to cover bigger areas, and single assets ready to compose into scenes.
@vitordelima · 8 months ago
NeRF seems to have something like this already and maybe it can be adapted to Gaussian splatting.
@joshwent · 8 months ago
Absolutely jaw-dropping technology. So many practical applications for simpler photogrammetry-type tech: virtual museum walkthroughs, interior building walkthroughs like Google Maps but indoors, maybe even self-scans to send to a telemedicine doctor. Just endlessly cool possibilities! For games, however, I'm honestly not excited about this. Graphics with even just a pinch of intentionally designed style are much more immersive to me than just playing in a perfectly representative world. I already spend every day IRL, show me something NEW! 😁
@gamefromscratch · 8 months ago
Ok.... what about a Wallace and Gromit or Fraggle Rock style world, but physically modeled then captured with Gaussian Splatting? Or old-style stop-motion Ray Harryhausen worlds, but scanned and playable! ;) Although honestly a traditional pure CG workflow would probably still be cheaper and more effective.
@joshwent · 8 months ago
@gamefromscratch ClayFighter 2023?! I love it! 😆
@carpenterblue · 8 months ago
@gamefromscratch Actually, the youtuber Olli Huttunen did a really cool test where he used a 3D model made in Blender and converted it to a 3D Gaussian splat. You absolutely can mix and match. The splat is built from a sequence of pictures. Technically speaking, you could animate a flythrough of a room on paper, scan it, send it to a computer, and have a 3D splat of that... if you are insane enough, that is. Also... I think there is high potential for someone just straight up building a sculpting/painting tool eventually, in the vein of Quill. This is an absolutely GIANT thing for games.
@JB-fh1bb · 8 months ago
The Gaussian splat doesn’t have to use point clouds or photos and could be the actual rendering engine for 3D games. The biggest improvement here over traditional pipelines is the *massive* reduction in computing while maintaining (and arguably improving) visual quality. Imagine this being used for a next-gen version of Dreams that can be played on the Quest 2.
@Vaeldarg · 8 months ago
The interesting thing to me is that this tech looks familiar: when I was looking at companies in the VR/AR space, back when "light fields" were causing buzz through ones like Magic Leap, I found an obscure company named "Euclideon". Their idea was using point clouds (the "light fields") for VR, and they liked showing off the detailing. This seems to be a much-improved evolution of that.
@capsey_ · 8 months ago
I feel like the perfect usage for this tech is Google Street View (especially in VR). You don't need dynamic lighting and objects, target object details are important, and having a big open world is not a requirement. I wonder if it's possible to have multiple scenes using splats and smoothly transition between them as the camera moves, to make moving down the road in Street View less weird than it currently is.
@Kumodot · 8 months ago
I really want to see applications using Gaussian splatting in VR, in something like the Quest 3. That needs to happen!
@kuromiLayfe · 8 months ago
It is pretty much Unreal's Nanite tech but at a point-cloud level instead of polygonal. The biggest issue is that at larger depths you get a noisy dithering effect which, especially in VR, can cause nausea even when rendered at 90+ fps in real time... amazing for still scenery, but not so much for motion.
@fledgeking · 8 months ago
I think the Quest 3 would probably have trouble with the polygon count; the render distance might have to be pretty low.
@steven11101010 · 8 months ago
I assume you are referring to being able to navigate a real-world environment. But that's not really the use case. The key issue is the way the point clouds are generated: they are generated *around* an object. That's what enables the recreation of the object in 3D. For recreating environments, you need the inverse, which isn't this. You can see the issues in the video when Mike ventures just a few yards from the bike.
@pixelfairy · 8 months ago
The XR2 is more about texturing than geometry. It's the opposite of what it's made for. You could post-process into a low-poly model, but then you might as well use NeRF or traditional photogrammetry.
@fledgeking · 8 months ago
@pixelfairy Yeah, they're kind of a package deal
@georgezubat7225 · 8 months ago
This really reminds me of dreams. Very detailed in specific areas, but when you try to remember non-focal elements it just isn't there. This would be a great artistic rendition of dreams!
@nebuchadnezzar916 · 8 months ago
Interestingly, when I used to have OBEs, I experimented with focusing on distant details, and things were grainy, not unlike this.
@bgrz · 8 months ago
This technology has a lot in common with the game/engine called Dreams on Playstation.
@DrkFX · 8 months ago
Agree, looks similar to how Flecks are rendered in Bubblebath engine (Media Molecule's Dreams, Playstation).
@georgezubat7225 · 8 months ago
@bgrz I always wondered how the rendering in that game worked!
@TeckGeck · 8 months ago
I was thinking of Dreams on the PS4 too
@maymayman0 · 8 months ago
Mike your channel is awesome and thank you for covering all the different stuff you do!
@kreur · 8 months ago
I think we will see this sooner in more static-ish applications like real-estate virtual tours. Maybe some experimental games that happen in a static-ish environment, like a single house. But who knows, maybe it will be game-production-ready in a year.
@webgpu · 8 months ago
I rarely comment on a video's quality and content, but I had to come here to congratulate this channel's creator on the good rhythm, speed, and clarity of the delivery of a technical topic.
@_remblanc · 8 months ago
I can imagine someone pulling the PS1-era tricks with models running over these at fixed-camera angles to produce a scene. It would be a highly unorthodox workflow, though, and quite pricey at that.
@ekstrapolatoraproksymujacy412 · 8 months ago
It works as blobs in 3D space, just like volumetric clouds or fire; no need for any "PS1-era tricks". It can coexist with standard mesh-model rendering, no problem.
@OutrunCitizen · 8 months ago
What we need is for someone to make a Gaussian Splatting modeling program.
@MaikoYT · 6 months ago
Why model when you can simply create that object in real life and scan it in?
@MrEnkelmagnus · 8 months ago
Finally someone explained all this cool new tech in a way I understand.
@carlosrivadulla8903 · 8 months ago
what a time to be empathic!
@DessertMonkey · 8 months ago
Last time I saw graphics like this, they said "these are grains of dirt".
@ScibbieGames · 8 months ago
I was thinking about working on a Godot implementation, but because instanced rendering (with MultiMesh) can't be culled on a per-mesh basis, I was uncertain whether it was achievable with decent performance. I'd like to hear from more experienced Godot developers, because I'm a noob. The reference rendering implementation is also fairly complicated: it uses CUDA to efficiently order the splats for rendering, which doesn't translate well into the Godot renderer to begin with.
@vitordelima · 8 months ago
Some renderers use transparent quads aligned with the viewer (similar to Doom's enemies and items) to render this instead.
@SMorales851 · 8 months ago
One could use compute shaders to order the splats, but I don't think Godot allows you to perform compute operations on the main RenderingDevice, and there's no way to share buffers between devices without transferring the data to the CPU and then to the other RD, which would be slow.
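To make the ordering problem in this thread concrete, here's a naive CPU-side sketch of the back-to-front "painter's sort" that alpha-blended splats need every time the camera moves. It assumes an OpenGL-style camera looking down -z; the reference implementation does the equivalent on the GPU with a CUDA radix sort precisely because N is in the millions and a per-frame CPU sort is too slow.

```python
import numpy as np

def sort_splats_back_to_front(centers, view_matrix):
    """Order splat indices far-to-near for correct alpha blending.

    centers:     (N, 3) splat positions in world space
    view_matrix: (4, 4) world-to-camera transform
    returns:     (N,) indices, farthest splat first
    """
    homo = np.concatenate([centers, np.ones((len(centers), 1))], axis=1)
    cam = homo @ view_matrix.T       # positions in camera space
    depth = -cam[:, 2]               # camera looks down -z, so -z is distance ahead
    return np.argsort(-depth)        # descending depth = draw farthest first
```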
@Polygarden · 8 months ago
It's like a smart interpolation between different cameras. But this exact feature is also its disadvantage, as it's quite hard to remove the captured lighting from the source to create usable game assets. You have stretched splats which also contain the lighting (and, in this case, only the lighting as you captured it). Depending on what angle you look from, they are stretched differently and as such are able to shade your scene correctly, but the "lighting splats" belong to your scene in the very same way as solid objects. It's probably possible to make it work, but you will have a hard time removing the lighting from those (and this is needed if you want to combine multiple assets and/or different scans). It's amazing tech, but my guess is that it's rather useful for capturing true 3D photos for personal use cases.
@NeoShameMan · 8 months ago
It's a point cloud, unstructured. Just bucket the splats into voxels and traverse the voxels from the camera, then pass the ordered data to the renderer.
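A rough sketch of that bucketing idea, under the assumption that an approximate ordering of whole voxel buckets is acceptable in place of an exact per-splat sort; the grid resolution is an arbitrary illustrative choice.

```python
import numpy as np
from collections import defaultdict

def voxel_ordered_splats(centers, camera_pos, voxel_size=0.5):
    """Approximate back-to-front ordering by sorting voxel buckets
    instead of individual splats: coarser than a true sort, but cheap."""
    keys = np.floor(centers / voxel_size).astype(np.int64)
    buckets = defaultdict(list)
    for i, key in enumerate(map(tuple, keys)):
        buckets[key].append(i)

    def neg_dist(key):  # farthest bucket sorts first
        center = (np.array(key) + 0.5) * voxel_size
        return -np.linalg.norm(center - camera_pos)

    order = []
    for key in sorted(buckets, key=neg_dist):
        order.extend(buckets[key])
    return np.array(order)
```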
@NeoShameMan · 8 months ago
@Polygarden We aren't talking about meshes; it's not representing surfaces but a light field. That's why a traversal using voxels makes sense, as a light-field query. We aren't trying to reconstruct volume.
@dvelasco · 6 months ago
Basically Maya's Paint Effects applied to a photogrammetry-derived point cloud. Ingenious!
@Braindrain85 · 8 months ago
Point cloud approaches are definitely very cool, and this one is even prettier. Though they always come with a couple of downsides when it comes to lighting, animation, etc.
@Patapom3 · 8 months ago
SH is for Spherical Harmonics: it's the precomputed lighting environment.
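For readers meeting SH here for the first time: each splat stores a small set of spherical-harmonic coefficients per color channel, and the renderer evaluates them in the view direction to get a view-dependent color. Below is a minimal degree-1 sketch; 3DGS itself goes up to degree 3 (16 bands), and while the sign convention and +0.5 offset are modeled on the reference implementation, treat the details here as an approximation.

```python
import numpy as np

SH_C0 = 0.28209479177387814   # Y_0^0 constant
SH_C1 = 0.4886025119029199    # degree-1 prefactor

def eval_sh_color(coeffs, view_dir):
    """Evaluate degree-1 spherical harmonics for one splat.

    coeffs:   (4, 3) SH coefficients, 4 bands x RGB
    view_dir: (3,) unit vector from the camera toward the splat
    """
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    color = basis @ coeffs                   # weighted sum of bands, per channel
    return np.clip(color + 0.5, 0.0, 1.0)    # colors are stored offset by -0.5
```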
@studioopinions5870 · 7 months ago
I think the best way to make use of this 3D Gaussian Splatting is to integrate it with AR glasses and combine animated characters into the scene. That way it will seem like a virtual Holodeck from Star Trek. Maybe one could use the camera-tracking features of Blender or Unreal to make it possible to put moving characters in a story-like setting. Just my thoughts! Terry
@oleglinkov · 8 months ago
Nah, without animation and adjustable lighting (day/night, dynamic lights) nobody is going to change their entire pipeline. Not in gamedev, anyway. Cool thing for virtual museums, home tours, etc.
@gamefromscratch · 8 months ago
I think there is enough data that lighting could be implemented. That said, to mix it in with a traditional rendering pipeline, you'd end up with two lighting paths and that wouldn't be ideal.
@euden_yt · 8 months ago
This came out like 3 months ago; there's also an experiment that showed that you CAN change the lighting. I've also seen someone implement animations in augmented reality, displayed on an iPhone. That's just 3 months of progress. Think of GPT-3 in 2020 vs GPT-3.5 in 2022 vs GPT-4 today.
@shlokbhakta2893 · 8 months ago
@euden_yt And just like that, 4D Gaussian splatting dropped lol
@deluxe_1337 · 8 months ago
This is great for filmmaking.
@3govideo · 8 months ago
I've been following since Luma AI got NeRF going, and it's amazing what we can get just by recording with our phones. Hope they can produce a lighter player soon. 🔥 Thanks for the teaching on high-end terms 🚀
@0rdyin · 8 months ago
This tech can be a great alternative to traditional rasterization for backgrounds in interactive story games like 'Her Story'.
@WifeWantsAWizard · 7 months ago
It occurs to me that pairing splatting with traditional modeling in video games could be the wave of the future, if the splats are restricted to distant background objects and detailed foreground objects are replaced by LOD-correct alternatives as the player's avatar approaches.
@y1QAlurOh3lo756z · 8 months ago
Could this be used to "bake" extremely high-fidelity 3D scenes into point clouds and then splat them at runtime?
@altongames1787 · 8 months ago
I understand what you mean, but why would you want something less performant?
@drdca8263 · 8 months ago
@altongames1787 Hm? The idea is that you could take some other rendering method that *can't be done in real time*, and use it to create a Gaussian splatting scene which *can* be rendered in real-time. (People have tried this. It seems to work pretty well!)
@NeoShameMan · 8 months ago
@altongames1787 800 fps is quite performant in my opinion
@gabe2o2 · 8 months ago
Pretty cool tech, but I do wonder how it would play with different shaders, fog effects, and lights we control in the scene. At least, those are my immediate curiosities. Shaders because I'm a stylized boi, and shaders are how I accomplish this even when models are originally made in a more photo-realistic manner. Otherwise, seeing how drastic changes in lights and fog density mix with the tech would truly be an awesome little demo to see.
@bcmpinc · 8 months ago
I'm amazed at how it captures specular lighting. It's quite visible on the roof of the church.
@constantinosschinas4503 · 8 months ago
Gaussian Splatting seems to be just image mapping on feathered particles. The texture of each blob changes according to the viewing angle, picking the original photo that best matches the angle, or a close angle that does not block the view of each splat. File size must be quite big compared to traditional, static texturing.
@eddiewalpole · 8 months ago
To answer the question posed in the thumbnail: unlikely
@seraaron · 8 months ago
God, it looks like a dream when you get to the periphery of the scan.
@Arisilde · 8 months ago
This reminds me of how Media Molecule's "Dreams" works.
@Reavenk · 8 months ago
I could definitely see this getting momentum for capture; way more practical than lightfields. But seems like an uphill battle for real-time uses. The lighting may be dynamic, but those dynamics are baked. And I'm guessing there is a lack of frustum culling, and all the particles need to be sorted to properly alpha blend?
@ScibbieGames · 8 months ago
12:25 To be fair, you can do it with much less if you don't require as high a quality. Also, it's pretty likely a lot of speed can be traded for lower VRAM usage. To train to a reasonable 7,000 "iterations" you could probably get away with way less VRAM, and according to their own calculations it should be possible to train to reference-paper quality with just 8 GB of VRAM, but that hasn't been implemented.
@vitordelima · 8 months ago
And the spherical harmonics seem to be overkill for something that is mostly just doing specular reflections.
@dukemagus · 8 months ago
This will be crazy if you mix this tech with Google Maps/Earth data.
@BrianDamageYT · 8 months ago
Kind of reminds me of the landscape rendering technique from the old Ecstatica games.
@dzft3w · 8 months ago
It works well on mobile too! Interesting.
@shydun · 7 months ago
I feel like this would be good if you could separate each prop, somehow tie the dots to a dummy mesh, and then decorate the level.
@joloppo · 8 months ago
The spiky bits of light make it look exactly like when scenes load in Assassin's Creed... which was basically loading into a sim in the context of the game. Crazy.
@tombruckner2556 · 8 months ago
Now I just need a Unity iOS plugin to create these models in real-time :)
@devilofether6185 · 8 months ago
Gaussian splatting reminds me of the rendering engine in Dreams.
@hanniffydinn6019 · 8 months ago
Folks, the real use here is cinematographers using real-life backgrounds in an Unreal volume; as the film "The Creator" proves, real-life backgrounds are what really matter. This allows real-life backgrounds to be used in volume filmmaking! Real-time NeRF backgrounds are the future of volume filmmaking! 🤯🤯🤯🤯😎😎😎😎👍👍👍👍
@HasanRx7 · 8 months ago
I wonder if this tech could be used as a reference template in 3D modeling programs to model real world environments instead of using 2D reference images. It will be extremely useful to model on top of it since it gives real world scale and helps prototyping the basic shapes and scale of the environment. I'm not keeping up with 3D tech for modeling lately so I'm not sure if there is already a similar solution out there.
@steven11101010 · 8 months ago
I think this is the more realistic use case - as a tool to improve current workflows.
@doomgb4994 · 8 months ago
I'm rooting for this kind of usage as well. I don't want the tech to deprive me of the fun of modeling.
@mitch9254 · 8 months ago
Sim racing and golf games, for example, have been doing this for at least a decade: using laser-scanned point clouds as 3D reference to make a polygonal version of a real-world location. If this splat method ends up producing results at least as accurate as lidar, but substantially cheaper and faster, then surely studios and modders will be all in, but again, to use it as a reference.
@ozanyasindogan · 8 months ago
Looks amazing and practical. If they can use some NN on it to remove unnecessary particles and actually convert and split objects, that would be the endgame, I believe.
@MrAuxiom · 7 months ago
Wow, that reminds me so hard of the Virtues in Cyberpunk.
@DanielNistrean · 8 months ago
The M1 Max has a 24-core or a 32-core GPU. Just ordered one for a mix of mobile/hobby game development. Continuing to watch the video...
@zahir3d · 8 months ago
Thanks for the video. Is it possible to export it at the end to a 3D format (FBX, OBJ...)?
@jonvdveen · 7 months ago
In a way, Gaussian Splats are like very large atoms - they come together to make everything in the scene.
@VideaVice25 · 8 months ago
It's looking great, but it also means things I'd rather see die will survive and look even better in the future. Just imagine Metaverse + Nanite + Gaussian Splats + VR... Hell awaits.
@jimj2683 · 7 months ago
This is the future of Google Street View in 3D!
@diligencehumility6971 · 8 months ago
We are 100% gonna see something along these lines for future rendering in games
@BadBanana · 8 months ago
No, we're not. For movies, yes. For presentations, yes. Not for games. Rendering like this is the opposite of why we have render pipelines. It's unfeasible to ask a user to download hundreds of gigabytes of data per scene. If you want to create game objects from these procedures, then I'm sure that can be done. But no, you won't see games made like this, ever.
@eddiewalpole · 8 months ago
I’d give it 1% tops for games specifically
@RedstoneNinja99 · 8 months ago
I wonder if you could map a scene even faster by just taping a high-framerate 3D camera to a pole on your back.
@NeoShameMan · 8 months ago
A 360 camera is enough
@ardonnie · 8 months ago
Seems like it's just a matter of generating enough point clouds and pairing them up with descriptions before we can make generative models that create new scenes just based on a description.
@ludologian · 8 months ago
Thanks for sharing. I saw this weeks ago in another repo, sorry for not mentioning it. I want to implement a Unity volume point editor similar to the Nvidia workflow; it would also be a great thing to implement a delighting tool and neural decompression algorithms (long-term project goal), inshallah.
@megasupernewbie · 8 months ago
just waiting for this to become standard monitor tech
@juanme555 · 8 months ago
It looks very interesting. It will be very interesting to see whether game developers choose to invest the time needed to properly fake photorealism through Gaussian Splatting, achieving high fidelity at a very low processing-power cost, or whether they'd rather just use all the path-tracing methods, which will be a lot more convenient but also a lot heavier on the user's hardware.
@goodideas5659 · 1 month ago
Possibly a similar idea for use in gaming, but a lot simpler, is to get rid of high-polygon models in favor of a basic box outline shape instead, and just update the quads (two triangles) on each face with angle-adjusted photos depending on the player's view angle. This has been done recently in the new Ultra Engine, but I would use a more perfected method so you don't see any clipping between photos and it's totally smooth. Maybe it's as simple as making a detailed sprite sheet and a shader to smoothly combine and move between images, as sketched below...? If a view direction is between two image angles, then get the shader to create the correct image based on the two closest ones... I think some smart people could achieve this.
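What this comment describes is essentially classic impostor rendering. As a tiny sketch of the selection step, here's one way to pick the two pre-rendered views bracketing the current angle plus a cross-fade weight; the eight-views-around-the-object layout is an assumption for illustration, and a fragment shader would then sample the two sprite-sheet frames and lerp by the returned weight.

```python
import math

NUM_VIEWS = 8  # photos pre-rendered every 45 degrees around the object

def pick_impostor_views(view_angle_rad):
    """Return (index_a, index_b, blend): the two pre-rendered views
    bracketing the current angle, and a 0..1 cross-fade factor."""
    step = 2 * math.pi / NUM_VIEWS
    t = (view_angle_rad % (2 * math.pi)) / step
    a = int(t) % NUM_VIEWS       # nearest view at or below the angle
    b = (a + 1) % NUM_VIEWS      # next view around the circle
    return a, b, t - int(t)      # fractional part = blend weight
```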
@MylezNevison · 8 months ago
Can you say splats are kind of like multi-shaped 3-dimensional pixels that are doing reverse virtual 3D pixel mappings (or 3D pixel projections) based on photographic data?
@MR3DDev · 8 months ago
The reason this doesn't work for games (yet) is that you can't clean it up; unless I am missing something, this isn't geo, and you have a very small space to work with.
@paulwhiterabbit · 8 months ago
This could be a thing in the future, but it will only prosper in static 3D space viewing, since a 3D polygon is much more practical in the dynamic real-time environments that games need. I just hope the tech to stream humongous files faster than what we have comes sooner.
@NunSuperior · 8 months ago
Novalogic omg. That's a name I haven't heard in a long time.
@claudiusraphael9423 · 8 months ago
"It has a price tag of $137.43." -- "Yeah, we'll not be using that today ..."
@MonsterJuiced · 8 months ago
It's slow because it uses the particle system to render. It's the same with the Unreal Engine one. Each splat is rendered using Niagara, and Niagara, even on the GPU, gets real slow when rendering point clouds or particle systems above 1 million points.
@vitordelima · 8 months ago
The original demo uses screen space 2D rendering.
@MonsterJuiced · 8 months ago
@vitordelima The original demo of what? Made by whom? If you watched the video, the guy even confirms it's using the particle system to render the points, and this scene is over 3 million points. Why did you upvote yourself?
@JunglismVFX · 8 months ago
I wonder if you can cache them though. I've had quite a few intense Niagara systems running in a level, but I cached them and they worked well; this was for video production, though. Not sure how it would go in game dev.
@vitordelima · 8 months ago
@MonsterJuiced Of the technology being explained in the video. I didn't, you are just insane.
@Gigacat2137 · 8 months ago
That reminds me of how the models work in Dreams.
@R1po · 7 months ago
Can't really imagine a use in games. But for VR chatrooms, real-estate bureaus, or VR sightseeing...
@WolfCatalyst · 8 months ago
Bro, your computer chugs on everything. Most people can probably get 60 fps 😂 UNF did a tutorial a while back called make a full game in 2 hrs (or something similar) and he used one of the free monthly cities. He was chugging too, but then he went into the properties of the UE editor, changed a couple of settings, and it was perfect. Don't remember what he changed, but there's definitely a fix. Looks like I found a new use for my drone though. Thanks!
@ScibbieGames · 8 months ago
It's an unoptimized, experimental renderer for a generally unsupported rendering pipeline which is implemented on top of Unity.
@atsignsarestupid · 8 months ago
He probably changed "virtual shadow maps" to regular shadow maps; that's a one-click triple-your-speed Unreal Engine button right there.
@4.0.4 · 8 months ago
One thing I'm not sure I understand is whether you could combine scenes, or how it handles reflections other than creating a mirror-universe room. Like if you have two mirrors back to back.
@ScibbieGames · 8 months ago
It would show what was visible on the pictures it was 'trained' on.
@drdca8263 · 8 months ago
@ScibbieGames Makes sense, but it does make me wonder: what does it do if you record a scene where there's a mirror that isn't against a wall, and (in the training footage) have the camera go around the mirror? How will the quality of reflections in this case compare to the quality of reflections when the mirror is up against a wall and it can use the "treat the mirror like a portal" trick?
@NeoShameMan · 8 months ago
@drdca8263 The Gaussians have directional color, so a backfaced mirror will probably duplicate the color of the background where they are not seen. But that's a nice observation. Remember, these don't represent surfaces and volumes but light rays.
@drdca8263 · 8 months ago
@NeoShameMan Sorry, I don't think I understand quite what you mean by "duplicate the color of the background where they are not seen". Also, in order to represent occlusion, don't the splats kinda sorta have to represent volumes? That's what the opacity value handles, isn't it? Edit: to be clear, I do anticipate it still working somewhat when it can't make things such that you can "go into the mirror", on account of there being views in the training footage at the locations that "going into the mirror" would take you to, which don't look like what the mirror world would look like there. I'm just expecting that the quality would probably be somewhat lower, and wondering by how much.
@NeoShameMan · 8 months ago
@drdca8263 They use spherical harmonics, i.e. directional colors. These are used in games with light probes to illuminate a scene. They don't represent volume; they represent light rays from the source images. Basically, it's as if each one is a fuzzy, blurry cubemap, and the overlap of all of them reconstructs the view of a source image. It's like you took the 2D pixels of the source image, moved each to where it's most probable, and merged the many pixels landing in the same place into a cubemap.
@stonekase · 8 months ago
Finally
@cygnos4612 · 7 months ago
I use the Luma AI plugin for Unreal Engine. Super easy to use, and free. 😊
@whtiequillBj · 8 months ago
This reminds me of the system used in Dreams by Media Molecule. I know it's PlayStation-specific, but maybe if there is enough push we can get Dreams onto the PC eventually.
@synapse349 · 8 months ago
I wonder if one could use NeRF to build the point cloud used for splatting...
@astrahcat1212 · 8 months ago
All we gotta do is kick down those polys and make it able to produce stylized, non-photorealistic 3D models and we'll be good.
@namelessalias0007 · 8 months ago
Imagine combining this tech with what was used in the GTA 5 "enhancing photorealism" demo that came out a couple of years ago...
@SomeNerd361 · 8 months ago
So basically it's a neat party trick that was invented 20-ish years ago but couldn't really be implemented because we didn't have the horsepower; now we can brute-force it with horsepower, but it also doesn't really improve anything over existing techniques. ...Sounds like ChatGPT.
@jayzeus9738 · 8 months ago
Mmmmh this take is based out of space.
@NeoShameMan · 8 months ago
Not really; 800 fps for a hyper-realistic scene is a massive improvement. The technique existed in 2D but not in 3D; it's not just horsepower, it's realizing how to do it in the first place.
@insertoyouroemail · 8 months ago
It might be a useful asset bake target.
@JonoSSD · 8 months ago
I've been reading about this for a while and it looks like real innovation. Unlike real-time ray tracing: something extremely taxing on the GPU that was pretty much forced down our throats by a company out of ideas on how to charge thousands of dollars for products that aren't worth half their asking price in real performance.
@poetryflynn3712 · 8 months ago
Raytracing actually came from independent scientists in the '80s; we just needed the firepower to catch up for the consumer.
@JonoSSD · 8 months ago
@poetryflynn3712 I know, I've even read a few of the scientific papers about it. It's really interesting stuff. The problem was Nvidia forcing the technology onto consumers well before it was ready as this "crazy new thing that totally makes new GPUs worth double last gen", even though we're now three generations in and most of their lineup can't even handle it without upscaling (which was essentially invented because games could barely reach 60 fps on flagships when using ray tracing at the time; remember the RTX 20 series?). I'd say it'll be some 4-5 more generations before real-time ray tracing becomes viable without upscaling. Nvidia (and AMD, which does very little besides copy its competitor) should be focusing on working more closely with developers to better optimize current games, so they don't suck up 32 gigs of RAM all the time and need 200 GB of storage to run. But no, that's not flashy enough to sell thousand-dollar giant pieces of inefficient heatsink.
@jmvr · 8 months ago
@poetryflynn3712 And Gaussian splatting is in a similar boat, except that the firepower has already been there for a while; it just wasn't used for consumer applications until now. Usually it was used for mapping out stuff like CT scans, and the original paper from 1993 ( web.cse.ohio-state.edu/~crawfis.3/Publications/Textured_Splats93.pdf ) used it to map out wind and clouds.
@philbob9638 · 8 months ago
Real time ray tracing is not something that was forced down your throat by a company out of ideas, it's a technology that has been promised and pursued for decades and will continue to be pursued for a while yet. What you have now is barely scratching the surface.
@vitordelima · 8 months ago
@philbob9638 Then hardware-accelerated realtime raytracing was forced down everyone's throats by a company out of ideas.
@nocultist7050 · 8 months ago
I just want to use it on non-denoised raytracing output frames, with a depth pass for spatial data. Just let me see if it works...
@brodriguez11000 · 8 months ago
Hu-po has a two and a half hour YT video on all the details.
@BIFLI · 8 months ago
Has anyone done this with the Zapruder film yet?
@AshnSilvercorp · 8 months ago
Seeing a 3D algorithmic technique beat out the "mystical magic of AI", right as everyone is bum-rushing to use it on everything as if it will solve every problem, is refreshing.
@bigali69190 · 8 months ago
"Splat" is the future.
@sunbleachedangel · 8 months ago
You can make a cool psychedelic game with this
@prozacgodgamedev · 8 months ago
I think a really interesting real-world use case would be Google Maps; Google Maps is already kinda terrible up close... so they can't make it worse! haha - but no, they already have a number of photos, and it would probably work 10 times better for lots of scenes. It's "just a data processing issue" ... ish ;)
@Seacat17 · 8 months ago
Great. But what about FPS future?
@impheris · 8 months ago
Thanks to Aras for his contribution...
@linuxrant · 8 months ago
If that were available in Godot, I would immediately implement it in my project. I have at least two really cool ideas for how to use this tech. I wonder if the splatting could be switched from Gaussian to other methods of splatting, for example... paintbrush splatting... if you know what I mean...
@mmmuck · 8 months ago
I just want a tool to convert these to a polygonal mesh and textures.
@zodchiy3d · 8 months ago
Has anyone tried running Unity in VR mode with this plugin? In theory it should work. I'll have to give it a try.
@TiagoTiagoT · 8 months ago
Wait, what do you mean NeRFs are not real-time? I thought I heard of even some VR stuff using NeRFs, and I've definitely seen NeRFs rendered on web pages interactively... I know the original was slow, but there have been many improvements to the techniques since the original paper was published...
@leeoiou7295 · 8 months ago
How does this handle collisions?
@hirkdeknirk1 · 8 months ago
Very impressive. Even though this species may become extinct again, it is nice to see that evolution is continuing.
@spooderderg4077 · 8 months ago
DreamGaussian is an AI tool you can run on your PC to create models in minutes.
@SnakeEngine · 8 months ago
I don't see it going anywhere for games, just like the "unlimited detail" stuff.
@Homiloko2 · 8 months ago
Is it even possible to apply lighting to this? I don't understand much about photogrammetry, but it seems pretty much immutable, e.g. you can't move objects, can't alter lighting, and so on, which greatly reduces the applications for this.
@gamefromscratch · 8 months ago
Yes and no, I believe. Yes, in that you have positional and color data and it's being rendered in real-time by the GPU; you could certainly implement virtual lighting (assuming you are much, much better at math than I am). No, in a traditional pipeline like Unity's: this isn't rendered alongside the rest of the scene as I understand it, more in parallel, so if you added a light in the Unity scene, nothing would happen to the Gaussian Splat you've imported. I do think you could make it work, but you'd essentially have two parallel lighting paths (I think).
@ScibbieGames · 8 months ago
The colors and reflections are stored inside "spherical harmonics"; they hold the color value when looked at from different angles. You could technically, probably, somehow bake the lighting from your scene into these harmonics, but that would still be immutable. Doing that in real time, for some millions of points in space, might be a bit much; perhaps you'd need some sort of fragment shader, but for splats. lol
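To make "baking into the harmonics" concrete: the crudest possible version just rescales each splat's DC (degree-0) band by a new light color and leaves the view-dependent bands alone. This is a toy sketch, not a physically meaningful relight, and the (N, 16, 3) layout mirrors degree-3 SH but is an assumption here.

```python
import numpy as np

def tint_splats(sh_coeffs, light_color):
    """Crudely 're-light' splats by scaling only the DC band.

    sh_coeffs:   (N, 16, 3) per-splat SH coefficients (degree 3, RGB)
    light_color: (3,) RGB multiplier, 1.0 = unchanged
    """
    out = sh_coeffs.copy()
    out[:, 0, :] *= light_color   # band 0 is the view-independent base color
    return out
```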
@vitordelima · 8 months ago
Surfels, which are similar to this, were used for realtime global illumination over triangles in the past. Still, it would require the use of lighting probes around clusters of splats, or some other simplification.
@skeleton_craftGaming · 8 months ago
I can't wait until I can afford a $10,000 A6000... though I don't even think the A6000 has 24 gigs of VRAM.