
Unlocking the Potential: Mastering 3D Gaussian Splatting using Pre-rendered Images

44,223 views

Olli Huttunen

Days ago

Comments: 233
@gridvid 11 months ago
I'm so glad you did this Proof of Concept, it looks absolutely amazing... and thanks for the shout out 😊 I hope this will be integrated in every 3D program at some point. There is so much room for optimization in this process. For example the point cloud generation could be done automatically inside the engine. Also, there are already early tests of dynamically lighting and animating Gaussian Splats. We've got an interesting tech right here that could revolutionize the way we render 3D scenes altogether 😊 Keep up the fantastic work... 😊👍 Btw... I wonder how fictional or cartoony content would turn out 🤯
@khkgkgkjgkjh6647 11 months ago
This is pretty insane. I think in theory it should be possible to render directly into 3d gaussian splats, without ever having to do any point clouds or training process.
@m.sierra5258 11 months ago
Not sure about training, but the point cloud could definitely be generated directly from the geometry. Remember, Gaussian splats are an extension of a point cloud; there is no way to avoid one.
@AlexTuduran 11 months ago
Sierra is right. The points are at the basis of splats. Splats even keep their position and color while fitted.
@AMessful 11 months ago
But if you use the geometry as a point cloud, does it retain the render quality, e.g. the reflections and refractions of the water bubbles?
@productjoe4069 11 months ago
@AMessful I think you’d need to convert the mesh into a textured form first and derive the point cloud from that (using edge detection or similar). This cloud could probably be pruned a bit (more pruning means lower quality but faster). Then you could path trace from each point to get lighting/reflections etc. to set the spherical harmonics coefficients of the colour components. Just guessing here (I’m not a 3D graphics researcher) and I don’t know if that’s any more practical than just using carefully chosen probe cameras and doing the entire original pipeline. One thing that would probably be better though would be fewer floaters and more accurate edges (because we know exactly where in 3D each point is by transforming the texture coordinates by the geometry’s transform).
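A minimal sketch of the idea in this thread: since the renderer already knows the mesh exactly, the initial point cloud for splat training could be sampled straight from the geometry instead of reconstructed with COLMAP. Everything below (function name, per-vertex colors as the color source) is a hypothetical illustration, not the pipeline used in the video:

```python
import numpy as np

def sample_mesh_points(vertices, faces, vertex_colors, n_points=10000, seed=0):
    """Area-weighted sampling of surface points (with interpolated colors)
    from a triangle mesh -- a possible stand-in for COLMAP's sparse cloud."""
    rng = np.random.default_rng(seed)
    tris = vertices[faces]                                   # (F, 3, 3)
    # Triangle areas via the cross product of two edge vectors
    areas = 0.5 * np.linalg.norm(
        np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates on each chosen triangle
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    bary = np.stack([1.0 - u - v, u, v], axis=1)[:, :, None]  # (N, 3, 1)
    points = (tris[idx] * bary).sum(axis=1)
    colors = (vertex_colors[faces][idx] * bary).sum(axis=1)
    return points, colors
```

The open question raised above still applies: points sampled this way carry only surface color, not the path-traced lighting a rendered image would bake in.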
@loookas 11 months ago
I thought this video was about that.
@EmanuelSer 11 months ago
This is going to be a game changer! Clients always want to change camera movements like it doesn't take hours or even days to do so
@cyber_robot889 10 months ago
The answer was on the surface all this time! This guy's idea is genius. Thank you for pointing it out and showing how to do it!!
@havocthehobbit 11 months ago
That's a bloody brilliant use case for GS that I never thought possible.
@badxstudio 11 months ago
Olli, fantastic video and a great showcase of the use case! Even with NeRFs, a friend of ours managed to get a 3D scene created from a Spider-Man game and it was awesome! We were thinking of testing that out with Gaussians to see how it would turn out. Clearly it is going to look awesome!!
@BlackAladdin_ 11 months ago
Yeah y’all definitely should glaze our life out with that. Y’all are the reason why I know about NeRFs.
@tyler.walker 11 months ago
Ay, Bad Decisions! I just got finished watching you guys' video on this tech! It was really great, too! Scanning a 3D scene from a game is something I've wanted to try since I first heard about NeRFs, but I haven't known the best way to go about doing it. Has your friend made a video or documented how he did it?
@badxstudio 11 months ago
Hey mate
@OlliHuttunen78 11 months ago
You can also check my video about that topic here: kzbin.info/www/bejne/fZ-tgHmYetyLqNksi=VBTzPpHA3FB8VSox
@GooseMcdonald 11 months ago
Do it with the first Matrix movie gun scene :)
@fhmconsulting4982 11 months ago
This 3DGS could actually be the "front end" to so many tools that have limited interfaces, because the finished visual is a great approximation of reality. Every geographic information system, facility management system, BIM\Fabrication package & surveying application could use this technology. The limiting factor for so long has been differing file formats and interfaces. This could be the 3D printing press that makes digital data almost agnostic. 3D has come a long way since the 1980s but still uses the same methods and tools, just faster. If you use your room as an example, you could have all the services, construction materials, fabrication plans, building approvals, energy calculations etc. all use the same 3DGS format to display information on any pixel-generating viewer. Exciting times.
@ArminDressler 10 months ago
Olli, this is a fascinating technique and the results in the real-time animation look amazingly good. Especially when you consider that it was the first attempt. Congratulations!
@semproser19 11 months ago
I'm loving these. The most obvious gaming use case would be realtime cinematics. Cinematics are often one of two things:
- an in-game cinematic with obviously game-level graphics/lighting/models, or
- a pre-rendered cinematic with much higher quality assets and methods that can't be done realtime.
If you're using a mix of Gaussian splatting for the environment and then spending all your traditional rendering power on your moving parts/characters, then you could have cinematics nearly as pretty as pre-rendered ones. And that's just until we get animated splats. This way you can have faultless splats too, because the camera would only ever use the path the splat input references took.
@AlexTuduran 11 months ago
Natural next step. Well done! I'm already attempting to fit the splats using genetic algorithms. Fingers crossed.
@resetmatrix 11 months ago
Great job. I think this is the future of 3D realtime graphics, and surely a new evolution of 3D Gaussians will incorporate animation, with animated elements inside the scene.
@pabloderosacruz 11 months ago
this is the future of 3d
@technomaker777 11 months ago
I was thinking about this as soon as you uploaded the video about GS. Very interesting!! Great videos!
@JHSaxa 11 months ago
Wow! This is amazing! One of the most impressive things I've seen all year. Thanks for sharing this experiment.
@railville 11 months ago
Fantastic. Makes me wonder if you could go to a film like the Matrix and use a bullet time scene as your image base and render that to splats
@tokyowarfare6729 11 months ago
I'm amazed how casually you record video and get these cool results. This test was also super, super cool.
@diaopuesto7082 11 months ago
I now have infinite ideas thanks to this video.
@sunn2000 10 months ago
I was thinking about this... thanks for posting!!! I know what I'm doing this weekend! I'm in 3ds Max and Corona.
@RandomNoise 11 months ago
Well, this is something interesting and very useful
@harriehausenman8623 11 months ago
Fascinating idea and great video! Wonderful accent 🤗 The effort put into clear speech is much appreciated! A real problem for most "native speaking" channels 😉
@DaveBrinda 4 months ago
This is amazing… this unlocks so many creative possibilities. Thanks for sharing!
@ThomasAuldWildlife 11 months ago
This is getting Nuts!
@Erindale 11 months ago
Fantastic experiment! It'll be fantastic once we can go straight from a scan or DCC into real-time Gaussian splats. I wonder how we could do dynamic lighting within Gaussian splats, though. Right now, it would probably be faster to bake lighting information into the textures of your 3D scene so you can get Cycles-style lighting in real time in Eevee. Looking forward to seeing how this tech progresses!
@konraddobson 11 months ago
Games were my first thought too. Very interesting!
@Because_Reasons 11 months ago
Looking forward to seeing how this progresses!
@lescreateurs3d 11 months ago
Amazing, thx for your experimentations !
@mujialiao6088 8 months ago
This is a game changer for the future of the VFX industry
@danielsmithson6627 11 months ago
THIS VIDEO WAS MADE FOR ME!!!!
@davebrinda8575 5 months ago
This is amazing! 🤯 Thanks for sharing....will follow with interest!
@realthing2158 11 months ago
Great test, this is the kind of content I'm excited about right now. I'm holding off trying it myself though until I can get a 4090 graphics card.
@ALERTua 11 months ago
OK, so this might be a game-changer for interior design. My wife renders her Revit interiors using Corona. She positions the camera, sets the lighting, and makes a rendered shot. Corona renders only on CPU, so this takes up to an hour and a half for one shot. There are at LEAST 20 shots per project, which means 30-40 render iterations. If she could instead capture the whole project and just fly the camera around, taking screenshots of such perfect quality, it would drastically lower the time and electricity it takes to finish a project visualization. I would love to see how your project turns out. I can see big commercial (or open-source) potential in it! Would be glad to help if this goes open-source! Would gladly consider buying it for my wife if it is commercial!
@Geenimetsuri 11 months ago
Brilliant stuff! Having a complex "photorealistic" 3D landscape render well above real time is nothing less than sorcery! I also wonder what the uncapped FPS would have been.
@Bluetangkid 9 months ago
This is really cool. I'm sure someone will move to generating these splats directly from Blender and avoid the loss of detail from generating frames and then training the splat. The renderer knows the geometry, camera pos, etc. and could provide much more useful detail when creating each surface instead of inferring it. Interested to dive deeper into this.
@matslarsson5988 11 months ago
Very interesting stuff. Keep up the good work!
@3dvfxprofessor 11 months ago
This idea for using #GaussianSplatting is so obvious. Brilliant! #b3d
@sugaith 11 months ago
impressive
@estrangeiroemtodaparte 11 months ago
Awesome content!!
@nolanzor 11 months ago
very cool!
@ilyanemihin6029 4 months ago
Amazing idea and implementation, thanks!
@OllieNguyen 11 months ago
amazing !
@merion297 11 months ago
Yay, finally, someone tested it! 🙏 Now replace the Blender Render(!) step with a generative AI process, like Runway ML or ControlNet, and use just a 3D OpenGL output (as a "3D Skeleton") from Blender where colors correspond with different object prompts for the generative AI. Or any similar process you can make. Consistency is key though.
@marceau3001 11 months ago
Very interesting. I indeed believe the point cloud could be produced directly from the 3D geometry, tessellated to a high poly count with the texture baked to vertices. It should be faster than rendering all the images, and there wouldn't be any forgotten or occluded portions of the model. Thank you for your good videos.
@mortenjorck 11 months ago
The part that’s missing is still the path tracing. Though maybe there’s a way to bake that in as well? Failing that, I wonder if there’s a way to write an algorithm to calculate the optimal camera path through a scene to maximize coverage while minimizing redundancy.
@TimmmmCam 10 months ago
@@mortenjorck Yeah I think you're right, but isn't like 99% of cycles render time effectively just calculating lighting? I can't see why you wouldn't get equally good results just by using Eevee with baked lighting.
@r.m8146 11 months ago
awesome
@farhadaa 11 months ago
I was thinking this same thing, where some intense renders could be viewed in real time.
@darviniusb 11 months ago
I wonder if the old irradiance map could be converted to Gaussian splats, or if the same technique could be used to generate a perfect GS scene.
@dialectricStudios 11 months ago
Siiiiiiick. I love the future
@LianParma 7 months ago
Very cool!!! Would love to try processing the room scene on my 3090 to see if it gets to 30k steps.
@KyleCypher 9 months ago
I would love to see someone use the 3D model to inform the machine learning model about the generated point cloud, in order to help remove noise/ghosts and make the models more accurate. Or perhaps create the point cloud directly from an engine.
@hamidmohamadzade1920 11 months ago
wow what a great idea
@NeoAnguiano 10 months ago
I kinda imagine there must be a more direct way to convert from a 3D model to the "point cloud", skipping the render, but it is indeed a very promising technique.
@MrGTAmodsgerman 11 months ago
Basically, future games could render a high-quality interior similar to traditional texture/light baking such as in Unreal Engine, but then use it as Gaussian splatting, while at the same time having a defined playground that can't be exited, because Gaussian splatting will blur the area beyond it. This would offer an overall higher-quality photorealism experience than Unreal Engine's light bakes, for example. But the question is whether that could be merged with interactions inside that world so that it looks right, as the light on interactable props has to be rendered somehow. Also, over the years, light baking that actually looks photorealistic hasn't been used much in games, mostly by archviz companies only. Interesting what this could offer then. I guess film production or product visualizations benefit more from it.
@pocongVsMe 11 months ago
awesome content
@antoinelifestyle 9 months ago
You are a genius
@impactguide 11 months ago
Hey Olli! Thanks for the super cool videos you make, they are always a treat! Would you have an idea what the lowest amount of VRAM and CPU power are, necessary to view a Gaussian splatting scene? Or if there is still room for optimization on that front? I have a slight fascination for "beautiful graphics through optimization and new technology on old hardware", and I would think it would be super cool if something like this could be run on (very) low end hardware, like an older generation gaming console.
@OlliHuttunen78 11 months ago
Yeah! I haven't tried it yet, but it seems that Gaussian splatting can be viewed even on older and less powerful devices. Creating a Gaussian Splatting model requires an RTX-level card, but just viewing a pre-trained model could work on a less powerful PC too. The SIBR Viewer, for example, does work on other cards as well. I tried it on my nephew's gaming PC with a basic GTX card, and it worked very well! It would be interesting to find out how low-end a machine the viewer app would still run on.
@NeoShameMan 11 months ago
Unless you use compute, it's equivalent to a massive particles-only scene, which means tons of overdraw. If we can fit the code of a single splat inside the VU2 of the PS2, it's probably possible on PS2 with a cap at 4 million particles, lol. Overdraw, and therefore fillrate, is the main limiter for non-compute (compute would rasterize directly). Also, current scenes are probably the equivalent of using raw 4K texture density everywhere. We can probably find ways to optimize and compress scenes: first reduce "resolution" by dropping some particles, then find a way to compress and sort the rest into a nice format. If we can get rid of transparency too... I wonder how bad splatting looks without transparent shapes; it might be good enough for some use cases. If I were to try, I would first sort particles into a 3D grid, such that we can march the grid per ray and only fetch the relevant splats. Then I would try to work out how to mipmap every set in a grid cell, which could then load based on distance. Then I would trim the grid of empty spaces and find a compression method for chunks of the grid.
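The grid idea in this comment (bucket splats into a sparse 3D grid, then only fetch the cells a ray passes through) can be sketched in a few lines. This is a toy illustration with hypothetical function names; a real renderer would use an exact DDA traversal rather than fixed-step marching:

```python
import numpy as np
from collections import defaultdict

def build_voxel_grid(points, cell_size):
    """Bucket splat centers into a sparse voxel grid so a renderer
    can fetch only the cells a ray actually passes through."""
    grid = defaultdict(list)
    cells = np.floor(points / cell_size).astype(np.int64)
    for i, c in enumerate(map(tuple, cells)):
        grid[c].append(i)
    return grid

def cells_along_ray(origin, direction, cell_size, n_steps=64):
    """Crude ray march: step at half-cell increments and collect the
    distinct cells visited (a DDA traversal would be exact)."""
    direction = direction / np.linalg.norm(direction)
    ts = np.arange(n_steps) * (cell_size * 0.5)
    pts = origin + ts[:, None] * direction
    seen, order = set(), []
    for c in map(tuple, np.floor(pts / cell_size).astype(np.int64)):
        if c not in seen:
            seen.add(c)
            order.append(c)
    return order
```

Empty cells simply never appear in the dictionary, which is the "trim the grid from empty spaces" part for free.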
@impactguide 11 months ago
@@NeoShameMan By coincidence, I saw an article on Hackernews this morning explaining that using clustering, it is possible to quite easily reduce the file size of a Gaussian Splatting scene by about a factor 10 without much loss of quality. You can reduce even further, but then you start noticing artifacts, although I still think the images look pretty good. The author notes that in the bike demo scene from the original paper, 4 million gaussians were used... I haven't read the original paper yet, nor do I know a lot about real time rendering, but if a gaussian splat equals a single particle, then 4 million particles + reduced file size might not be outright impossible on the PS2... although you would probably have to also optimize the renderer, like you described. I don't think you can post links on youtube, but the article name was "Making Gaussian Splats more smaller". The idea of the author seemed to be to reduce the file sizes, so that gaussian splatting scenes could be used in Unity. Pretty cool!
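The clustering approach this comment refers to can be sketched with a toy vector quantizer: replace each splat's full-precision attribute vector (e.g. its RGB color) with a small index into a shared codebook. This is a minimal k-means from scratch with a simple deterministic initialization, not the article's actual implementation:

```python
import numpy as np

def kmeans_codebook(attrs, k=16, iters=20):
    """Toy k-means quantization of per-splat attributes (e.g. RGB colors):
    store a k-entry codebook plus one small index per splat instead of the
    full-precision values -- the gist of clustered splat compression."""
    attrs = np.asarray(attrs, dtype=np.float64)
    # Deterministic init: spread the seed centroids along the first coordinate
    order = np.argsort(attrs[:, 0])
    centroids = attrs[order[np.linspace(0, len(attrs) - 1, k).astype(int)]].copy()
    labels = np.zeros(len(attrs), dtype=int)
    for _ in range(iters):
        # Assign each splat to its nearest codebook entry
        d = np.linalg.norm(attrs[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Re-estimate each entry as the mean of its members
        for j in range(k):
            members = attrs[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels
```

Reconstruction is just `centroids[labels]`; with k=256 each index fits in one byte, versus three or more float32 values per splat, which is where the large size reduction comes from.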
@NeoShameMan 11 months ago
@@impactguide My assumption about Gaussians on PS2 is based on GameHut's video "Crash & LEGO Star Wars "Impossible" Effects - CODING SECRETS" (ytcode: JK1aV_mzH3A) and Aras's Unity implementation, which uses billboards. Gaussian rendering on a billboard is analogous to a stretched image, so we would simply stretch the billboard (using the matrix) and not do complex Gaussian rendering; we would let the alpha saturate by itself. But I think the bottleneck would be the bandwidth to pass the unique positions; in the video it's all procedural, so it can hit peak more easily. And I have no idea how feasible the spherical harmonics are, but a close approximation should be possible anyway.
@user-bs3jd2hj3z 10 months ago
Very good, I must try this Python stuff on my 4090
@ATomCzech 11 months ago
Very interesting workflow, and a great idea to try this way. Btw, Unreal Engine can compute lightmaps for every surface, calculating how much light falls on each surface using path tracing, and then you can basically change the location of the camera and get an instant raytraced result. It of course doesn't work when there are moving objects or lights in the scene, but for a static scene like this it is awesome. Would be great if Blender could do the same.
@Damian-rp2iv 10 months ago
Instantly thought about this on the first video I saw, especially as here you've taken the "classic" route. But what if points could be trained separately and then put together with some kind of 3D tool (a bit like the simple color-to-image AI we had a while ago)? I really think the still-image-to-point-cloud step is not the big thing (as it's the same as classic photogrammetry, obviously) and that 3DGS could lead to even crazier results with another kind of data source. But first, the obvious next step would be to generate all the needed points of view automatically and go straight from 3D rendering with maxed ray-casting to point clouds, as the 3D engine would be able to identify points by itself. I wonder how much better points would help this tech.
@harriehausenman8623 11 months ago
Shouldn't it be easier* to generate the point cloud directly from the 3d model? 🤔 *(veeery relative word here 😄)
@user-jk9zr3sc5h 11 months ago
I agree. I like NeRFs, but couldn't you just run this in real time using Unreal? I'm unsure of the benefit of NeRF here.
@AnotherCyborgApe 11 months ago
@@user-jk9zr3sc5h I see it as a "two papers down the line" situation. This is the best nerf-like thing available today, significantly better than what was available 2 months ago. On the practical side of things, we have path tracing combined with AI-based denoising/upscaling/ray reconstruction that's made its way into mainstream games, and that's likely to remain the "sane" path to high quality real time rendering for a while. But it's easy to start daydreaming about where successor technologies of gaussian splatting might take us, and this video gives us a little taste of what could be, even with its awkward "ok let's just compute 260 renders and pretend we don't know the geometry and compute it again" approach.
@NeoShameMan 11 months ago
It's very relative. Gaussian splats encode light rays, not mesh surfaces. The issue is the placement of the Gaussians: the Gaussians are triangulated using 2D images. We could probably triangulate by casting sampling rays from sampling points and try to find a minimizing function to figure out the best placement. 3DGS is very similar to a light probe volume, with the caveat that it's the superposition of Gaussian splats that creates the final colors. I see the superposition as a big problem, but placement is probably close to an ambient occlusion problem.
@harriehausenman8623 11 months ago
👍@@NeoShameMan
@mat_name_whatever 11 months ago
It's so strange to see a rendered image being turned into a point cloud heuristically rather than with the already-available, 100% accurate depth buffer information
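The depth-buffer route this comment suggests is straightforward to sketch: with the renderer's pinhole intrinsics, every depth pixel unprojects to an exact camera-space 3D point, with no photogrammetry or feature matching. A minimal illustration (function name hypothetical):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a rendered depth map into camera-space 3D points using
    pinhole intrinsics -- the renderer's exact geometry, no reconstruction."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # standard pinhole inverse projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Transforming these points by the camera-to-world matrix and merging over a few rendered views would give a dense, noise-free initialization cloud.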
@lowellcamp3267 10 months ago
With computer-generated geometry like in this example, I wonder if it would be better to ‘manually’ place gaussians according to real scene geometry, rather than using photogrammetry to place them.
@cjadams7434 10 months ago
This is where an M2 Mac Studio with 192 GB of shared VRAM and a "neural engine" has an advantage
@longwelsh 11 months ago
Great video. I'm surprised by the lack of ports of these tools to Mac since the unified memory architecture means one could give the graphics 50-60GB of free memory. I still have hopes for projects built on Taichi as hopefully their backend could be ported from CUDA to Vulkan.
@kewa_design 11 months ago
Yeah it’s very weird
@italomaria 11 months ago
I am absolutely fascinated by the potential of this stuff. I had a couple of questions, and if anyone has ideas or answers that'd be awesome. 1) Does Gaussian splatting work only on freeze-frame moments, or would it be possible to record a real-world event (say a ballet dancer) from multiple fixed angles and then play it back in 3D? 2) Would it be possible to integrate this with AR or VR and be able to walk around pre-recorded events?
@NeoShameMan 11 months ago
1) Yes, but it's costly: you would have to capture every frame with enough views, and each frame would cost the same as a single 3DGS. There are probably ways to compress, but you would have to invent them, or hire a programmer. 2) Yes, it's been done. The problem is the raw file size; see if using fewer splats can retain good enough quality to fit in memory.
@OlliHuttunen78 11 months ago
I recommend following Infinite-Realities on Twitter (X). They have done experiments with animated sequences in Gaussian Splatting, and they have a special custom version of the SIBR viewer which I would be very interested to get my hands on. Check for example this: x.com/8infinite8/status/1699463085604397522?s=61
@italomaria 10 months ago
@@OlliHuttunen78 Oh man, thanks for that recommendation. Love your work, super excited for all the insane stuff this new tech is opening up.
@DREAMSJPEG 11 months ago
Hi Olli, love your work and really appreciate your experiments. I have a somewhat similar question to the one you address in this video, but with pre-existing point cloud data: do you think it would be possible to create a Gaussian splatting from point cloud data like .las, instead of images?
@OlliHuttunen78 11 months ago
I’m not sure. Training also needs the source images to generate a Gaussian Splatting model; the point cloud itself is not enough. This source code uses .ply point cloud files, but I think any point cloud format can be converted to .ply.
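The conversion Olli mentions is mostly a matter of rewriting the header and rows. A minimal sketch of an ASCII PLY writer for xyz + RGB arrays (such as positions/colors pulled out of a .las file with a reader library); the function name is illustrative:

```python
import numpy as np

def write_ascii_ply(path, xyz, rgb):
    """Write an (N,3) float position array and (N,3) uint8 color array
    as an ASCII PLY point cloud."""
    assert len(xyz) == len(rgb)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(xyz)}\n")
        for axis in "xyz":
            f.write(f"property float {axis}\n")
        for ch in ("red", "green", "blue"):
            f.write(f"property uchar {ch}\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(xyz, rgb):
            f.write(f"{x} {y} {z} {int(r)} {int(g)} {int(b)}\n")
```

As Olli notes, this alone is not enough for training: the optimizer still needs posed source images to fit the splats against.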
@DREAMSJPEG 11 months ago
@@OlliHuttunen78 Thank you for the reply - appreciate it :)
@SuperCartoonist 11 months ago
Maybe in the future 3D over-the-air broadcasts will exist, or 3D live-streamed surveillance.
@cobracoder6123 11 months ago
It would be absolutely incredible if this could be incorporated with VR
@inteligenciafutura 11 months ago
I already had that idea, and thanks to you, now I know I can do it
@carpenterblue 11 months ago
Gosh, I don't care for realism, what I want is to hand paint the world in 3D!
@Mr_i_o 11 months ago
over 9000!
@FredBarbarossa 11 months ago
Really interesting. I need to test this at some point. Do you know if this works with AMD cards as well?
@OlliHuttunen78 11 months ago
Well, this Gaussian splatting generation relies very heavily on CUDA, so I don't really think it would work on AMD cards, at least not yet. But the pre-calculated Gaussian model can at least be viewed on AMD cards as well. At least I would think so.
@FredBarbarossa 11 months ago
@@OlliHuttunen78 Thanks. I know that at least on Linux, using ROCm, you can run PyTorch, but not on Windows yet as far as I know.
@Kaalkian 11 months ago
@@OlliHuttunen78 This was an awesome video!!! Is there a repo of pre-calculated models that can be viewed? What format are these files? If it's *.ply, can we use any viewer?
@spyro440 11 months ago
This might become big...
@jaakkotahtela123 11 months ago
This technique could also be used to build 3D games. You could develop truly gorgeous games, even though this has many limitations compared to a polygon-based implementation. For example, a flight simulator could work quite well, or a driving game. You could make really realistic-looking game worlds, since you could first render a very detailed and beautifully lit world in, say, Unreal Engine, and then convert it into a point model like this.
@narathipthisso4969 9 months ago
Wow 😮
@TheShoes43 11 months ago
Thanks for doing this video. I always assumed it would work in theory but was lazy and never created a scene to test it :). Great stuff. What about the refractions in the water? Do those still hold up and change on view change?
@philipyeldhos 10 months ago
I think right now within the scope of this project, 360 videos could be looked at for training data. Much more information and should fill in the gaps nicely. In the future it should be possible to extract texture information from the 3d file and use it along with the point cloud information to skip the pre-rendering entirely.
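The 360-video idea above would still need perspective views for training, but those can be resampled out of an equirectangular render. A toy nearest-neighbor version of that resampling (function name and the y-down latitude convention are assumptions for the sketch):

```python
import numpy as np

def equirect_to_pinhole(pano, out_size, fov_deg, yaw_deg, pitch_deg=0.0):
    """Resample a perspective (pinhole) view out of an equirectangular 360
    panorama -- one way to turn a single 360 render into many training views."""
    h, w = out_size
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)
    u, v = np.meshgrid(np.arange(w) - w / 2 + 0.5,
                       np.arange(h) - h / 2 + 0.5)
    # Camera-space ray directions, rotated by pitch (about x) then yaw (about y)
    dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    cy, sy = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))      # [-pi/2, pi/2]
    ph, pw = pano.shape[:2]
    px = ((lon / (2 * np.pi) + 0.5) * pw).astype(int) % pw
    py = np.clip(((lat / np.pi + 0.5) * ph).astype(int), 0, ph - 1)
    return pano[py, px]  # nearest-neighbor sample; bilinear would look better
```

Sweeping yaw/pitch over a grid of values would produce a whole ring of consistent training views from each 360 render.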
@zenbauhaus1345 10 months ago
genius
@JfD_xUp 6 months ago
This experiment is great; I will follow your future tests. I quit the computer graphics world, but I'm still keeping an eye on new techniques. Just one point: NeRF and 3D Gaussian Splatting are not exactly the same technique (noticed when I saw D:\NeRF\gaussian-splatting).
@UcheOgbiti 11 months ago
MetalFX, DLSS & FSR have proved that image upscaling is the future of real-time graphics. Could this same method be used to build a real-time engine comparable to Cycles, Arnold, Octane, etc.?
@NeoShameMan 11 months ago
Not really; it doesn't solve real-time lighting. It's more like baking: you have to render your scene normally into a few shots and bake it into 3DGS, like he did.
@joseph.cotter 11 months ago
Interesting for potential future prospects in generating real-time 3D under specific parameters, but currently you would get better results moving the scene into a real-time 3D engine like Unreal.
@unadalabs 8 months ago
Hey there, you have a very effective way of doing it. I would love to have ready-made scripts for this.
@ericljungberg7046 11 months ago
Do you think this would work if rendered in 360? I'm currently working on a project where I'm rendering out a bunch of very realistic 360 images for a restaurant, so cooks and personnel can learn the space before it's built. The idea struck me while watching this video that it would be cool to show the client the entire space in real time.
@henriidstrom 11 months ago
Interesting topic! But it would have been even better if you had added some computer specifications, for example what GPU and how much VRAM it has.
@OlliHuttunen78 11 months ago
GPU: Nvidia RTX 3070, 8 GB VRAM. PC: Asus ROG, Ryzen 7, 64 GB RAM.
@JuXuS1 6 months ago
great
@viniciusvmrx2845 10 months ago
The experimentation phase is always amazing. But at the moment it's faster to bring the model into Unreal Engine if we need high quality in real time.
@andereastjoe 11 months ago
Wow, FANTASTIC!!! This is definitely what I've been waiting for. I do architectural visualization and some interactive walkthroughs using UE5. My question is: is there any way to view this in VR?
@Thats_Cool_Jack 11 months ago
At the moment you can import it into Unity
@andereastjoe 11 months ago
@@Thats_Cool_Jack cool. Thanks for the info
@OlliHuttunen78 11 months ago
Yes. There is a recently developed plugin for Unreal in the UE marketplace. It is not free like the Unity plugin on GitHub, but with it you can probably make it work with Unreal's VR templates.
@andereastjoe 11 months ago
@@OlliHuttunen78 ok cool. Thanks for the info
@rotors_taker_0h 11 months ago
It should be possible to produce the point cloud without the intermediary step of COLMAP; Blender already has all that information, and it is redundant to restore it through the noisy process of rendering images and reconstructing back to 3D.
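The camera half of skipping COLMAP is also exact: Blender knows each camera's 4x4 world matrix, and COLMAP's images.txt just stores the inverse of that as a quaternion plus translation. A hedged sketch of the conversion (function name assumed; a real Blender exporter would additionally flip the look axis, since Blender cameras look down -Z while COLMAP cameras look down +Z):

```python
import numpy as np

def cam_to_world_to_colmap(c2w):
    """Convert a 4x4 camera-to-world matrix (what a DCC app knows exactly)
    into the world-to-camera quaternion (w, x, y, z) and translation that
    COLMAP's images.txt stores -- no feature matching needed."""
    w2c = np.linalg.inv(c2w)
    R, t = w2c[:3, :3], w2c[:3, 3]
    # Rotation matrix -> quaternion (trace > -1 branch only, for brevity)
    qw = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2
    qx = (R[2, 1] - R[1, 2]) / (4 * qw)
    qy = (R[0, 2] - R[2, 0]) / (4 * qw)
    qz = (R[1, 0] - R[0, 1]) / (4 * qw)
    return np.array([qw, qx, qy, qz]), t
```

With exact poses and an exact point cloud, the only thing COLMAP would still be adding is noise.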
@BardCanning 11 months ago
Is there any reason why this wouldn't be used to make games run with prerendered raytraced scenes at a high frame rate?
@OlliHuttunen78 11 months ago
Absolutely! I can't think of any reason why it couldn't.
@andreasmuller5630 11 months ago
@@OlliHuttunen78 It's blurry, it's not dynamic, it has a very big memory footprint, and it's not even clear to me that it's faster than something similar done with realtime RT in Unreal.
@3DProgramming 11 months ago
I guess some problems can also arise with collision detection; unless the scene is cleaned somehow, I suppose a lot of random points can be floating around.
@BardCanning 11 months ago
@@3DProgramming Isn't collision usually a separate invisible polygon layer?
@3DProgramming 11 months ago
@@BardCanning Yes, you are right, maybe it can be manually specified. I was thinking more of some automatic way to derive it from the data, but I suppose it is possible with some automatic cleaning.
@3DProgramming 11 months ago
Amazing! Is it possible to train the data with a graphics card with just 6 GB of VRAM?
@OlliHuttunen78 11 months ago
Yes! 6 GB can be enough up to a certain iteration point, and in the training you only need to reach 7,000 iterations to get a good enough Gaussian Splatting model. Even if the training crashes before 7,000, there are tricks to make it succeed, for example reducing the resolution of the training images.
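The resolution trick Olli mentions is a simple preprocessing pass over the training images. A minimal numpy-only 2x box-filter downscale as an illustration (in practice most people would just use Pillow or ImageMagick for this):

```python
import numpy as np

def halve_resolution(img):
    """Downscale an image by 2x with simple box filtering -- a quick way to
    shrink training images when VRAM is tight (crops to even dims first)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float64)
    pooled = (img[0::2, 0::2] + img[1::2, 0::2] +
              img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
    return pooled.astype(np.uint8)
```

Halving the resolution roughly quarters the per-image memory the trainer has to touch, which is why it helps a 6 GB card reach more iterations.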
@Kaalkian 11 months ago
@@OlliHuttunen78 I wonder if there is a relationship between the iterations and VRAM. For LLMs, 32B models fit in around 28 GB, and 4B models can run with as little as 4 GB. Not sure if it's related or a mere coincidence.
@3DProgramming 11 months ago
@@OlliHuttunen78 Thank you! I definitely have to try it. I tried NeRF with Nvidia NGP in the past and was already amazed by the results; this looks even more incredible! Thank you for your kind answer :)
@shadowproductions969
@shadowproductions969 11 ай бұрын
60 fps looked like it was locked; vsync maybe? Pretty great tech. I have seen many people getting 500-600 fps on photoreal-looking 3DGS models. Truly the beginning of the future of capturing 3D worlds.
@EmanueleBlackBulbToscano
@EmanueleBlackBulbToscano 11 ай бұрын
Yeah, V-sync. Disabling it, I get 120-150 fps.
@MDNQ-ud1ty
@MDNQ-ud1ty 10 ай бұрын
The better way wouldn't be going through 2D image space but computing the Gaussians directly from the scene. It would be faster and more accurate: you convert the geometry directly to Gaussians by sampling it, then do the training to reduce their number.
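The sampling step this comment describes could look something like the sketch below: seeding a point cloud directly from mesh geometry by drawing points uniformly over the triangle surface (area-weighted triangle choice plus uniform barycentric coordinates). This is only an illustration of the idea, not part of any existing 3DGS pipeline:

```python
import numpy as np

def sample_mesh_points(vertices, faces, n_points, rng=None):
    """Sample points uniformly over a triangle mesh's surface -
    one possible way to seed a Gaussian point cloud from geometry."""
    rng = rng or np.random.default_rng(0)
    tris = vertices[faces]                                  # (F, 3, 3)
    # triangle areas via the cross product of two edges
    areas = 0.5 * np.linalg.norm(
        np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
    # pick triangles proportionally to their area
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # uniform barycentric coordinates via the square-root trick
    r1, r2 = rng.random(n_points), rng.random(n_points)
    s = np.sqrt(r1)
    u, v, w = 1.0 - s, s * (1.0 - r2), s * r2
    chosen = tris[idx]
    return (u[:, None] * chosen[:, 0]
            + v[:, None] * chosen[:, 1]
            + w[:, None] * chosen[:, 2])
```

Each sampled point could then become an initial Gaussian center, with color and view dependence still needing to be fitted or path-traced afterwards.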
@technomaker777
@technomaker777 11 ай бұрын
Please make a tutorial on how to make a GS model! And about installing the software and what hardware you need.
@niiranen
@niiranen 11 ай бұрын
Fascinating, or as we say: "todella mielenkiintoista". I haven't looked into Gaussian Splatting before, and I was thinking it would be great if, for example, that room could be exported as a runnable file so a client could open it on their own machine and move the camera around. How big are the files that Python computes?
@OlliHuttunen78
@OlliHuttunen78 11 ай бұрын
This is very new technology; the source code was released at the beginning of September 2023, and there are still very few viewer applications. 3DGS models can get quite large, 800 MB - 1.3 GB, but a compression method has already been developed that makes the dataset size significantly smaller. It will be interesting to see where this goes.
@alkeryn1700
@alkeryn1700 10 ай бұрын
I wonder if an AI could use the point cloud directly so you could skip the training time.
@jad05
@jad05 11 ай бұрын
This, this is what I first thought of when I found out about 3D Gaussian splatting. Now I wonder how long until we get this, but animated?
@finnwilder3858
@finnwilder3858 5 ай бұрын
Olli, you're so freakin smart
@axelarnesson5066
@axelarnesson5066 11 ай бұрын
I genuinely think we will be moving away from polygon-based rendering in the future, replacing it with a combination of voxels and Gaussian splatting. These have huge advantages over polygons when it comes to immersive and augmented realities, which is where the world seems to be heading.
@davidmcsween
@davidmcsween 11 ай бұрын
Back to spline modelling or SDFs then; no more vert pushing 😅
@Jokker88
@Jokker88 11 ай бұрын
For static visualizations maybe, but for games with dynamic environments and dynamic lighting, this is not a viable option.
@EmanueleBlackBulbToscano
@EmanueleBlackBulbToscano 11 ай бұрын
3DGS is great, but you basically have baked lighting and no mesh colliders. Basically useless in gaming projects, but it can have huge support in AR, VR, and artistic experiences.
@NeoShameMan
@NeoShameMan 11 ай бұрын
Collision meshes and visual meshes are already separate in games. Also, while the lighting is baked, since it's using SH we can add dynamic light to them fairly easily; shadows, translucency, and correct material light response might be harder. @@EmanueleBlackBulbToscano
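For context on the SH point: 3DGS stores each splat's color as spherical-harmonics coefficients, so color varies with viewing direction. A minimal degree-1 evaluation sketch is below, using the standard real SH basis constants and following the 0.5 color offset used by the reference implementation; the function name is illustrative:

```python
import numpy as np

C0 = 0.28209479177387814   # Y_0^0 basis constant
C1 = 0.4886025119029199    # degree-1 basis scale

def sh_color(sh, view_dir):
    """Evaluate degree-1 real spherical harmonics for one splat.
    sh: (4, 3) coefficients (DC term + 3 linear terms, one column per RGB);
    view_dir: unit-length 3-vector from the splat toward the camera."""
    x, y, z = view_dir
    rgb = (C0 * sh[0]
           - C1 * y * sh[1]
           + C1 * z * sh[2]
           - C1 * x * sh[3])
    # shift into [0, 1], matching the reference implementation's convention
    return np.clip(rgb + 0.5, 0.0, 1.0)
```

Because the color already responds to direction, relighting schemes can in principle modulate these coefficients, which is what makes adding dynamic light plausible even though the captured lighting is baked.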
@lukas7impreza
@lukas7impreza 11 ай бұрын
When can we import this into Blender?
@qbert4325
@qbert4325 11 ай бұрын
The 30,000-iteration model will definitely look like a render that can be viewed in real time.
@idcrafter-cgi
@idcrafter-cgi 20 күн бұрын
If you use CGI images, will it basically be a perfect splat scene, since there are no flaws in the data set?
@dsamh
@dsamh 10 ай бұрын
Like... it's cool... but what is it for? Are we going to see movies shot with some future-tech 3DGS cameras, maybe? Or maybe stronger AIs reinterpolating into an actual hybrid 3DGS-and-vector format, or "baking"?