Gaussian Splatting! The next big thing in 3D!

249,491 views

Olli Huttunen

1 day ago

Comments: 374
@IRWBRW964 · 1 year ago
3D Gaussian Splatting is actually not a NeRF technology: there is no neural network. The splats are directly optimized through rasterization, rather than the ray-tracing-like method of NeRFs.
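A minimal toy sketch of what "directly optimized through rasterization" means: a few 2D Gaussians are fitted to a target image by gradient descent, with no neural network anywhere. (PyTorch and the 2D simplification are assumptions for illustration; the real method optimizes anisotropic 3D Gaussians with a tiled CUDA rasterizer.)

```python
import torch

H, W, N = 64, 64, 50
yy, xx = torch.meshgrid(
    torch.arange(H, dtype=torch.float32),
    torch.arange(W, dtype=torch.float32),
    indexing="ij",
)

# The splat parameters themselves are the learnable variables.
pos = torch.rand(N, 2, requires_grad=True)       # centers in [0,1]^2
log_sigma = torch.zeros(N, requires_grad=True)   # isotropic size, log-space
color = torch.rand(N, 3, requires_grad=True)     # per-splat RGB
logit_a = torch.zeros(N, requires_grad=True)     # per-splat opacity, logit

def render() -> torch.Tensor:
    img = torch.zeros(H, W, 3)
    for i in range(N):  # naive accumulation; the paper uses a tiled CUDA kernel
        cy, cx = pos[i, 0] * H, pos[i, 1] * W
        sigma = torch.exp(log_sigma[i]) * 4.0 + 1.0
        g = torch.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        img = img + torch.sigmoid(logit_a[i]) * g[..., None] * color[i]
    return img

target = torch.rand(H, W, 3)  # stand-in for a training photograph
opt = torch.optim.Adam([pos, log_sigma, color, logit_a], lr=0.05)
for step in range(300):
    loss = (render() - target).abs().mean()  # L1 loss (the paper adds D-SSIM)
    opt.zero_grad()
    loss.backward()
    opt.step()
```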
@pdjinne65 · 1 year ago
Looks like it's a new way to display point clouds, am I wrong? Still amazing and I have to try it!
@JB-fh1bb · 1 year ago
@pdjinne65 Right? I thought this Gaussian splatting technique was a new way to present the point data generated by NeRF.
@malfattio2894 · 1 year ago
Wow, it looks really damn good considering
@Blox117 · 1 year ago
so it will be faster too
@WWG1-WGA · 1 year ago
That means we can play even more with the neurons
@jimj2683 · 1 year ago
Imagine Google Street view built with this. It could then be used in a GTA type game with the entire world.
@ariwirahadi8838 · 1 year ago
You forget about Flight Simulator... it is generated from real map data.
@michaking3734 · 1 year ago
I bet in the next 20-30 years.
@florianschmoldt8659 · 1 year ago
There is no good way to use splatting with interactive light and shadow, or with animation. All the lighting is fixed together with the color information. So I guess this tech won't make it into gaming.
@strawberriesandcum · 1 year ago
@valcaron and most of it is missing
@p5rsona · 1 year ago
Imagine it in VR: live crowds, AI NPCs, being able to go into any building.
@filipewnunes · 1 year ago
I spent lots and lots of hours of my life unwrapping UVs and correcting meshes to use in my archviz projects. The amount of development in this field is insane. And we are in the first days of this. What a time to be alive.
@OlliHuttunen78 · 1 year ago
My thoughts exactly. Many things are changing very fast now. Although this does not yet create anything from scratch: for these NeRFs you still need something existing, which is turned into 3D by taking pictures of the real world. Traditional modeling certainly still has its place when creating something new.
@captainflimflam · 1 year ago
I got that reference! 😉
@loleq2137 · 1 year ago
Ah, a fellow Scholar!
@MagicPlants · 1 year ago
well said!
@nekosan01 · 1 year ago
Photogrammetry is very old; I don't know why you only know this marketing stuff and are enjoying it when it's much worse than RealityCapture and other apps, which don't require an expensive video card. You can also import into sculpting software to fix the mesh and project UVs very easily, unlike this garbage.
@crs11becausecrs10wastaken · 1 year ago
If scanning software is actually capturing and rendering details as fine as the leaves of plants, without all of the artifacts, then that is absolutely mind-blowing.
@JimmyNuisance · 1 year ago
I fell in love with splat engines when I spent time in Dreams on the PSVR. It's fantastic for creatives; it makes it very easy to make new and unseen surfaces.
@kazioo2 · 1 year ago
That renderer went through so many changes and iterations (also after they made public explanations) that I'm not really sure how much of typical splatting is still used there. There is a lot of conflicting information about it.
@4Gehe2 · 1 year ago
OK, I did a quick read of the paper. This is a clever thing, but keep in mind that it doesn't preserve details so much as make them up (the explanation is in chapters 5.1 and 5.2 and figs. 2, 3 and 4 of the paper). Basically you reconstruct the environment not by analysis but by statistically informed guesses, and then analyse whether each guess was too big or too small by referencing the result against the original data. If the guess was too small you duplicate it near the point; if the guess was too great you divide it in two. So if you need to estimate a curve, instead of actually solving the curve you keep guessing its shape, and because of the duplication and division of the guesses you approach the solution faster. It is important to keep in mind that you don't actually get THE solution; you get an approximation of the solution based on guesses.

This is basically the way you can do square roots and cube roots in your head to 2-3 decimals, by estimating upper and lower bounds and iterating. For those who don't know: if you want to estimate the square root of 6, you can calculate in your head that 2x2 is 4 and 3x3 is 9, so the solution is between those; then you do 2.5x2.5 and get 6.25, which is more, so you know the solution must be less than that; 2.25x2.25 gives 5.0625... and so on and so forth. You will never practically reach the exact 2.449489743, because we only go to 3 decimals, but let's be honest, a 0.04% error is more than enough.

To simplify a bit: imagine you are sculpting with clay and want to replicate a shape, but you can only add or remove material instead of shaping it with your hands. If you have too much material you cut away half of the amount you know to be too much; if you added too little you add another same-sized lump. And you keep repeating this until you get a close enough approximation of the thing you are replicating.

What is important to keep in mind is the limitations of this. You can't replicate things accurately, for the simple reason that if you lack information on details you can't just guess them! Your data resolution doesn't increase; you only actually know the datapoints you gathered. So for historical, scientific or engineering purposes you will not be able to get any extra information (and I hope people realise this before they try to use details from this in a court of law or something); you really can't know anything more from this than you can get from just looking at the frames as pictures.
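The mental square-root arithmetic described above is interval bisection; written out as a short illustration (plain Python, not anything from the paper):

```python
def approx_sqrt(x: float, iters: int = 20) -> float:
    """Estimate sqrt(x) by repeatedly halving an interval known to contain it."""
    lo, hi = 0.0, max(1.0, x)      # sqrt(x) lies somewhere in [lo, hi]
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mid * mid < x:
            lo = mid               # guess too small: raise the lower bound
        else:
            hi = mid               # guess too large: lower the upper bound
    return (lo + hi) / 2.0

print(approx_sqrt(6.0))  # ~2.449489, homing in on 2.4494897... as iters grows
```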
@linuxmill · 1 year ago
Gaussian splatting has been around for many years. I used it in the late 90s. It's a method of generating implicit functions, which can then be contoured.
@MatteoMi · 1 year ago
I'm not a specialist, but I suppose this is similar to VR, which has also been around since the 80s, but the tech wasn't mature enough. I mean, maybe.
@EmileChill · 1 year ago
@linuxmill I used Autodesk 123D Catch, which isn't available anymore. I believe it was the same kind of technique, but I'm not 100% sure.
@danielalorbi · 1 year ago
Yup, the new thing here is using it to render radiance fields in real time
@EmileChill · 1 year ago
@danielalorbi That's incredible!
@stephanedubedat5538 · 1 year ago
While the technique is not new, its application to NeRF is.
@TorQueMoD · 1 year ago
Great video! The RTX 3070 has 8GB of VRAM though, not 4. I'm super excited to see where NeRF will take us in another 5 years! It's a boon for indie developers who don't have the time or budget to create high-quality assets.
@stash. · 1 year ago
It varies, I have the 6GB 3070 model. Edit: turns out I had the 8GB version, not the 6GB as I mentioned earlier.
@GrandHighGamer · 1 year ago
@stash. 4GB would still be incredibly low (and 8GB is already pitiful for a card that cost around $800), to the point where it wouldn't make sense for it to exist at all. At that point a 3060 would be both cheaper and potentially have 4x the memory. I'd imagine this was just a mistake.
@esaedvik · 1 year ago
@GrandHighGamer 8GB is perfectly fine for the use cases of 1080-1440p gaming.
@patjackmanesq · 1 year ago
2.7k subs is a ridiculously low number for such quality videos! Great work, brother.
@thenerfguru · 1 year ago
Thanks for the shout-out! You can now view the scene in the Nerfstudio viewer, which unlocks smooth animation renders.
@OlliHuttunen78 · 1 year ago
Yes, I just noticed your new video about it. I have to try it. Thanks, Jonathan!
@Ironside451 · 1 year ago
Reminds me of that moment in Star Trek Into Darkness when they are looking at security footage and are able to move around inside the footage just like this.
@TheCebulon · 1 year ago
The whole time, I thought I was watching videos and was wondering about the 3D. 🤣 Then it hit me: these ARE 3D renders. Absolutely stunning.
@jamesleetrigg · 1 year ago
If you watch Two Minute Papers, there's a new radiance field technique that is over 10 times as fast with better quality, so look forward to seeing this in VR/AR.
@Barnaclebeard · 1 year ago
Can't stand to watch TMP anymore. It's nothing but paid content and catchphrases. I sure would love a channel like the old TMP.
@primenumberbuster404 · 1 year ago
@Barnaclebeard fr 😢 Many of those papers are actually not even peer reviewed.
@Barnaclebeard · 1 year ago
@primenumberbuster404 And it's exceedingly rare that there is any analysis or insight anymore beyond "imagine what it can do two papers down the road!"
@Summanis · 1 year ago
Both this video and the TMP one cover the same paper.
@wozniakowski1217 · 1 year ago
I feel like those galaxy-like ellipses with feathered edges are THE new polygons and soon this rendering method will replace them, especially in the gaming industry. What a time to be alive
@pdjinne65 · 1 year ago
That depends. Can they support animation, rigging, fluids, etc.? Voxels are great but they still aren't the norm... Maybe it's just another great tool on the shelf.
@bricaaron3978 · 1 year ago
I would say the gaming industry would be the last area to use this method. It looks like this is a method of rendering; it has nothing to do with the generation and manipulation of 3D data.
@pdjinne65 · 1 year ago
@bricaaron3978 True... I would love to see a game made with NeRF point clouds rendered with this, though.
@bricaaron3978 · 1 year ago
@pdjinne65 Can you, in a few sentences, explain why NeRF point clouds are different from any other point cloud so that I don't have to research it, lol?
@pdjinne65 · 1 year ago
@bricaaron3978 NeRF is an algorithm that generates 3D models (or point clouds) from 2D photos, using neural nets. Pretty amazing stuff, but quite complex and not yet widely used. This technique seems to be just a way to display the results nicely, if I understand correctly. In theory one could make game environments using photos + NeRF as the input and this to render them; I'm pretty sure it'd look amazing.
@8eck · 1 year ago
I remember when I first tried NeRF. Since then, they have evolved to insane quality!
@Legomanshorts-c5o · 1 year ago
Thanks for linking to the NeRF guru. Could come in handy some day if I decide to try this!
@ChronoWrinkle · 1 year ago
Hot damn, it should be possible to extract depth, normals, and glossiness from such a capture. This is insane!
@stash. · 1 year ago
Bringing old family photos into 3D will be a huge market boom.
@o0oo888oo0o · 1 year ago
Great, the best videos about this niche of NeRFs etc. that I have found so far. Keep it up!
@vadimkozlov3228 · 10 months ago
Fantastic and very professional YouTube channel. I appreciate your work.
@damsen978 · 1 year ago
This is literally what will follow photographs and images in general: captured moments of your family and friends in full 3D. Now we need a device that captures these automatically with the click of a button.
@Oho_o · 1 year ago
Those Gaussian splats look like galaxies in space at 2:07... ;O
@LaVerite-Gaming · 1 year ago
It's beautiful that the first image I ever saw rendered this way is a Captain Haddock figurine ❤
@pan6593 · 1 year ago
Great summary, insight and practical example - thanks!
@MonsterJuiced · 1 year ago
This is fascinating! I hope there's going to be some kind of support for Blender/Unreal/Unity soon. I would love to play with this.
@Jackpadgett-gh8ht · 1 year ago
There is support for it! Volinga AI, look it up.
@romanograsnick · 1 year ago
Astonishing achievements have been made, which is great! I hope this leads set builders to make more models which can be traced and recreated in 3D space, keeping these sculpting jobs relevant. Thanks!
@HandleBar3D · 1 year ago
This is gonna be huge in real estate, once it’s a streamlined app on both ends.
@renko9067 · 1 year ago
This is basically how the actual visual field works. Overlays of sensations, sounds, and smells complete the illusion of subject/object. It is the zero dimension quantum wave field. The scene ‘moves’ in relation to the ‘eyes’ of an apparent subject.
@Eddygeek18 · 1 year ago
The next step is getting it working with animations and physics, and then you have a new game rendering method. I have always felt mesh rendering is limited and have been waiting for a new method such as this. I hope it's the one this time, since there have been quite a few duds in the past.
@0ooTheMAXXoo0 · 1 year ago
Apparently Dreams (2020) on PS4 uses this technique.
@Tattlebot · 1 year ago
Games consistently refuse to use new technologies, because teams don't have faith in leadership, and don't have the skills. Games are getting less featureful and interactive. Talented writers are negligible. The result is an oversupply of unsophisticated chew toys. No incentive to upgrade from 5700 XT type cards.
@catsnorkel · 1 year ago
Until this method can produce poly models that properly fit into a pipeline, I really don't see it being widely used in either the games or film industries, but I can see it being used a lot in archviz, for example.
@Eddygeek18 · 1 year ago
@catsnorkel I know what you mean: GPUs are designed for polygons and engines have very specific mechanisms for them, but I don't think it would take too much modification of existing software to make efficient use of the GPU for this technology. Both use techniques the hardware is capable of, so with investment I don't think it would take Unity or Unreal much more time to integrate this tech into their engines compared with poly-based rendering pipelines. Since it uses a scattering-field type of rendering, it shouldn't be much different.
@catsnorkel · 1 year ago
@Eddygeek18 Thing is, this technique does not support dynamic lighting, and isn't even built in a way that could be modified to support it. Same with animation, surfacing, interactivity etc. It is a really cool idea to render directly from point cloud data like this, skipping most of the render pipeline; however, the parts that are skipped are **where the game happens**.
@The-Filter · 1 year ago
Man, thank you for this video! This stuff is really next gen! Wow! And top-notch presentation! Very relaxing and informative!
@BlenderDaily · 1 year ago
so exciting! thanks for the explanation:)
@DailyFrankPeter · 1 year ago
All we need now is a scanner in every phone for taking those selfie point clouds and we'll be in the world of tomorrow.
@fontenbleau · 1 year ago
Technology from the Minority Report movie, shown 20 years ago; that's how long it takes to make.
@chosenideahandle · 1 year ago
Terve Olli! Another Finn with an awesome YouTube channel (I'm not including myself 😁)! Thanks for keeping us up to date on what is going on with this cutting-edge stuff.
@MommysGoodPuppy · 1 year ago
Yesss, I can't wait for this to be utilized in VR. I assume we could render absolutely insane detail in real time for simulating reality or having big-budget CGI movie visuals in games.
@jimmyf2618 · 1 year ago
This reminds me of the old "Unlimited Detail" video promising infinite rendering.
@TheABSRDST · 1 year ago
I'm convinced that this is how our vision works irl
@marco1941 · 1 year ago
Wow, now we'll see really interesting developments in video game production, and of course in the results.
@talis1063 · 1 year ago
I'm deeply uncomfortable with how fast everything is moving right now. Feels like anything you touch could become obsolete in months.
@flameofthephoenix8395 · 1 year ago
Except for farming.
@DisemboweII · 1 year ago
@flameofthephoenix8395 Or water. Or oxygen. Or physical existence.
@flameofthephoenix8395 · 1 year ago
@DisemboweII I figured he was talking about careers.
@Sc0pee · 1 year ago
If you mean traditional 3D modelling for gaming/movies or 3D printing, then no, at least not for the foreseeable future, because this technique doesn't produce mesh models, which are a requirement in games and movies for dynamic lighting, animation, surfacing, interactivity etc. It also requires you to have the object you want in real life to work with.
@luketimothy · 1 year ago
Just imagine a machine that can generate point clouds around itself at a rate of 60 per second, and a technique like this that can render those point clouds at the same rate. Truly 3D video. It would be amazing.
@endrevarga5111 · 1 year ago
Idea!
1. Make a low-poly 3D scene in Blender. It's a 3D skeleton. Use colors as object IDs.
2. Using a fast real-time OpenGL engine, quick-render some hundred images, placing the camera at different locations as if photographing a real scene for the 3DGS creation. Distributing the cameras should be easy using Geometry Nodes (see the sketch below).
3. Using these images, use Runway ML or ControlNet etc. to re-skin them according to a prompt. If possible, use one image to ensure consistency.
4. Give the re-skinned images to the 3DGS creation process to create a 3DGS scene. Et voilà, a 3D AI-generated virtual reality is converted to 3DGS.
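A rough sketch of step 2 above (assumptions: plain Python camera placement on a circle instead of Geometry Nodes, Blender's bpy API, and EEVEE standing in for a fast OpenGL-style renderer; names and paths are illustrative):

```python
# Run inside Blender's Python environment (bpy is only available there).
# Note: Blender 4.2+ renamed the engine identifier to 'BLENDER_EEVEE_NEXT'.
import math

import bpy
from mathutils import Vector

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'

# One camera, moved around the scene like a photographer circling a subject.
cam_data = bpy.data.cameras.new("orbit_cam")
cam = bpy.data.objects.new("orbit_cam", cam_data)
scene.collection.objects.link(cam)
scene.camera = cam

n_views, radius = 100, 10.0
for i in range(n_views):
    # Spread the cameras on a circle; a sphere/Fibonacci spiral works too.
    angle = 2.0 * math.pi * i / n_views
    cam.location = Vector((radius * math.cos(angle),
                           radius * math.sin(angle), 3.0))
    # Aim the camera at the origin of the scene.
    direction = (Vector((0.0, 0.0, 0.0)) - cam.location).normalized()
    cam.rotation_euler = direction.to_track_quat('-Z', 'Y').to_euler()
    scene.render.filepath = f"//renders/view_{i:03d}.png"
    bpy.ops.render.render(write_still=True)
```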
@MaxSMoke777 · 1 year ago
It's a cute way to make use of point clouds. I'm certain it'll be handy for MRIs and CT scans, but it's nowhere near as useful as an actual 3D model. You couldn't use it for video game models or 3D printing. It could be extremely useful for real-time point-cloud video conferencing, since it's so fast.
@catsnorkel · 1 year ago
Agreed. It will probably find a few niche use cases for certain effects that are layered on top of a traditional poly-based render pipeline, but it's not going to completely take over, probably ever. This is a technology developed for visualisation, and it isn't really suitable for games or film.
@NecroViolator · 1 year ago
I remember an Australian company making infinite graphics with something similar. They made games and other stuff. Can't remember the name, but it was many years ago. :(
@Datdus92 · 1 year ago
You could walk through your memories in VR!
@3dvolution · 1 year ago
It's getting better and better. That's an impressive method, thanks for sharing ;)
@michaelvicente5365 · 1 year ago
Ohhh, thanks for explaining. I saw a couple of things on Twitter and was wondering what this Gaussian splatting was about!
@Neura1net · 1 year ago
Very cool. Thank you
@EBDeveloper · 1 year ago
Glad I found your channel ;) .. nice to meet you Olli
@MotMovie · 1 year ago
Good stuff, mate. Very interesting indeed, and great to see such an in-depth look at things with self-made examples. As a side note, the music is a bit big for this; I mean, it's not a cure for cancer (just yet), so perhaps go a bit easier on the "Life will win again, there will be a beautiful tomorrow" soundtrack :p. Anyhow, cheers, will be back for more.
@domovoi_0 · 1 year ago
Incredible. Love and blessings!
@GraveUypo · 1 year ago
These are so good that you could probably use screenshots of these models to make 3D models with old photogrammetry software.
@Inception1338 · 1 year ago
One more time for Gauss to show the world who is the king of mathematics.
@lordofthe6string · 1 year ago
This is so freaking cool, I hope one day I can make a game using this tech.
@afti03 · 1 year ago
Fascinating! Could you make a video on the most relevant use cases for this type of technology?
@Dartheomus · 1 year ago
My mom walked into the room and asked what the hell I was doing. I told her to just relax. I'm Gaussian splatting.
@metatechnocrat · 1 year ago
Well one thing it'll be useful for is helping me examine images for clues to hunt down replicants.
@MarinusMakesStuff · 1 year ago
Awesome!!! Though for me, all that matters is getting a correct mesh; I couldn't care less about textures, personally. I hope mesh generation will soon also make leaps like this :)
@joonglegamer9898 · 1 year ago
Yeah, you're spot on, this is not new. There might be new elements to it, which is great, but I won't bat an eye until they come up with a perfect, easy-to-seam (seamless) UV-mapping model. We still have to make our models animatable, relying on low poly counts to get the most out of the CPU/GPU in any setup, so until then we can keep dreaming; it hasn't happened in 40+ years.
@i2c_jason · 1 year ago
It seems like there is a divergence in 3D modeling as AI comes online... the artistic 3D formats with no geometric accuracy seem to be leading, but when will we get 100% geometrically correct AI output, such as STEP files? STEP files are extremely complex to create and parse, so will this be 5-10 years out before we get such a thing as AI output?
@tristanjohn · 1 year ago
Absolutely phenomenal!
@EveBatStudios · 1 year ago
I really hope this gets picked up and adopted quickly by companies that are training 3D generation on NeRFs. The biggest issue I'm seeing is resolution. I imagine this is what they were talking about coming in the next update of Imagine 3D. Fingers crossed; that would be insane.
@costiqueR · 1 year ago
I tell you this: it is a game changer for the industry...
@catsnorkel · 1 year ago
Depends on the industry, though. Archviz: yes, absolutely. Games and film: it will only really have a minor impact, since it isn't really geared towards those use cases.
@perspectivex · 1 year ago
There are undoubtedly many people using this for assets in games or for walkthroughs or whatever, and that's great for them, but some non-small fraction of people will see this and think they can make 3D-printable models with this technique from fewer photos than plain old photogrammetry needs. So far I've never seen a clean mesh come from a NeRF technique unless it was also made from as many photos as you'd need for photogrammetry. For example, a friend was just using the Luma Labs NeRF app to model a sculpture, and it had him take something like 70 photos while moving around the object, telling him where to shoot from and whether a photo was bad. He was very disappointed, since it was nothing like the hype that makes you think you can make models (actual models, the mesh, not just a visual) by just sort of waving your phone around an object while it records video.

Which is all to say: this video should show the underlying resultant meshes without the texture mapping on top, so people aren't fooled into thinking the mesh will look anywhere near as good as the visuals seen here. Or maybe I'm wrong, and the meshes of the models made in this video are highly detailed and accurate and made from just a few quick passes around the object while taking video... but I doubt it.
@OlliHuttunen78 · 1 year ago
Hi! Well, these NeRF models are volume models in the first place, and this Gaussian splatting thing is just another visual trick to make them look even more accurate in 3D. They should not be confused with 3D mesh models. A radiance field can be converted to a surface-model form, but it is not the same after conversion: it loses all the transparent and reflective qualities, and mesh models made out of it often look very poor and low quality. These are two different techniques for representing 3D models. NeRFs are not surface models that you can 3D print; NeRFs are volume models.
@perspectivex · 1 year ago
@OlliHuttunen78 Thanks for the reply. I think that's not clear in almost every NeRF-related video I've seen, and it'd be nice if people made more of an effort to dispel the idea that this is a new, easy way to make high-quality meshes from real-world objects. Simply flashing a quick view of the mesh would probably be enough.
@OlliHuttunen78 · 1 year ago
@perspectivex Yes, that is true. I have handled this topic a little in my video where I compare the Luma AI and 3Dpresso services: kzbin.info/www/bejne/g6SkaYhnd9uNgMk But this should be pointed out more clearly. I think it's a good subject that I could perhaps handle in my upcoming videos. Meanwhile, if you are looking for a good video-to-3D application, I recommend checking out the 3Dpresso service on the web. It converts a NeRF to a good-quality 3D mesh model, the best that I have seen so far. It is still in beta and the build of a model doesn't succeed every time, but when it does, the quality is good enough that after a few minor tweaks in Blender it can be 3D printed.
@taureanwooley · 1 year ago
Perforated disc layering at one point with Bézier curve translations and HDR data mining...
@Felenari · 1 year ago
Good watch. Subscribe earned. Haddock is one of my faves.
@eekseye666 · 1 year ago
Oh, I love your content! I should have subscribed the last time I came across your channel. I didn't, but I'm doing it now! )
@GfcgamerOrgon · 1 year ago
I can see Stable Diffusion going in that direction someday. I remember games that utilized cubemaps, and there is already a virtual enterprise that uses a form of point clouds instead of polygons. It would be wonderful to train such data into three-dimensional Stable Diffusion models.
@Savigo. · 1 year ago
Cubemaps are literally planar images projected onto a cube.
@GfcgamerOrgon · 1 year ago
@Savigo. Yes! That's where it all started! Like Lunacy/Secrets of da Vinci (2006) and others. Now, with this, instead of ordinary cubemaps we can overlap those images in very realistic static-light games without the performance loss of ray tracing, and even begin to train AI on these, so we can create interactive images to tell interactive stories!
@tonygardner4077 · 1 year ago
liked and subscribed ... hi from New Zealand
@icegiant1000 · 1 year ago
How long before micro drones are just buzzing up and down our bike paths, sidewalks, streets and so on, grabbing HQ images and beaming them to the cloud, and by the end of the day you can do a virtual walkthrough of the local fair, or the car dealership, or a garage sale on the other side of town, or the crowd at a football game? The only thing stopping us is CPU power and storage, and that is getting solved fast. Exciting times! P.S. How long before people stay home, just send out their micro drones, and view everything in VR at home? A lot safer than getting mugged.
@striangle · 1 year ago
Absolutely amazing technology! Super excited to see where the future takes us. Thanks for sharing! ...Side question: what is the music track on this video?
@GeekyGami · 1 year ago
This point cloud technology is much older than 2020. It has been tried on and off for a decade at this point.
@XRCADIA · 1 year ago
Great video man, thanks for sharing
@Moshugaani · 1 year ago
I wonder if the high demand for VRAM could be circumvented by using some other memory to compensate, like normal RAM or a part of your SSD?
@foxy2348 · 1 year ago
Amazing. How is this rendered? In what program?
@joelmulder · 1 year ago
Once video games and 3D software rendering engines start to use this… Oh boy, that’s gonna be something else
@eeveelilith · 1 year ago
I hope this comment is seen by many, but: neural radiance fields with five dimensions would have both a temporal (like a movie) and a multiversal (interactive) dimension. In other words, neural radiance fields are the most likely candidate for interactive simulations of reality, and perhaps foreshadow the underlying mechanisms of the reality this comment was typed in.
@wolfzert · 1 year ago
Wow, how nice, one more point to keep moving forward in AI.
@MilesBellas · 1 year ago
The entire VFX industry is under massive disruptive growth that now prioritizes INDIVIDUALS... a huge paradigm shift.
@mankit.mp4 · 1 year ago
Hi Olli, great video and thanks for the intro to such fascinating tech. What's your opinion on whether an Insta360 or a full-frame camera with a fisheye lens will provide a better result or workflow?
@OlliHuttunen78 · 1 year ago
Well, in the process where COLMAP is used to generate the point cloud, it doesn't like any kind of fisheye lens or round distortion in the images. The best way to train the model is to use source images from which all distortion has been removed. I'm not sure how Luma AI's new Interactive Scenes handle the material; it seems they can take all sorts of wide-angle video or 360 footage. I recommend trying it: lumalabs.ai/interactive-scenes
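For reference, a typical COLMAP command sequence for that undistortion step (a sketch based on common COLMAP usage, not taken from the video; the paths are placeholders and the flags should be checked against your COLMAP version):

```python
# Sketch of a COLMAP pre-processing pipeline that removes lens distortion
# before splat training. Paths ("images", "db.db", ...) are placeholders.
import os
import subprocess

def run(*args: str) -> None:
    subprocess.run(args, check=True)

os.makedirs("sparse", exist_ok=True)

# Estimate camera poses; the OPENCV camera model also estimates distortion.
run("colmap", "feature_extractor", "--database_path", "db.db",
    "--image_path", "images", "--ImageReader.camera_model", "OPENCV")
run("colmap", "exhaustive_matcher", "--database_path", "db.db")
run("colmap", "mapper", "--database_path", "db.db",
    "--image_path", "images", "--output_path", "sparse")

# Write undistorted, pinhole-camera images -- the distortion-free input
# that the training step wants.
run("colmap", "image_undistorter", "--image_path", "images",
    "--input_path", "sparse/0", "--output_path", "undistorted",
    "--output_type", "COLMAP")
```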
@yurygaltykhin6271 · 1 year ago
I am pretty sure that this (or a similar) tech, in conjunction with the development of neural engines, will finally lead to the creation of fully immersive, high-definition virtual worlds that are very cheap to produce. This is a mixed blessing to me because, in the near future, it will be impossible to distinguish artificial media from legitimate "real" images and videos. My bet is that we will soon see a legislative trend towards compulsory disclosure of the origin of an image or video when publishing, first for the mass media and later for any publications, including social media for the general public. Nevertheless, a few years from now I expect to see new video games that will make games made with Unreal Engine 5 look as unrealistic as, I don't know, Doom 2.
@imsethtwo · 1 year ago
A solution to the floating artifacts would be to just make procedural volumetric fog and use it to your advantage 😎
@Zebred2001 · 1 year ago
This is simulated "3-D" that depends on movement for the effect, not actual 3-D, which imparts depth perception to the viewer.
@MartinNebelong · 1 year ago
Great overview and certainly exciting times! 😊
@GuywithThoughts · 1 year ago
Perhaps it can't be used this way, but I'm really hoping for similar technology to dramatically improve the speed and accuracy of camera tracks. It would be amazing to just need a video recorded on a phone and get back the 3D camera track data and a point cloud of the environment.
@Misthema · 1 year ago
Unlimited Detail did this before it was cool. It also did not require high-end GPUs nor that much power; it worked fast with software rendering!
@georg240p · 1 year ago
Wasn't the Euclideon thing just regular point clouds? It can't capture any view-dependent effects like reflections.
@Misthema · 1 year ago
@georg240p Probably, yeah. I just meant that this tech existed way before it was adopted in any way.
@Jakeuh · 1 year ago
This will be able to be processed in real time at some point in the future. Just soak that in and rethink it. Apple Vision's "memory" playback.
@4n0nym0u5 · 1 year ago
I doubt this will replace 3D modeling anytime soon. There are artifacts which need to be cleaned manually, and you also lose control over parts of the model when animating. It's just a hollow model that looks ultra-realistic, nothing more.
@FastRomanianGypsies · 1 year ago
How is this different from applying a 3D Gaussian blur at each point? If that's what's happening, why not combine it with deconvolution to bring out sharper features after doing the blur?
@Lumaa_Lex · 1 year ago
I saw the pond from the Saint Petersburg Botanical Garden! Or was it a really shockingly accurate 3D representation of that pond? =)
@ziomalZparafii · 1 year ago
Closer and closer to the Esper from Blade Runner.
@abitw210 · 1 year ago
Trying to keep up with technology is really difficult these days. If I had known what would happen to 2D/3D graphics jobs, I would have gone for another job category and kept this as a hobby.
@GauravSharma-gt2gp · 1 year ago
If this is so amazing, then why did 360 videos fail to gain popularity?
@liliangimenez4461 · 1 year ago
How big are the files used to render the scene? Could this be used as a light field video format?
@soscilogical1904 · 1 year ago
What's the file size of a similar-quality NeRF vs. 3D files? Do scenes load faster into VRAM?
@mantis1412 · 1 year ago
Great if you want to represent a 3D space in near photorealism. I don't think this would be very good if you need to move objects, lighting and other effects around in the 3D space, as with a game.
@DavidKohout · 1 year ago
This really makes me feel like living in the future.
@DavidKohout · 1 year ago
This just confirms that we're living in the best times, from the start of phone technology to this.
@ukyo6195 · 1 year ago
Looks like we have found Kim Jong-un's secret YouTube channel.
@punktachtneun9743 · 1 year ago
How did you achieve the out-of-focus effect on the Tintin figure?
@chukukaogude5894 · 1 year ago
Now we put this together with VR or XR or whatever they call it, and boom: we can recreate old images and animate them.
@lolmao500 · 1 year ago
The next gen of graphics cards will apparently all have a neural network chip on them.
@IndyStry · 1 year ago
This is awesome! Is there a way to export this to an estimated polygonal model to use in 3D software?