Gaussian Splatting explorations

27,153 views

DataScienceCastnet

1 day ago

Comments: 59
@basiliotornado · 1 year ago
I never would've thought style transfer possible like that! Nice video!
@ajamoffatt · 1 year ago
Jonathan, thank you for the great walkthrough. The video structure is excellent. It is also really cool work you're doing to stylize the base results!!
@Wilqposnb · 1 year ago
I like this guy's personality.
@camtrik3686 · 1 year ago
Very nice video, thank you! Can you also share the notebook code you use in the video?
@dyllanusher1379 · 1 year ago
I love the pace, the comedic timing, the content and animations!
@Mr_NeRF · 1 year ago
Very nice video with a very good explanation of the key concepts! Especially the SH are nicely explained.
@datasciencecastnet · 1 year ago
Ha, guess who forgot to add the visual aids during the intro..... hand gestures will have to suffice ;) Paper and code show up from 6 minutes in.
@grantbaxter554 · 1 year ago
Very cool, very well explained, thank you!
@georhodiumgeo9827 · 1 year ago
Bro, your ChatGPT history looks a lot like mine, just tons of insanely nerdy stuff. How fast is the retraining process? Like, could you retrain a scene during gameplay with a good GPU, especially if you had some trained Gaussians already in place?
I've been wanting to make a project like this for a while, but I can't figure out a way to do it with mesh objects. What I want to do is have an AI NPC on level 1 start talking to you about something, anything really, and let the player lead the conversation. Let's say you mention clouds; then it starts googling clouds and starts training based on that as a prompt. By the time you get to level 2 it's ready, and level 2 is based on some structure but everything has morphed into clouds. That would be nuts when you realize you are manifesting your own gameplay through what you talk about with the NPC.
I was thinking about having some primitive sphere for most objects, then using picture-to-depth from Stable Diffusion to help reshape the mesh and wrap a new texture on it. I think it could work on a high-end computer, but this method seems like it would be an almost dreamy, hazy way to do it. If you could buy some time in cutscenes and dialog interactions it might be possible. Anyway, this whole process for rendering is just nuts. Trying to wrap my brain around it.
@monstercameron · 1 year ago
This is crazy interesting, but no one is talking about it.
@rmt3589 · 1 year ago
People just started talking about it. This is the second video on it I've watched today.
@hdslave · 1 year ago
All GitHub videos are like that. No one cares until it's integrated into some kind of consumer app. AI stuff was the same way: GAN videos got no plays and no one was talking about AI until DALL-E and Midjourney came out.
@Instant_Nerf · 1 year ago
We are busy working with it.
@nicholassmit6875 · 1 year ago
One month later, my feed is literally filled with it 😁
@呂皓恩 · 11 months ago
It is starting to blow up.
@MikeTheAnomaly · 1 year ago
Wonderful explanation! Thank you!
@lion87563 · 11 months ago
The guy with the worst image quality ever explains the technique that produces the best image quality ever. Just a geek joke; thank you for such a nice presentation!
@WhiteDragon103 · 1 year ago
Do the Gaussians model view-direction-dependent effects, or are they best suited to representing only Lambertian materials? E.g. can specular highlights, refractions, non-planar reflections, Fresnel effects, etc. be modelled using these? If so, how? As far as my understanding goes, these Gaussians are basically single-colored blobs, similar to billboards (as used by particle effects in games).
@datasciencecastnet · 1 year ago
They model directional effects using something called spherical harmonics, where the color of a Gaussian depends on the viewing angle. It isn't perfect, but it lets you get the appearance of reflections and shine.
@WhiteDragon103 · 1 year ago
@@datasciencecastnet I see. SH is efficient for representing lighting as used by diffuse (e.g. Lambertian) surfaces, as you only need 4 parameters per color channel. However, I've seen their technique produce sharp reflections (the red sports car has a sharp reflection on the hood). For this to be possible using vanilla SH, you'd need an impractical number of parameters per Gaussian. Are they perhaps passing the output color returned by the SH calculation through a clamp or sigmoid, so that sharp edges can form when using very high-magnitude SH coefficients?
@datasciencecastnet · 1 year ago
@@WhiteDragon103 They use 3rd-degree SH (so 16 coefficients in total); as far as I know, no extra clamping or activation functions.
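As a rough illustration of what those 16 coefficients do, here is a minimal NumPy sketch (not the paper's CUDA implementation; the function name and array layout are made up for this example) of evaluating a degree-3 real spherical-harmonic colour for a given viewing direction. The constants are the standard real-SH normalisation factors; an actual renderer may add an offset or clamp the result.

```python
# Minimal sketch: view-dependent colour from degree-3 real spherical harmonics,
# 16 coefficients per colour channel, evaluated at the normalised view direction.
import numpy as np

# Standard real-SH normalisation constants up to degree 3.
C0 = 0.2820947917738781
C1 = 0.4886025119029199
C2 = [1.0925484305920792, -1.0925484305920792, 0.3153915652525200,
      -1.0925484305920792, 0.5462742152960396]
C3 = [-0.5900435899266435, 2.8906114426405540, -0.4570457994644658,
      0.3731763325901154, -0.4570457994644658, 1.4453057213202770,
      -0.5900435899266435]

def sh_color(coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """coeffs: (16, 3) SH coefficients for one Gaussian (one column per RGB channel);
    view_dir: direction from the camera towards the Gaussian."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    xx, yy, zz, xy, yz, xz = x * x, y * y, z * z, x * y, y * z, x * z
    # One scalar basis value per coefficient; the colour is the weighted sum.
    basis = np.array([
        C0,                                        # degree 0 (view-independent part)
        -C1 * y, C1 * z, -C1 * x,                  # degree 1
        C2[0] * xy, C2[1] * yz,
        C2[2] * (2 * zz - xx - yy),
        C2[3] * xz, C2[4] * (xx - yy),             # degree 2
        C3[0] * y * (3 * xx - yy), C3[1] * xy * z,
        C3[2] * y * (4 * zz - xx - yy),
        C3[3] * z * (2 * zz - 3 * xx - 3 * yy),
        C3[4] * x * (4 * zz - xx - yy),
        C3[5] * z * (xx - yy),
        C3[6] * x * (xx - 3 * yy),                 # degree 3
    ])
    return basis @ coeffs  # (3,) RGB before any offset/clamp a renderer may apply

# Example: a Gaussian whose colour shifts with the viewing angle.
coeffs = np.zeros((16, 3))
coeffs[0] = [1.0, 0.2, 0.2]   # base colour (degree-0 term)
coeffs[3] = [0.0, 0.0, 0.3]   # degree-1 term: blue channel varies with the x direction
print(sh_color(coeffs, np.array([0.0, 0.0, 1.0])))
print(sh_color(coeffs, np.array([1.0, 0.0, 0.0])))
```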
@itsm0saan · 1 year ago
Thanks for the video ❤! As a suggestion, I would like to see something about PEFT (LoRA, quantization, .....)
@er-wl9sy · 1 year ago
Thanks. Could you go over the CUDA code as well?
@BradleySmith1985 · 1 year ago
This will be the new live AR option.
@adityakompella9203 · 1 year ago
Would it be possible to post the notebook that you walk through in the video?
@naninano8813 · 1 year ago
Well, spherical harmonics (thanks for the explanation btw) are cool and all, but they are not an adequate way of modeling light caustics. If you have something refractive in your scene, like a half-full glass of water, I wonder whether splats can represent how objects behind it would warp. And re-lighting a splat scene (adding a light source post-capture) is tricky IMO.
@HaiweiShi · 6 months ago
Hi, really good video. Could you share the Jupyter notebook you showed in this video? I would be so grateful!
@pouljensen2789 · 1 year ago
Can splats emit light (instead of just reflecting it)? If not, how difficult would that be to implement? I'd like to try modelling aurora, which would correspond to fully transparent splats emitting light.
@DiegoBarreiroClemente · 1 year ago
Great video and explanation, Jonathan! I really enjoyed your approach with CLIP to be able to modify Gaussians given a text prompt. Do you have a link to the Jupyter notebook you used in the video?
@MattCruikshank · 1 year ago
Can you say the final size of the scene, especially compared to the size of the input images and the number of iterations? Would it be viable to embed the scene representation into a video game, for instance, or are they enormous? (Or would we need to use something like wavelets to represent the scene, and transform back into these harmonic values before rendering?)
@Nik-dz1yc · 1 year ago
I'm curious though: weren't differentiable renderers with spherical harmonics already a thing since like 2008?
@quyet-65cs3buivan8 · 7 months ago
Very interesting, thank you! Can you also share your notebook code from this video?
@Ali-wf9ef · 10 months ago
It would've been cool if you could visualize which point in the scene you are showing the spherical harmonics for.
@ParinithaRamesh-qf2ig · 6 months ago
Can you please share the git repo for all your code? It would be great to follow along and see the results on my end.
@aintgonhappen · 1 year ago
The description is missing the link to the paper website 😥
@specyfickRC · 10 months ago
Can you share the code you made for this video?
@gridvid · 1 year ago
Can you also model, texture, light and animate, then render with this tech?
@Dan-gs3kg · 1 year ago
It's a voxel representation, so the notion of texture gets difficult to talk about since there are no texels. For lighting, the spherical harmonics could be of interest, as they implement anisotropy, allowing for reflections. The issue is how to do that efficiently; at base you can make the underlying representation live in linear color space, on top of which you can apply highlights and shading with traditional methods. For animation, you need to define animations in terms of point clouds instead of textured models.
@gridvid · 1 year ago
@@Dan-gs3kg But you could create fictional, photorealistic worlds in Blender, render those CGI scenes with Cycles from different perspectives, and use those images to create real-time representations with this tech... or not? Lighting is certainly interesting 🤔 Each Gaussian splat would need to have information about its roughness, I think.
@DamonHenson · 1 year ago
They're not voxels - they're an arbitrary number of objects in a given space at continuous positions. Voxels are defined along more of a discrete grid. @@Dan-gs3kg
@optus231 · 5 months ago
Where is this? "GS website (with links to paper):"
@mort_brain · 1 year ago
So it doesn't use any polygonal meshes, so you can't use it with other techniques, or can you?
@datasciencecastnet · 1 year ago
No meshes or triangles here! There is some work being done on how to extract meshes from these Gaussian representations, but it's tricky to do that well.
@pajeetsingh · 1 year ago
High Fidelity? Fidelity?
@dl569 · 1 year ago
Thanks, very clear!
@Mitobu1 · 1 year ago
I wonder if this would work with a Laplacian distribution rather than a Gaussian 🤔
@laurenpinschannels · 1 year ago
What's your reasoning for doing that?
@Mitobu1 · 1 year ago
@@laurenpinschannels Well, Laplacian distributions are like really narrow Gaussians with long skinny tails, and they're a powerful tool used in source localization. They're also prevalent in how neurons organize themselves to support sparse coding, which can be applied in pruning neural networks.
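For intuition on that comparison, here is a small illustrative snippet (not from the video) contrasting the two 1-D densities at equal variance: the Laplace density has a sharper peak and much heavier tails than the Gaussian.

```python
# Compare Gaussian and Laplace densities with matched variance (sigma^2 = 2*b^2).
import numpy as np

sigma = 1.0
b = sigma / np.sqrt(2.0)  # Laplace scale giving the same variance as the Gaussian

def gaussian_pdf(x: float) -> float:
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def laplace_pdf(x: float) -> float:
    return np.exp(-abs(x) / b) / (2 * b)

# Sharper peak at 0, fatter tails far from 0.
for x in [0.0, 1.0, 3.0, 5.0]:
    print(f"x={x}: gaussian={gaussian_pdf(x):.2e}  laplace={laplace_pdf(x):.2e}")
```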
@yuvish00 · 9 months ago
Can you share the code please?
@philtoa334 · 1 year ago
Very good.
@李应健 · 1 year ago
Nice.
@ScientiaFilms · 1 year ago
Is there a way to continue training from a previously unfinished training run?
@flowerflower1154 · 1 year ago
The quality of your facecam looks like it's from 2002.
@Nekzuris · 1 year ago
The video looks like it's from before 2010.
@Autovetus · 1 year ago
So Prince Harry is now an IT nerd?? 🤔
@728GT · 1 year ago
😂
@ouroborostechnologies696 · 1 year ago
🙄