Does training 3D Gaussian Splats Longer Make a Difference?

24,491 views

The NeRF Guru

1 day ago

Comments: 126
@8bvg300 (1 year ago)
I notice a common trend in your videos: showing stuff at postage-stamp sizes. Remember to drag the viewer window to a size that lets people see and appreciate for themselves whether there are significant detail increases.
@thenerfguru (1 year ago)
Yea. I get much better at this in the video I drop later today. I really do appreciate the feedback though.
@Moshugaani (1 year ago)
How do the reflections work with Gaussian splatting in your renders? I've seen videos where reflections on surfaces like water are treated as if the surface were just a window, with mirrored geometry beneath it. But here the reflections actually look like reflections on a surface!
@thenerfguru (1 year ago)
It's usually more like looking into a mirror. This specific one is still mirror-like geometry.
@electrochipvoidsoul1219 (1 year ago)
When it comes to deep learning, the relationship is definitely more logarithmic (that is, needing an order of magnitude more training to get noticeably better results). That is to say, 7k vs. 70k might be a better comparison.
@thenerfguru (1 year ago)
I think the point of diminishing returns hits much sooner. At 70k, it's going to be incredibly rough to find a lot of improvement. Not sure about this project, but in some, the data can diverge after too much training.
@catfree (1 year ago)
I thought 3D Gaussian Splatting wasn't using any deep learning/AI?
@KyleCypher (8 months ago)
@catfree There is AI being used for the training, but no AI is used while viewing.
@catfree (8 months ago)
@KyleCypher OK, thanks for clarifying!
@DGFig (1 year ago)
Hey, Jonathan! I noticed the same thing. 7k iterations is already very clear. Maybe creating a checkpoint at 14k iterations would be perfect, instead of 30k.
@thenerfguru (1 year ago)
Someone on Twitter who has been working on optimizing the project to run faster did a bunch of independent tests as well. He noticed that at 15k, the quality improvements are very minimal. I agree; I will set it to run to 14-15k in the future.
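If you want to try the 14-15k cutoff yourself, the reference graphdeco-inria trainer exposes it as a double-dash flag; a hedged example (flag names per that repo's README at the time, so verify against your checkout): `python train.py -s <dataset> --iterations 15000 --save_iterations 7000 15000`.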
@endrevarga5111 (1 year ago)
Idea! 1. Make a low-poly 3D scene in Blender; it's a 3D skeleton. Use colors as object IDs. 2. Using a fast real-time OpenGL engine, quick-render a few hundred images, placing the camera at different locations as if photographing a real scene for 3DGS creation. Distributing the cameras should be easy using Geometry Nodes. 3. Using these images, use Runway-ML or ControlNet etc. to re-skin them according to a prompt. If possible, use one reference image to ensure consistency. 4. Feed the re-skinned images to the 3DGS creation process to create a 3DGS representation of the scene. Et voilà, an AI-generated 3D virtual reality is converted to 3DGS.
@thenerfguru (1 year ago)
Cool idea!
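Step 2 of the idea above is easy to prototype even without Geometry Nodes; a minimal sketch in Blender's Python API (assumes the scene is centered at the origin; the count, radius, and elevation are arbitrary choices):

```python
import math
import bpy
from mathutils import Vector

N = 100          # number of viewpoints
RADIUS = 5.0     # distance from the assumed scene center at the origin

for i in range(N):
    theta = 2.0 * math.pi * i / N      # azimuth around the scene
    phi = math.radians(60.0)           # fixed elevation ring; vary per pass
    loc = Vector((RADIUS * math.cos(theta) * math.sin(phi),
                  RADIUS * math.sin(theta) * math.sin(phi),
                  RADIUS * math.cos(phi)))
    cam_data = bpy.data.cameras.new(f"cam_{i}")
    cam = bpy.data.objects.new(f"cam_{i}", cam_data)
    cam.location = loc
    # aim the camera at the origin (-Z is the camera's viewing axis)
    cam.rotation_euler = (-loc).to_track_quat('-Z', 'Y').to_euler()
    bpy.context.collection.objects.link(cam)
```

From there, each camera can be rendered in turn and the frames handed to the re-skinning and 3DGS steps.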
@mik0lvj (11 months ago)
Forest Scene looks crispy as hell
@thenerfguru (11 months ago)
Surprisingly captured with just my iPhone 13 Pro. No gimbal or anything with it.
@fpv_everyday (1 year ago)
'3D Gaussian' is cool. Thanks for the cool intro. I also watched your beginner's guide on it; I still have to try it out. Thanks for the nice video. Keep it up.
@AlphaSeagull (1 year ago)
"This is something you'd commonly see on command prompts." Me, eating pretzels, never/rarely using command prompts for anything: "Right, of course."
@thenerfguru (1 year ago)
😂
@wix001HD (1 year ago)
I still haven't figured out whether it would be possible to use any other methods to prepare the data for training, like using Agisoft Metashape to align the cameras and create the point cloud (.ply) that is used during training. COLMAP is extremely slow and not so accurate. Any thoughts?
@thenerfguru (1 year ago)
Currently, COLMAP is the only option. I’ll have to explore.
@narendramall85 (1 year ago)
@wix001HD How much time did it take for you? It took more than 2 hours for me on more than 250 images. I used a GPU with 32 GB of VRAM and 100 GB of RAM.
@basspig (1 year ago)
Can these scenes be exported as 3D geometry with textures to Blender?
@culpritdesign (1 year ago)
Not yet, but that is mentioned in the white paper as an interesting possible feature.
@mattizzle81 (1 year ago)
But the rendering this way is so much nicer; why would you want to? This is primarily a rendering technique, not a scanning technique.
@basspig (1 year ago)
@mattizzle81 I thought it was a 3D capture technology.
@mattizzle81 (1 year ago)
@basspig Not really, not for geometry. COLMAP already does that and is part of the pipeline as the first step, but COLMAP has been around forever.
@PhilipNee (5 months ago)
Kudos for all these great videos!
@jorisbonson386 (1 year ago)
Glad I'm not the only one whose desktop is a clusterfuck of icons
@manda3dprojects966 (1 year ago)
I cannot believe the AI can detect a smooth reflective surface. In the past, before AI existed, such a thing was impossible because the only option was point tracking without AI. Now that AI is here, everything changes, even reflective surfaces...
@thenerfguru (1 year ago)
So true! Go down the implicit neural representation rabbit hole. It’s a bright future for 3D reconstruction on scenes with featureless surfaces.
@oskarwallin8715 (3 months ago)
Would've been awesome to see an updated video on setting up a conda environment from a fresh Windows install (all dependencies). I've been running through your tutorial on setting it up, but I keep hitting a lot of snags (CUDA/PyTorch issues; environment variables not working well; COLMAP.exe needing to be moved from bin to lib; ffmpeg.exe needing an absolute path to the exe to be picked up; CUDA versions in the .yml files not existing in the right channels; submodule pip dependencies not working; etc.). Following your tutorial doesn't work straight out of the box anymore.
@stefanveselinovic4777 (1 year ago)
Can you explain what the iterations are doing? Could these splats be splatted onto a 3D mesh constructed from these points for faster rendering?
@thenerfguru (1 year ago)
Iterations are basically incremental steps to improve the output data. The algorithm is further refining the model to attempt to match the source input images. There are diminishing returns though. After a number of iterations, improvement is practically undetectable.
@Iostal (11 months ago)
@thenerfguru Is this independent of the scene size? My first dataset is of a large area, with ~3000 high-res photos taken from a video of a laneway. I'm halving the resolution, and after 100k steps the result isn't recognizable. Do you think the number of iterations required scales with the size of the dataset?
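To make the iteration explanation above concrete, here is a minimal sketch of what each training step does. The names are hypothetical: `rasterize` stands in for the project's differentiable splat renderer, and the real implementation also adds an SSIM term and periodic densification/pruning.

```python
import torch

def train(gaussians, cameras, images, iterations=15_000):
    """Refine splat parameters so renders match the input photos.
    'gaussians' is assumed to be an nn.Module holding positions,
    scales, rotations, SH coefficients, and opacities."""
    opt = torch.optim.Adam(gaussians.parameters(), lr=1e-3)
    for step in range(iterations):
        i = step % len(cameras)                      # cycle through training views
        rendered = rasterize(gaussians, cameras[i])  # hypothetical differentiable renderer
        loss = (rendered - images[i]).abs().mean()   # L1 photometric loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```

Each pass nudges the splats toward one photo, which is why early iterations improve things fast and later ones barely move the result.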
@whata7570 (1 year ago)
I would say it depends on your project, but I think 7k is good for me. On a side note, can this software use LiDAR e57 scan images to create the Gaussian splat models?
@thenerfguru (1 year ago)
e57 scan images? You would need source images and pose information. I guess you could use an e57 from a scanner in place of a sparse dataset, but I'm not 100% sure.
@oskarwallin8715 (3 months ago)
@thenerfguru I'm trying to look into this now as well.
@caleb5717 (1 year ago)
At 9:00 it looks like only one dash was used with the iteration flag. Would that maybe make it default to 30k for that scene? Either way, the detail this method creates is amazing.
@thenerfguru (1 year ago)
I think you are right. On my social media accounts I have posted a few high-quality render stills for comparison. It's a great way to compare.
@吕康杰 (10 months ago)
Thanks for your guide on using 360 videos. The latest Meshroom works with a small change to the command. I used tourist footage for training; there were people walking in the video. The loss decreased to 0.03 very quickly but went back up to 0.2 after many iterations (Iter: 140,000, Loss: 0.025812; Iter: 3,129,000, Loss: 0.203647). Should I delete some input pictures with too little scenery, or adjust some training parameters?
@HiHeat (1 year ago)
Hi! Can we somehow export this 3D data to professional software? Export the point cloud to 3D software such as 3ds Max, Cinema 4D, and the like? (If possible, it would be great if you recorded a video tutorial about exporting.)
@thenerfguru (1 year ago)
Not currently. Also, if you just want a dense point cloud, you can get that with photogrammetry tools. This is creating splats, which are like spheres that stretch and morph to fill the scene.
@Instant_Nerf (1 year ago)
@thenerfguru So we need a new type of file format to view, edit, etc., like we do with FBX and OBJ, plus software to support it that can also combine it with other model formats.
@coffeeeagle (1 year ago)
Pretty sure you can do cube marching to get a mesh, correct? I know you could with other NeRF tools.
@matemarschalko4768 (1 year ago)
@Instant_Nerf Maybe 10 years from now, games and game engines won't be rendered the same way with polygons and textures... it will be NeRFs, light fields, Gaussian splats, or something else.
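On the file-format question in this thread: the trained model is in fact saved as a (non-standard) PLY file, so it can already be inspected with generic tools. A hedged sketch using the plyfile package (the path layout and field names follow the reference implementation's checkpoints; verify against your own output):

```python
from plyfile import PlyData

# Checkpoint path as written by the reference trainer (assumed; adjust to your run).
ply = PlyData.read("output/<run>/point_cloud/iteration_15000/point_cloud.ply")
v = ply["vertex"].data  # structured numpy array, one row per splat

print(len(v), "splats")
print("position:", v["x"][0], v["y"][0], v["z"][0])
print("opacity (stored pre-sigmoid):", v["opacity"][0])
print("scale (stored as log):", v["scale_0"][0], v["scale_1"][0], v["scale_2"][0])
```

Standard mesh viewers will read the file but only show the splat centers as a point cloud; the anisotropic shape and view-dependent color live in the extra per-vertex attributes.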
@CharlesVanNoland (1 year ago)
You should be fullscreening the 3D render if you're going to put it on KZbin. I'm literally looking at a few inches of my screen to discern visual detail differences, and I'm watching this on a PC monitor. Imagine how small your rendering window looks for someone on a phone/tablet. Fullscreen capture or don't bother, because nobody else will be able to see anything worth seeing otherwise. We're just taking your word for it!
@jag24x (1 year ago)
Can you please do a video on how to use the video recording in SIBR? Perhaps also on a way to create a camera path in SIBR and then render the camera path. Thanks, keep the videos coming! :)
@thenerfguru (1 year ago)
The SIBR viewer is not great for animation flythroughs. Try using Nerfstudio. Here is how: kzbin.info/www/bejne/d2Kqk6yZn5WVjdksi=Oo5BM5KKIDHJAsbn
@murcje (1 year ago)
Great video again! I wonder if the quality could increase even more when training with higher-resolution pictures, since when training starts it will default to resizing to below 1.6K. Will try that.
@thenerfguru (1 year ago)
I do believe there are some diminishing returns. I would like to see the test. I will most likely give it a try. I don't think it's worth trying imagery above 4K though. You will be quite VRAM-restricted with such high-res input imagery. I would rather have more images from new viewpoints than higher-resolution imagery.
@murcje (1 year ago)
@thenerfguru I did a few tests and can confirm that in these cases it's not worth the extra hours of training at 4K instead of 1.6K. I did also notice a big difference in definition when going from 15k to 30k iterations. I have a couple of CodePen sites to compare the results if you are interested. Thanks!
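A note on the resize behavior mentioned in this thread: the reference trainer automatically downscales inputs wider than 1.6K pixels unless a resolution is passed explicitly, e.g. `python train.py -s <dataset> -r 1` to keep the original size (per the repo's documented `-r/--resolution` option; verify against your version, and expect much higher VRAM use at full resolution).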
@Thomason1005 (1 year ago)
Hmm, I wonder how low you can go without significant quality loss... 3000? 1000?
@thenerfguru (1 year ago)
Do you mean how quickly we approach diminishing returns?
@RogueBeatsARG (1 year ago)
Making maps will be so easy if this works with less RAM.
@realthing2158 (1 year ago)
Is there a way to convert the results to high-res geometry with textures? I suppose one way would be to take screenshots from different angles and use those with photogrammetry to create a 3D model. Somebody could automate that process and make use of new AI techniques to improve the photogrammetry output. It could produce good results. Of course, the best thing would be to render the NeRFs in real time directly, but I think we are some time off before that becomes mainstream, especially for animated objects. I need to use geometry for now in the project I'm working on.
@thenerfguru (1 year ago)
Currently, there is not a great way to produce high-quality textured meshes from 3D Gaussian splats. The goal of this project is novel view synthesis. There has been follow-on work to produce meshes; however, they are low-poly and you would need to manually texture everything. Photogrammetry is still the SOTA method for textured meshes. Have you tried Luma AI's mesh export?
@realthing2158 (1 year ago)
@thenerfguru Thanks for the reply. Yes, I have tried Luma AI's video-to-3D feature. I got fairly good-looking results for the NeRF, but the mesh was a bit too blotchy and lacking in detail to be directly usable. Development is happening so fast now, though; in a year we might be able to create anything in 3D using only a single image from Midjourney. :) And if it can all be rendered in extreme detail using NeRFs and Gaussian splatting, it would be mind-boggling.
@outlander234 (11 months ago)
@realthing2158 I wouldn't count on it. With developments like this, the rate of improvement is high in the beginning, but getting to that last 10-15% is a massive task, and 100% is probably impossible. Self-driving cars are the best example: sure, they have been capable for years, but actually getting them on par with humans and covering all the edge cases is still a daunting task, and nobody has achieved it yet for a reason, despite years of claims that they would.
@WhiteDragon103 (1 year ago)
How exactly are the Gaussian splats (which are just stretched-out blobs of solid color with blurry edges, similar to 3D brush strokes) able to model viewpoint-dependent effects like reflections on curved surfaces?
@thenerfguru (1 year ago)
Spherical harmonics. The splats are not uniformly colored.
@WhiteDragon103 (1 year ago)
@thenerfguru How many harmonics are used? I know SH can efficiently represent radiance appropriate for diffuse lighting, but for reflections with sharp edges you'd need a ton of parameters per Gaussian.
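For reference, the original implementation stores SH coefficients up to degree 3, i.e. 16 basis values per color channel (48 numbers per splat), so per-splat view dependence really is low-frequency; sharp-looking reflections emerge from many splats combining. A minimal sketch of evaluating view-dependent color from the first two SH degrees (the constants are the standard real-SH factors; degrees 2-3 are omitted for brevity):

```python
import numpy as np

C0 = 0.28209479177387814   # degree-0 real SH constant, 1 / (2 * sqrt(pi))
C1 = 0.4886025119029199    # degree-1 real SH constant

def sh_color(coeffs, d):
    """coeffs: (4, 3) array of degree-0/1 SH coefficients per RGB channel;
    d: unit view direction. Returns RGB before clamping."""
    x, y, z = d
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])  # ordering as in common 3DGS code
    return basis @ coeffs
```

Evaluating this per splat per frame is cheap, which is part of why the renderer stays real-time.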
@jonnygrown22 (1 year ago)
Can you share the videos you used to train the data?
@thenerfguru (1 year ago)
I can share the forest scene if you are interested! I’ll get around to it tonight and host it on my GitHub fork.
@jonnygrown22 (1 year ago)
@thenerfguru Thank you so much!
@jonnygrown22 (1 year ago)
Can you leave a comment replying to this once you've done that?
@Ranstone (1 year ago)
I understand radiance field rendering, and I still have no clue how it calculates the reflections...
@vmafarah9473 (1 year ago)
Why are you assuming we are on a 100-inch 4K TV? Why can't you stretch the viewport window?
@qshiqshi2958 (1 year ago)
Wouldn't one be able to output this in VR?
@thenerfguru (1 year ago)
Yes! I have a tutorial for how to get this in Unity. From there, you can integrate with VR.
@visualstoryteller6158 (1 year ago)
Do transparent surfaces make a difference? The car reflection is good, but I don't know about basic normal glass or subsurface-type materials.
@thenerfguru (1 year ago)
I haven’t tested many transparent surface scenes. I can try!
@MrGTAmodsgerman (1 year ago)
Does it still need an RTX-series graphics card to run, like NeRFs? And what is the minimum VRAM?
@thenerfguru (1 year ago)
I don't think so. You can run it on an A6000. What do you have? Training speed usually depends on the number of CUDA cores.
@MrGTAmodsgerman (1 year ago)
@thenerfguru I have a GTX 1080 Ti 11GB 😂
@NEOnuts (1 year ago)
Thank you for the tutorials and info. I would love to know a little more about how you are capturing data; my tests with splatting always get smoke (ghosts) after I train. Keep the videos up. Also, I'm trying to hack a way to export a camera path from Blender.
@thenerfguru (1 year ago)
Please let me know if you figure out the Blender hack. I will also be making a video on how to view this with the Nerfstudio viewer and create animations.
@thenerfguru (1 year ago)
I also need a video on capturing images. It all comes down to camera movement and consistency in lighting.
@narendramall85 (1 year ago)
@thenerfguru Please release that video on capturing images/video.
@murcje (1 year ago)
I believe Luma AI has a tool for getting a Blender camera to Nerfstudio and back. I tried it once but couldn't get it working; 99.99% sure that's just my lack of Blender skills.
@mihalydozsa2254 (1 year ago)
Interesting that in the Ferrari scene, when you go higher than the plane of the images, it does not know what to show. I don't remember anything like that with NeRFs; at least when I tried, it could reconstruct the view from the top from what it knew from the side. I guess it's because it does not know what the reflection would be. Maybe I just happened not to try something like that.
@thenerfguru (1 year ago)
Ideally, you would have more angles. By chance, I just got 3 image sets of a race car today. We’ll see how it looks!
@luke.perkin.inventor (1 year ago)
Great video, but you have to go full screen when showing the comparison.
@thenerfguru (1 year ago)
Yea, oops!
@ALexalex-ss4sb (1 year ago)
I'm sorry, but I don't understand what happens in this video. Are you AI-generating 3D scenes that good? I thought AI wasn't that advanced yet.
@thenerfguru (1 year ago)
These are not AI generated. These are generated from a set of input photos. Then, a scene is recreated volumetrically.
@kozyboiiii1341 (5 months ago)
I've been seeing PSNR and SSIM reported for 3DGS. How can I get those metrics?
@panonesia (10 months ago)
Is there any method to process a large dataset when we only have 8 GB of VRAM? I have a 3060 Ti, and the maximum is only 60-70 images (3000x2000 pixels). Maybe slow down the training speed or split the process? I always get an error message when processing 30k iterations (7k iterations succeeds sometimes). Looking for advice.
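A hedged note on the VRAM question above: with the reference trainer, the usual levers are loading images at reduced resolution (e.g. `python train.py -s <dataset> -r 2` for half resolution) and keeping source images in system memory with `--data_device cpu`. Both options appear in the reference repo's documentation, but verify against your version, and note that densification still grows memory use as training progresses, which is why 7k runs can succeed where 30k runs fail.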
@gaussiansplatsss (5 months ago)
Which 3D Gaussian splatting implementation did you use?
@Because_Reasons (1 year ago)
I can't seem to rotate in FPS view, ever. My mouse does not respond, and in trackball mode the rotations are odd.
@brianbell3028 (1 year ago)
I, J, K, L, U, and O do rotations. It's really weird. (Also, in case you didn't know, Q and E move the camera too, in addition to WASD.)
@sgproductions6336 (1 year ago)
Can you open that in Unreal Engine?
@thenerfguru (1 year ago)
I’ve seen someone share a proof of concept but no code.
@meateaw (1 year ago)
Watching you grab the top of the window and resize the movement handle off the top *every time* was kind of frustrating :)
@thenerfguru (1 year ago)
Yea. Now I just launch everything in fullscreen.
@morglod (1 year ago)
Bruh, why do you shrink the viewport to 10x10 pixels? It's impossible to see the difference on KZbin.
@adriandmontero5780 (1 year ago)
Hi buddy, do you know if it is possible to export the project or the model to Unreal Engine? Thanks for your tutorials and for sharing your knowledge.
@thenerfguru (1 year ago)
I've seen proofs of concept, but no official public projects. My next video will be about this in Unity.
@jorbedo (7 months ago)
Is it possible to run it on Linux, or in the cloud, transferring photos/video to an H100 80 GB GPU?
@thenerfguru (7 months ago)
Today the best way to train Gaussian splats is with Nerfstudio. They include instructions for Linux setup: docs.nerf.studio/quickstart/installation.html
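For anyone following that route, Nerfstudio's splat method is invoked like its NeRF methods; a hedged example (command names per Nerfstudio's documentation at the time of writing, so verify against your installed version): `ns-process-data images --data ./photos --output-dir ./processed`, then `ns-train splatfacto --data ./processed`.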
@pixxelpusher (1 year ago)
How would you view these in VR? Does the viewer allow that?
@jamesriley5057 (1 year ago)
I'm new here, so I'm having trouble with context. You're taking drone footage and creating a textured mesh using your own code? Can we import the mesh into Blender yet?
@trickster721 (11 months ago)
It's not a textured mesh, it's a new method for rendering photogrammetry point clouds directly, like one massive 3D texture.
@jamesriley5057 (11 months ago)
@trickster721 OK, I'm interested. A 3D texture is the equivalent of a UV-unwrapped 3D model. When this tech hits the 3D modeling world, where I live, it's going to be huge.
@trickster721 (11 months ago)
@jamesriley5057 Not a UV-mapped texture; an actual 3D image, like a JPEG with a third dimension. Similar to voxels.
@gridvid (1 year ago)
Is it possible to model, texture, light, and then render using this tech?
@thenerfguru (1 year ago)
No, not with this project. What you are looking at is Gaussian splats, not your typical triangle-based geometry. The splats come with their own baked-in textures. Perhaps with some future development this would be possible, especially lighting.
@gridvid (1 year ago)
@thenerfguru But you could build photorealistic worlds using Blender and use those CGI-rendered images to generate real-time scenes with this tech... or not?
@thenerfguru (1 year ago)
@gridvid I think it is headed that way. Both NeRF and 3D Gaussian splatting show promise for this.
@wasdfg662 (1 year ago)
Is it possible to export Gaussian splats as geometry, like OBJ or FBX files?
@thenerfguru (1 year ago)
Not yet. Soon I bet. It’s technically possible.
@BenEncounters (1 year ago)
Can this be hosted in WebGL?
@cannotwest (1 year ago)
Does 3D Gaussian splatting support object movement/deformations?
@trickster721 (11 months ago)
It's basically just a method for creating a 3D photograph from many photographs, so you could conceivably make a 3D video instead using many crowdsourced video angles of a sporting event, for example. It would take an entire crypto farm of GPUs to run, but it's possible.
@jordivallverdu2568 (1 year ago)
Can we extract a mesh or a point cloud out of this?
@thenerfguru (1 year ago)
Possible. Not with this project though. I suggest checking out this project: leonidk.com/fmb-plus/
@MistereXMachina (1 year ago)
I'm new to this and have a 1080 Ti... is it too weak to do this?
@thenerfguru (1 year ago)
Yes, I don't think the viewer will work.
@mrksdsgn (1 year ago)
How big is the output file for each?
@thenerfguru (1 year ago)
Easily 1 GB or more. Sometimes a bit smaller.
@Instant_Nerf (1 year ago)
Why do people not look great at all, while stationary objects look amazing?
@thenerfguru (1 year ago)
People don't stand still nearly as well as you think.
@Instant_Nerf (1 year ago)
@thenerfguru Oh, believe you me... I tried it on myself. It didn't work well.
@Danuxsy (1 year ago)
wait what? but I just took a bunch of photos of myself naked so I could have sex with my digital younger self in the future and you're telling me that ain't going to work? what a bummer!! 😡
@samhodge847 (1 year ago)
You know, PSNR is what you should look at, rather than an arbitrary glance.
@thenerfguru (1 year ago)
In my opinion, yes and no. PSNR is a quantitative approach that in itself is not perfect. Plus, this is to help people decide if the difference is worth the extra training. If I told almost anyone the PSNR is 25 for 7k iterations and 29 for 30k but provided no visuals, they couldn't tell you what the difference would look like.
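For readers wondering what the metric actually measures: PSNR is just a log-scaled mean squared error between a rendered frame and the held-out photo. A minimal sketch (assumes both images are float arrays scaled to [0, 1]):

```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((rendered - reference) ** 2)
    return 10.0 * np.log10(1.0 / mse)  # peak value is 1.0 for [0, 1] images
```

Because the scale is logarithmic, the jump from 25 dB to 29 dB roughly halves the pixel error, even if the two renders look similar at a glance.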
@RektemRectums (1 year ago)
It's difficult to listen to these videos when the guy uptalks like his self-esteem is so low his crippling depression kicks in as soon as he's off camera.