Unlocking the Mystery of Sparse Point Clouds in Gaussian Splatting

  5,221 views

Olli Huttunen

1 day ago

Comments: 45
@Embuer · 1 month ago
I have been following your channel since the very beginning, but I have never commented before. Your videos are always very well made and informative, and I am glad to have found someone who shares my interest in Gaussian splatting models. I create models with my drone everywhere I go 😂.
@aivaraslig · 1 month ago
Have to agree - great content 👌
@360_SA · 1 month ago
Wow, this is incredible as a short film or even a professional course on Gaussian Splatting! I’m also delighted by your voice, which adds so much enjoyment to your video. This could easily be turned into a documentary on how to get the best results from Gaussian Splatting.
@rafall1118 · 1 month ago
I'm more of a photogrammetry guy than GS, but the issue you had here was that the SfM algorithm used in Postshot just isn't as good as the ones in commercial software: it doesn't align images as well, and this results in holes absent of data. I use Metashape for aligning the photo sets I later use in GS. From my tests I've found it to have more "workroom" in situations like yours compared to Reality Capture, and it will mostly align components without the 60% overlap rule. Nonetheless, I'd recommend having this overlap between different camera/device image sets, at least at the start and end of a set of photos.

What also helps quality is taking photos instead of video; you could probably set your cameras to fire at 0.5 s intervals. From what I know, 360 cameras can output two fisheye "circles" as raw images, one per lens. Those can be used at the alignment step without correcting their deformations, as most photogrammetry software will do that itself, and doing it beforehand just increases errors. Unfortunately, photogrammetry doesn't really like combining images taken with different parameters, but if you have the correct EXIF data it should nonetheless get things done. Raw images have another plus: they pack in more data than frames extracted from a video. In places with hard shadows, for example, you can recover quite a lot of hidden detail, whereas in a video frame you'll just get featureless gray mush. Better images in the end result in better reconstructions, limiting the number of outlier points and empty patches in the point clouds.

As for the number of points, I can say there's a visible uplift in quality at the beginning of training, although your analogy of the mesh of points holding the splats "in" (limiting floaters) was quite good. That's why we want consistency in point coverage more than density itself, which comes back again to the quality of the dataset. Hope my random thoughts helped a bit :>
@mateuszm9457 · 1 month ago
I will definitely test what I have learned. Great video, thank you.
@plan8214 · 1 month ago
Hi, thanks for that! Your videos are always so informative and never shy away from detailing errors, which is a great source of learning. I find that the point cloud Reality Capture produces is quite noisy (significantly noisier than it displays in its own viewport, by the way), so it needs to be cleaned in the open-source CloudCompare. Specifically, I use the reduce points (subsampling) option and select the "octree" method. This has the effect of averaging out the sparse cloud so that there are fewer heavy or light areas of coverage, which helps Postshot produce a more even splat. In my basic experiments, it appears that quite a light point cloud is actually all that is needed to hold the splats together quite well. Also, use CloudCompare's noise option to clean out any noise, which greatly reduces floaters. Thanks again!
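The idea behind the subsampling described above can be sketched roughly: bucket points into grid cells and keep one representative per cell, so dense areas are thinned while sparse areas are left alone. This is a simplified voxel-grid stand-in for illustration only, not CloudCompare's actual octree implementation:

```python
import numpy as np

def voxel_subsample(points, cell_size):
    """Keep one representative point (the centroid) per grid cell.
    Evens out coverage between dense and sparse regions."""
    cells = {}
    for p in points:
        key = tuple((p // cell_size).astype(int))  # which cell this point falls in
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(c, axis=0) for c in cells.values()])

# A dense blob next to a few scattered points: the blob collapses to a
# handful of cell centroids, the scattered points mostly survive as-is.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.05, (500, 3)),   # dense cluster
                 rng.uniform(5, 10, (20, 3))])    # sparse area
thinned = voxel_subsample(pts, cell_size=0.5)
print(len(pts), "->", len(thinned))
```

Real octree subsampling subdivides adaptively rather than using one fixed cell size, but the leveling effect on point density is the same.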
@OlliHuttunen78 · 1 month ago
Interesting! That is good info. I need to take a closer look at that octree method. Thanks!
@DavePorkins · 1 month ago
Thank you for doing so much testing and showing us your results! ❤ I'm running on a mobile 3050, so every trial takes too long to risk the errors that come with experimentation :S
@flablo · 1 month ago
Great! Keep up the good work
@robmulally · 18 days ago
So if it has issues, what I've been experimenting with is exporting the video to images via, say, Media Encoder, then manually checking the problem areas for blurry photos. When exporting I won't export every frame, maybe every 10th, but if an area doesn't look good I might export more frames there. I delete anything that's flared and generally look for anything that could ruin the generation. Then I drag the images into a folder, use that in Postshot, and rather than Best I select All. I might also edit the images, adding sharpening or boosting shadows in bulk in Lightroom, but mainly I just get rid of anything that isn't a clean, sharp shot. This usually works wonders; less is more in cloud formation, and this method lets me add or remove images from problem areas. You don't have to be meticulous. But yeah, I had similar results in Reality Capture and use it when I need to.
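The "keep every Nth frame, drop the blurry ones" workflow above can be sketched with a common sharpness metric, the variance of the Laplacian. This is a hypothetical illustration: the frame decoding itself would be done with ffmpeg or similar, and the relative threshold is an assumed heuristic, not anything Postshot or Media Encoder does:

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of a 3x3 Laplacian response.
    Sharp frames score high; motion blur and defocus score low."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def pick_sharp_frames(frames, keep_every=10, threshold=None):
    """Keep every Nth frame, then drop any below a sharpness threshold.
    `frames` is a list of 2D grayscale arrays (already decoded from video)."""
    candidates = frames[::keep_every]
    scores = [laplacian_variance(f) for f in candidates]
    if threshold is None:
        threshold = 0.5 * np.median(scores)  # relative cutoff, tune per dataset
    return [f for f, s in zip(candidates, scores) if s >= threshold]

# Synthetic check: white noise acts as a "sharp" frame, a running 2D mean
# of it acts as a "blurry" one.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
blurry = np.cumsum(np.cumsum(sharp, 0), 1) / np.outer(np.arange(1, 65),
                                                      np.arange(1, 65))
print(laplacian_variance(sharp), laplacian_variance(blurry))
```

The manual pass for flared or otherwise ruined shots still matters; a blur score only catches one failure mode.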
@orxion · 1 month ago
It's a great idea, and thanks for the "manual" on how to combine them :) Reality Capture is pretty good; it's just a shame it doesn't support Gaussian splats... but who knows :)
@2.718_ · 1 month ago
Here's a summary of the key points from the video with timestamps:
0:08 - Introduction to 3D scanning and Gaussian splatting models, emphasizing the importance of understanding the role of the sparse point cloud in the process.
0:50 - Introduction of the location: a ruined church, chosen for 3D scanning using both ground-based and aerial methods.
1:39 - Ground scanning process described, including use of a scanning rod for different heights and interior scanning.
2:03 - Drone scanning process described, with challenges faced due to weather and lighting conditions affecting the 3D model's accuracy.
4:02 - Start of the 3D model creation process using the software Postshot, detailing the steps and default settings used.
5:12 - Explanation of the image processing and camera tracking steps in Postshot, highlighting the time-consuming nature of these tasks.
6:01 - Discussion of the Gaussian splatting training and the importance of the sparse point cloud in determining the model's accuracy, especially in areas with differing light conditions.
7:22 - Analysis of the model's shortcomings, particularly in areas with less dense point clouds, and the decision to improve the model by increasing iteration steps.
9:04 - Explanation of the sparse point cloud's role as a "gauge" for placing splats in the model and how its density affects model accuracy.
10:00 - Introduction of Reality Capture as a way to improve the sparse point cloud density, despite its steep learning curve.
11:01 - Challenges and process of merging different scan components within Reality Capture to create a more accurate point cloud.
14:02 - Reiteration of the faster progress in Gaussian splatting training when using the improved sparse point cloud from Reality Capture.
16:15 - Conclusion comparing the time and effort between the COLMAP and Reality Capture methods, noting that Reality Capture yields a better result despite being more labor-intensive.
17:03 - Mention of ongoing research and new methods being developed to improve point cloud production, with references to additional resources.
17:45 - Closing remarks, encouraging viewers to like, subscribe, and stay tuned for future experiments.
@wearefromserbia9714 · 1 month ago
Awesome video! I wish Postshot supported AMD GPUs.
@shawnjo8917 · 1 month ago
I'm wondering if the camera attached to the drone in this video is the same type of camera used on the ground. Additionally, I'm curious if Postshot can accurately process images from different cameras that have varying fields of view and resolutions.
@OlliHuttunen78 · 1 month ago
All the cameras in this project were different, and the FOV was of course different in each of them. A total of 4 cameras were used (three on the scanner stick, plus the drone), and one purpose of this video was to demonstrate that Postshot is able to handle footage from different cameras just as Reality Capture can. All source images were resized to the same 1440x1080 resolution, which I have found to be a suitable size for Gaussian splatting training.
@shawnjo8917 · 1 month ago
@@OlliHuttunen78 So, in Postshot, is it possible to align images with different FOVs (Field of View) in a single pass, similar to RealityCapture? When I tested it, I found that only image groups with the same FOV produced results.
@OlliHuttunen78 · 1 month ago
@@shawnjo8917 Yes! Now it is possible. This support for different FOVs has been there since, I believe, version 0.3.294.
@danialsoozani · 1 month ago
I use Metashape and haven't tried Reality Capture. Have you compared them? The depth map calculation in Metashape is very nice (instead of creating dense point clouds).
@jhonsonmartinez7473 · 1 month ago
THX
@ezearo · 1 month ago
What flavour are those pink tictacs?
@buroachenbach703 · 1 month ago
Great video, you put a lot of effort into the storytelling and the graphics this time. I'm really eager to try this myself now, since I already have quite a few models in Reality Capture and I want to see how they turn out in Postshot. Great work once again. Kai
@cafier · 1 month ago
Thank you Olli! We are learning so much from you! Question: Have you figured out how to export the camera position CSV from METASHAPE?
@salehbaker9221 · 1 month ago
Sometimes the RC algorithm is a little restrictive when it comes to picture alignment and will break the scan into different components. I usually switch to Agisoft Metashape before I go deep into RC's ground control points. Agisoft Metashape can get better alignment results in large-area scans; if not, I stick with RC for its processing speed.
@AzadBalabanian · 1 month ago
Nicely done. Some feedback on the capturing and RC steps. Setting control points (CPs) sucks, and they're sort of a last resort. For the images to align automatically, I highly recommend, when taking off with the drone, starting the capture at eye/head height, covering the same perspectives as your 360 cameras. In fact, you should do this at one or two more locations in the scan, as it will ensure accuracy. Second, you can get an even MORE dense point cloud by doing a mesh reconstruction in RC after aligning the cameras and exporting that as a PLY. That way, you'll have a ton of points in the areas where your alignment point cloud had sparse coverage. RC is very finicky and has a huge learning curve, but for me as a photogrammetry professional, having my scans and splats aligned, scaled, and in the same coordinate space is really useful for combining the two. Keep at it!
@OlliHuttunen78 · 1 month ago
Thanks. Good tips. The dense point cloud is interesting. I haven't been able to export a dense point cloud in a format that Postshot would understand, and I'm not sure whether a denser point cloud would solve all the accuracy issues in Gaussian splatting after all, because the Gaussian algorithm is primarily designed to work with sparse point clouds. Dense point clouds can be huge in file size, and that might choke the training process. But it definitely would be interesting to try and see what actually happens. The ideal situation would be a way to paint and mark the parts of the model where I would like more accuracy and more points. I need to practice using Reality Capture more.
@AzadBalabanian · 1 month ago
@@OlliHuttunen78 I was able to export meshes (typically simplified to 1-2 million polys) as PLYs and successfully use them with the default 3DGS trainer; I haven't yet tried it with Postshot and will need to look into this. The nice part was that the points were way more accurate than the alignment feature point clouds, so the Gaussian splat ended up with far fewer floaters.
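For reference, the kind of PLY file a 3DGS trainer typically accepts as an initial point cloud (positions plus RGB colors) is simple enough to write by hand. A minimal ASCII sketch, not Reality Capture's exporter, and real exports often carry normals as well:

```python
def write_ascii_ply(path, points, colors):
    """Write an ASCII PLY point cloud with x, y, z, red, green, blue.
    points: iterable of (x, y, z) floats; colors: iterable of (r, g, b) 0-255 ints."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# Two-point example cloud
write_ascii_ply("cloud.ply",
                [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)],
                [(255, 0, 0), (0, 255, 0)])
```

If a tool rejects an exported cloud, comparing its header against a cloud the tool does accept (binary vs. ASCII, extra properties) is often the quickest diagnosis.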
@stephaneagullo3d · 14 days ago
Hi Olli! One trick used in photogrammetry is to capture the images in grey weather. Areas that are too dark are never easy to process, so you'll have fewer shading problems, and it's very practical for relighting! You can lighten the shadows in your photos before RC and then put the originals back in to create the Gaussians, but it's better to shoot at the same time of day, over several days with the same weather.

I find that computing directly in Postshot produces slightly fewer floating splats, but those can be cleaned up. It's still an interesting gateway, because the alignment will be more precise, using your drone's GPS, and it will have a better scale. Be careful, though, to increase the number of points detected in each photo before import, and don't hesitate to redo the alignment several times, as it improves each time! A few targets on the terrain are useful for control points. XGRID scanners are also very interesting, with lidar interpreted as Gaussians! Glad to see all these evolutions! All the best to you! Thanks a lot!
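The shadow-lifting step mentioned above can be illustrated with a simple gamma curve that brightens dark regions while leaving highlights mostly untouched. This is a hypothetical sketch of the idea, not what Lightroom or RC actually applies:

```python
import numpy as np

def lift_shadows(img, strength=0.5):
    """Brighten dark pixels with a gamma curve.
    img: float array in [0, 1]; strength 0..1 maps to gamma 1.0..0.5.
    Because x**g (g < 1) gains most near 0, shadows rise more than highlights."""
    gamma = 1.0 - strength * 0.5
    return np.clip(img ** gamma, 0.0, 1.0)

img = np.array([[0.1, 0.5, 0.9]])      # shadow, midtone, highlight
print(lift_shadows(img, strength=0.5))
```

The point of the trick is feature detection: SfM finds more matchable texture in lifted shadows, while the untouched originals keep natural lighting for the splat training.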
@OlliHuttunen78 · 14 days ago
Great tips. Yes. Those Lixel scanners from XGRID would be very interesting to test on this kind of large area scanning.
@hollisatlarge · 1 month ago
Thanks Olli. Another great video. I started my 3D modeling using Reality Capture, and no, merging components isn't fun, but it's doable. (Filming the transition from outside to inside may help reduce the number of components.) I'm also wondering if using your drone to capture the three levels on the outside would help the quality. That said, RC was the only application in which I could get multiple cameras to integrate together. I'm so appreciative of your efforts in understanding GS processing and thinking outside the box. Knowing that RC does a slightly better job of creating a sparse cloud will definitely have me changing my procedures a bit. Thanks!!! By the way, I'm finding that no matter how well images are extracted from videos, they never give me better results than original images from a camera. Images from videos not captured with the proper shutter speed and frame rate can also make things worse. In the end, I got used to taking all my images with the drone's camera, even if I just walk around holding the drone to take pictures. The nice thing about walking with the drone is that it has a leveling gimbal. Cheers!
@lucho3612 · 1 month ago
love your channel
@pedrogorilla483 · 17 days ago
Were the 1000 pictures used in Reality Capture the ones that PostShot judged to be important? Maybe that’s why RC struggled.
@alexmoshkin7977 · 1 month ago
Had 100% the same experience learning GS 😂 An interesting point, Olli: I compared RC with Metashape under the same complicated conditions (sunset + drone), and Metashape aligned the components on the first attempt and without adding control points.
@imrsvhk · 23 days ago
Great vid! I've been super familiar with photogrammetry over the years and am just about to dip my toe into GS. Your vid was a great first watch. Keep up the good work!
@johnw65uk · 1 month ago
Well-put-together video, thanks. One thing you should try is a comparison of capture types. I've tried video vs. photo a few times: better quality with photos, versus the time it takes to capture them. It depends on what you need from the end results, really. If I need accurate dimensions I'll always go with photos for accuracy, but sometimes it's surprising that you can get good results from just a few photos from different angles for a rough model. Worth trying out on a simple object like a cardboard box on a bright overcast day. I think it would be an interesting experiment.
@codeAndmuscle · 1 month ago
Nice video. I agree that sparse point clouds do give the Gaussians a better initial position in places that are hard to view from a variety of angles (like grass on the ground).
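The initialization role of the sparse cloud can be made concrete: to my understanding, the reference 3DGS implementation seeds one Gaussian per point and sizes it from the mean squared distance to its nearest neighbors, so denser clouds start with smaller, tighter splats. A brute-force NumPy sketch of that heuristic (the real code uses a CUDA k-NN and works in log-scale space):

```python
import numpy as np

def init_scales(points, k=3):
    """Per-point initial scale: sqrt of the mean squared distance to the
    k nearest neighbors, clamped to a small minimum. Dense neighborhoods
    yield small initial Gaussians; sparse ones yield large ones."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)              # ignore distance to self
    knn = np.sort(d2, axis=1)[:, :k]          # k smallest squared distances
    return np.sqrt(np.maximum(knn.mean(axis=1), 1e-7))

rng = np.random.default_rng(2)
dense = rng.random((50, 3)) * 0.1    # tightly packed points
sparse = rng.random((50, 3)) * 10.0  # spread-out points
print(init_scales(dense).mean(), init_scales(sparse).mean())
```

This is why even coverage matters more than raw density: a region with no seed points has to grow splats from far-away, oversized Gaussians, which is where floaters and mush tend to appear.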
@nealmenhinick · 1 month ago
your videos are great
@interactingarts · 1 month ago
Fantastic video!
@deniaq1843 · 1 month ago
First one! :)
@boskobuha8523 · 1 month ago
Bravo!
@3d360grad · 1 month ago
Hello Olli, thank you very much for your research and detailed description of the project!
@Lucky-ui7dh · 1 month ago
You are amazing, bro. Now I gotta try Reality Capture!