Fantastic as always! Minor note on your remarks about dense point clouds: you state the limit is around 1 GB. Sounds about right, but one can be more precise: the limit is the same as the upper limit of the max splat count in Postshot. With my configuration I can train models up to 9 million splats, and as long as I make sure the dense cloud I start off with is no bigger than 9M points, it works. You probably knew this, but it might help others.
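A quick way to enforce that cap before import is to randomly subsample the dense cloud. A minimal pure-Python sketch, assuming the 9M figure from this comment (your own limit depends on your hardware and Postshot settings):

```python
import random

MAX_POINTS = 9_000_000  # splat cap reported above; machine-dependent

def subsample(points, limit=MAX_POINTS):
    """Randomly keep at most `limit` points from a list of (x, y, z) tuples."""
    if len(points) <= limit:
        return list(points)
    return random.sample(points, limit)

# Example with a small cloud and a small cap
cloud = [(float(i), 0.0, 0.0) for i in range(1000)]
trimmed = subsample(cloud, limit=400)
print(len(trimmed))  # 400
```

Random subsampling keeps the spatial distribution of the original cloud, which matters more to the training than any particular point, per the observations in the video.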
@DavidHeath-u8m · 1 month ago
Olli, love watching your videos, well produced and focused on many of the questions folks are asking about 3DGS. XGRIDS is onto something with the hybrid combination of point cloud and 3DGS data. Exciting times we are living in!
@pedroantonio5031 · 1 month ago
Your content is so good and well produced!
@erickgeisler · 1 month ago
Great video Olli. I just got the XGRIDS scanner and the output is pretty great. The capture time is really fast. I think your research is spot-on. Sparse points are the chicken wire everything is built on. My guess is the more even the distribution of points along the surface, the better the result. Time for some testing...
@Grigoriy360 · 1 month ago
Thank you, Olli. As always, a lot of new information to think about.
@remedytee · 1 month ago
Interesting indeed, Olli. Thx!
@bottledwaterprod · 1 month ago
Camera tracking for average ppl is possible now though! It's still new so I forget the company making it, but through an app and connection interface, you can use an iPhone's cameras, sensors, gps, and lidar to become a position tracker for any other camera and lens you may own. It was demoed at NAB with the goal of making virtual production in Unreal 5 more accessible to enthusiasts and indie filmmakers. I bet that's the missing piece of this puzzle. Ever since I got into photogrammetry, I wondered why I couldn't track my camera's position when it seems all the tech we need to measure and make use of such data is right there.
@cem_kaya · 1 month ago
Do you remember the name?
@GoceMilanoski · 1 month ago
@@cem_kaya I believe the app is Jetset Cine by Lightcraft Pro
@AndriiShramko · 1 month ago
Thank you.
@randomthing999 · 1 month ago
Cool video, very instructive, thank you! Though, I don't really see that as a feature but more as an imprecision. Looks like I could just put in one sparse point and it would compute all the rest. I hoped that inputting a perfect sparse cloud would help it be less noisy, but with the way it works, it doesn't change anything. Do you know if there is a way to decrease the spread of the splats relative to the sparse points? To tighten things up?
@sander-wit · 1 month ago
Hi Olli, I haven't watched the full video yet or tried your setup in Blender for creating an array of images from around an object, but seeing your setup at 2:50 I noticed you're using a UV sphere for the camera positions. I think it would be more efficient to use an icosphere instead, because then the cameras are evenly spaced around the object, whereas a UV sphere puts a lot of cameras close to each other at the poles.
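An icosphere does space cameras more evenly; another option with the same goal is a Fibonacci sphere, which gives near-uniform spacing for any camera count, not just icosphere subdivision counts. A small sketch in plain Python (not the setup from the video; in Blender you would place the cameras at these coordinates yourself):

```python
import math

def fibonacci_sphere(n, radius=1.0):
    """Return n near-evenly distributed points on a sphere (camera positions).

    Avoids the pole clustering of a UV sphere; an icosphere works too.
    """
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    pts = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n           # height from ~1 down to ~-1
        r = math.sqrt(max(0.0, 1.0 - y * y))    # ring radius at that height
        theta = golden * i                       # spiral around the axis
        pts.append((radius * r * math.cos(theta),
                    radius * y,
                    radius * r * math.sin(theta)))
    return pts

cams = fibonacci_sphere(100)
print(len(cams))  # 100
```

Each point lies exactly on the sphere, so aiming every camera at the origin gives a consistent working distance to the object.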
@OlliHuttunen78 · 1 month ago
Watch the video.
@francescoluciani3397 · 1 month ago
Fantastic tutorial!!!!
@edwardverbree9448 · 1 month ago
Great video, again. Thanks. Would be nice to check the created .ply file as 'just' a point cloud and count the number of points. Are there points created to represent the skull? Are the points on the arm the same as the synthetic SfM points?
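Counting the points doesn't require a point cloud tool, since a PLY header declares the vertex count up front. A minimal stdlib sketch that reads only the header (works for ASCII and binary PLY, assuming a well-formed header):

```python
def ply_vertex_count(path):
    """Return the vertex count declared in a PLY file's header."""
    count = 0
    with open(path, "rb") as f:               # binary mode: the body may not be text
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                count = int(line.split()[-1])
            if line == "end_header":          # stop before the (possibly binary) body
                break
    return count

# Demo with a tiny ASCII PLY written on the fly
demo = ("ply\nformat ascii 1.0\n"
        "element vertex 3\n"
        "property float x\nproperty float y\nproperty float z\n"
        "end_header\n"
        "0 0 0\n1 0 0\n0 1 0\n")
with open("demo.ply", "w") as f:
    f.write(demo)
print(ply_vertex_count("demo.ply"))  # 3
```

Running it on the input sparse cloud and on the trained .ply would show whether extra points were created, e.g. for the skull.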
@tiagogiraovideos · 1 month ago
Superb!
@vfxperson4073 · 16 days ago
This is very interesting. I have been trying to replicate your results with my own data but keep running into an issue, "Image Set Point Cloud is empty", when trying to start the training in Postshot. The point cloud is showing within Postshot and looks correct, so I'm not really sure what to try. I created the point cloud of the model following what you showed in the video using geometry nodes, and then exported it as PLY (binary) from Blender.
@epkostaring · 1 month ago
Love your content 🔥
@perkristianfaldet7551 · 22 days ago
I also did some research on this topic a couple of months ago, with photogrammetry (cams and point cloud) exported from RealityCapture. Starting point for the GS, and SCALE, was my conclusion.
@kasali2739 · 1 month ago
Thank you! This is what I was waiting for. What if you have synthetic 3D data such as a house with separate rooms, and you produce the point cloud by scattering points on the geometry: how can the training process know which picture to match to which points, since a structure of rooms joined by a corridor is not as continuous a space as a single room is? I watched an old tutorial on doing that with a Kinect device, and I believe it used some kind of GPS.
@shiccup · 1 month ago
good to see
@davorbokun · 1 month ago
This is great analysis, thank you! How good are Gaussian Splats at representing volumetric data, like layered fog or a volumetric hologram? Something without a well defined surface. Also, a related question: does the algorithm get confused if there are many points in places where there isn't anything in the scene? For example, if you were to include the skull in the point cloud but omit it in the renders?
@AndriiShramko · 1 month ago
I've noticed the following issue and can't figure out how to resolve it. For example, when I create a point cloud and align cameras in RealityCapture, I always get worse GS results than if I align the cameras and create the point cloud in Postshot Colmap. I've tried all possible RC settings and created various point clouds, but I have never achieved a result better than Colmap. The result is always worse. What could be the reason for this? If point clouds aren't that important, then why is the GS result always worse when we import the camera alignment and point cloud from RC? I've done hundreds of different tests and still haven't found a way to create a GS of the same quality as with Colmap. Does RC always produce worse camera alignment than Colmap?
@OlliHuttunen78 · 1 month ago
Hi Andrii! That's interesting. Maybe Postshot creates a better Structure from Motion result because COLMAP is integrated and optimized centrally in its process. It certainly manages to align the cameras better than RC, but that calculation phase is very long and takes significantly more time than in RC. Although in RC you have to spend significantly more time when you have to combine separate components. Someone mentioned that the new version of RealityCapture is supposed to be significantly better; I have to try it. You build very cool scanning rigs, by the way. I've been amazed at how many GoPro cameras you've managed to connect to them, and the results look really cool too. Good job!
@oebleh · 1 month ago
Testing the XGrid K1 right now, results are amazing. By the way, in rendering there is no time advantage from the Lidar point cloud (or our models are too complex :-) )
@davekite5690 · 1 month ago
That was fascinating. I wonder how it might work when recreating virtual exteriors with lots of 3D geometry (e.g. 3D poly trees)?
@josealberto-rj1si · 1 month ago
Hi, great video. One question: can you transfer a scan from Jawset to Blender with an addon? I don't care about the texture, and it doesn't look that good anyway. If possible, can you make a tutorial? Thanks, and sorry for my English, it's from a translator.
@OlliHuttunen78 · 1 month ago
No, not from Postshot. But if you want to convert 3DGS to a mesh you should check out the new features in Kiri Engine. They have that option in their service. Check this video: kzbin.info/www/bejne/f5m1o5J7n9OUjbMsi=DJU4ApbOplSpvqUE
@josealberto-rj1si · 1 month ago
@@OlliHuttunen78 Thanks, it worked for me
@jorcher · 1 month ago
nice!
@mousatat7392 · 1 month ago
The reason the skull appeared even though it does not exist in the point cloud is that the images contain it, so the algorithm shifted some points from the hand to the position of the skull to reconstruct it. This is not very efficient and will reduce the quality of the skull and the hand a little, but it still works well enough.
@r.m8146 · 1 month ago
It'd be awesome if you could review XGRIDS.
@taavetmalkov3295 · 1 month ago
Isn't the point cloud a frame that the splats land on? Hmm.
@mousatat7392 · 1 month ago
The problem in SfM is not the sparsity of the point cloud, but the inability to localize images with low texture and detail. Nowadays VGGSfM solves this issue, but it still needs an 80 GB GPU to run. 😢
@edh615 · 1 month ago
What about ACE0?
@realhamza2001 · 1 month ago
I know this is unrelated but give the Quran a read, also have a good day :)