I think the problem with 360 video is that you're walking in a low-light environment (an underground car park), so there are bound to be issues with sensor noise, motion blur and video compression, all influencing the final output. I would personally get a decent DSLR, prime lenses and a monopod with a remote shutter release and just work my way round. It would take hours rather than minutes, but the results would be much cleaner. Thanks for sharing
@misiurkaful3 ай бұрын
Yes, that's the most reasonable thing to do, but as you say, it'll take hours rather than minutes. I love the 360 camera approach in that it allows you to quickly capture a scene and have it recreated later in VR. Kind of turbo holiday photos :) I know there are still limitations, but I'm experimenting, and will share if I get anything nice. Olli, your videos are great, thank you.
@pixxelpusher8 ай бұрын
This is a great summary of where Gaussian splatting is at. How about instead of shooting video you set the 360 camera to take a timelapse of, say, 1 photo every second? That way you'd end up with far fewer images; 10 minutes would be 600 images.
@pedrogorilla4836 ай бұрын
You can also speed up the video or extract key frames.
@pixxelpusher6 ай бұрын
@@pedrogorilla483 True, but it's a bit more work to do. Timelapse photos would also be much higher resolution.
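For anyone who wants to try the key-frame route, here is a minimal sketch using OpenCV (this is not the exact workflow from the video); the file names and the one-frame-per-second rate are only illustrative:

    # Minimal sketch: keep roughly one frame per second of a walkthrough video,
    # so a 10-minute clip yields ~600 stills instead of ~18,000 raw frames.
    # Assumes OpenCV (pip install opencv-python); paths are placeholders.
    import cv2
    from pathlib import Path

    video_path = "garage_walkthrough.mp4"   # hypothetical input clip
    out_dir = Path("frames")
    out_dir.mkdir(exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30   # fall back if metadata is missing
    step = max(1, round(fps))               # keep one frame per second of footage

    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(str(out_dir / f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    print(f"saved {saved} frames")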
@hanskarlsson37788 ай бұрын
Excellent Olli, I tremendously appreciate your presentation style and easy-to-understand narration. We here in Japan are looking hard at incorporating Gaussian Splatting into our cultural heritage work. Really useful; I hope you get a million followers - at least :)
@deniaq18437 ай бұрын
Hello Sir :) You say you are looking forward to integrating it more into your cultural heritage work. That sounds so interesting, but I can't quite imagine what you mean exactly. Would you mind sharing some more information about your thoughts? Greetings from Germany :)
@hanskarlsson37787 ай бұрын
@@deniaq1843 Hello, well, many objects and buildings we recreate in VR exist somewhere outdoors. Right now we are proposing to recreate a wooden art sculpture that stood outside. We will need to model it, as it doesn't exist anymore, but the surrounding area with grass, trees and bushes, little hills etc. would be a nice project for Gaussian splatting, as it works well with vegetation.
@deniaq18435 ай бұрын
Can you share something of your work with me?
@KalleKarppinen8 ай бұрын
Great tutorial once again! It's always nice to hear your input on these matters and follow your experiments.
@taavetmalkov3295Ай бұрын
I am truly amazed by the Gaussian method...
@itakka-v9pАй бұрын
Thank you, Olli! I always love watching your interesting videos.
@AndrewMaximov2 ай бұрын
What a wonderful video Olli! Thank you so much for the info. Please keep it going 🤍
@violentpixelation54868 ай бұрын
Thank you so much for doing this RnD work for all of us interested in this technology! 👍💯⚡
@nathanjgl8 ай бұрын
Awesome work Olli! Thank you for sharing these insights!
@DroneSlingers8 ай бұрын
Hey Olli, I've got a question about how Postshot trains the models during the last step. Did you notice that the more ksteps you train it to, the smaller the file size becomes? The only reason I can think of for that to happen is that, as it refines the model, it's possibly removing the rogue splats and trimming down existing ones. I've done several tests so far; you can save and export the model at any point during the training without interrupting it. So during a 90-kstep run I exported at 30k, 60k and 90k, and each file was noticeably smaller each time while also being clearer. Because I'm using a ShadowPC I unfortunately can't leave things running overnight (after 30 minutes of inactivity Shadow disconnects you), so I was wondering whether, if you had the chance, you could test if there is a limit to how small the file will get depending on the number of ksteps run. The difference between 90k and 300k is a lot, but it would be great to know whether, if a file is too large, I can just keep training it to reduce the size to where I need it, or if at a certain point it begins to increase in size again.
@KalkuehlGaming8 ай бұрын
Thank you Olli for all your updates on Gaussian Splatting. You are my favorite youtuber on this. Could you make a video on how to get an interactive 3D viewer on your own website and import an edited Gaussian splatting file? I'm a little worried about using third-party websites to embed the viewer into someone else's website.
@LennartHinz-z8z12 күн бұрын
Great video as always Olli! Thanks a lot:-))!
@gristlelollygag5 ай бұрын
Do you think some kind of ML/AI could be used for object detection and then extraction from the Gaussian points? If so, one could have a large database of environments like this, with the objects extracted out. The objects and environments could be labelled with tags, and then, with some kind of vector embedding, you could have text-input-based genAI for creating 3D models.
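The extraction half of that idea can already be prototyped without any ML, because a trained splat file is essentially a flat list of points: splats whose centres fall inside a chosen region can simply be copied into a new file. A rough sketch, assuming the plyfile package and the common layout where every per-splat attribute lives on the 'vertex' element; the bounding-box coordinates are made up:

    # Rough sketch: copy all Gaussians whose centres fall inside an axis-aligned
    # box into a separate .ply. Assumes the plyfile package (pip install plyfile);
    # file names and box corners are placeholders.
    import numpy as np
    from plyfile import PlyData, PlyElement

    ply = PlyData.read("scene.ply")        # hypothetical trained splat export
    verts = ply["vertex"].data             # structured array, one row per splat

    xyz = np.stack([verts["x"], verts["y"], verts["z"]], axis=1)
    lo = np.array([-1.0, -0.5, -2.0])      # illustrative box corners
    hi = np.array([ 1.0,  1.5,  0.0])
    mask = np.all((xyz >= lo) & (xyz <= hi), axis=1)

    subset = verts[mask]
    PlyData([PlyElement.describe(subset, "vertex")]).write("extracted_object.ply")
    print(f"kept {mask.sum()} of {len(verts)} splats")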
@ggill13132 ай бұрын
Wow. The accuracy achieved here, especially for architectural feature extraction, looks superlative.
@jorcher8 ай бұрын
Keep going! You are a shining star in the Gaussian Splatting community (if there is such a thing :D)!
@vidayperfumes75148 ай бұрын
Thank you for all of your advice; it's really useful.
@mankit.mp48 ай бұрын
Wonderful insight again, Olli. I learnt an awful lot from your sharing; please keep it coming. Big fan. x
@joelface8 ай бұрын
Great video Olli! I appreciate the website recommendation (radiancefields) and will check that out. I think your latest result with 300K iterations on the parking garage turned out near perfect! I hope to see gaussian splatting used for "spatial video" one day, where a small array of cameras record a scene from multiple angles and compile a frame-by-frame spatial video that can be explored from any angle in virtual reality. In the meantime, creating photo-real static environments is extremely cool.
@shumailrizvi76252 ай бұрын
Love this, honestly - thank you for so much knowledge!
@DanVogt2 ай бұрын
Your thoughts were very useful and inspiring. Thank you
@Aguiraz7 ай бұрын
Great video, this is the same process I would have followed, so it's refreshing and helpful to see it already done. Thanks and keep them coming dude!
@rikvandenreijen8 ай бұрын
Amazing content. Thanks for helping us stay up to date with actual practical implementation of Gaussian splatting! Keep up the great work!
@hgatmit3 ай бұрын
Great insight on the GS capturing process! This is one of the best!
@mn041473 ай бұрын
With just this single video I learned so much, thank you. I hope I can make a great scene like yours.
@360_SA8 ай бұрын
This is an amazing video. I love taking videos with the 360 RS 1. I would like to know how you came up with 18 videos: are they from one video, or 3 videos, or more? And how did you select the area for the square videos? Thank you; I like watching your videos because you give us the best explanation in the shortest time, so we don't get lost.
@excurze73776 ай бұрын
I was thinking the same, to be honest. I tried capturing a garage too, but it always ended up messy; the camera tracking was off every time. Maybe I really should switch from my phone to a 360... I'm not sure what I'm doing wrong.
@joannemagi6 ай бұрын
Thank you for the video! It's great to see so many new discoveries about Gaussian splatting. You help me a lot 🙇♀🙇♀
@hp6511068 ай бұрын
I love your channel on Gaussian splatting. 👍 I look forward to every video you post.
@breadslinger2718 ай бұрын
I have already made an interactive, game-like tour of two parts of my home using splats and Spline.design, straight from a web browser. I'm currently working on making full property game tours for real estate.
@nitisharora415 ай бұрын
It's very inspiring to see you experiment like this. Thanks for sharing your knowledge. Regards, Nitish
@chrisfaber99267 ай бұрын
This is really good information about shooting and processing. Thank you very much 🙏
@oonaonoff48784 ай бұрын
watching all your vids on gaussians🔥✨
@tribaltheadventurer8 ай бұрын
Thank you so much Olli🙌🏿
@MrCatoblepa7 ай бұрын
What an amazing video! Thanks a lot, you provided a huge amount of very useful insights.
@gaussiansplatsss7 ай бұрын
Postshot with high iterations, or Luma AI with upgrades... Have you compared them on the same video? Which is better, if you ever have? 🧐
@FredPauling8 ай бұрын
I've often wondered if 360 cameras would be suitable for this application. Thanks for sharing useful tips to make it work.
@ns1948 ай бұрын
Hi Olli, great video, very informative! Two questions about your process: why not use the 360 camera in several positions to capture stills instead of video? And, I take it that you exported framed videos from your 360 camera (i.e. not equirectangular 360 videos)?
@OlliHuttunen788 ай бұрын
Thanks. Yes, still images would be a better solution, but they just take more time. Equirectangular images would be nice if they worked with Postshot, but perhaps in the future.
@ns1948 ай бұрын
@@OlliHuttunen78 Indeed, it would be great if the program could parse spatial metadata - certainly a time saver for capturing 3D environments!
@dinoscheidt8 ай бұрын
While it would be mathematically quite straightforward to slice equirectangular images for processing... my question would be: what's the benefit? You would be in the frame (someone needs to hold the thing), making it quite useless 👀
@ns1948 ай бұрын
@@dinoscheidt not necessarily, you can leave the frame entirely if it’s a static shot and the tripod can be masked out fairly easily. But if it’s a moving shot, that’s another story. At that point though, the benefit of a 360 camera compared to a high quality video camera of any kind becomes negligible. You end up framing your coverage in post instead of during production.
@bradleypout18207 ай бұрын
Nerfstudio lets you do this: for equirectangular images, just run ns-process-data in Nerfstudio and drag the files to Postshot. Processing normal images or videos in COLMAP yourself always produces way better NeRFs and splats for me, or you can tweak the settings in Nerfstudio. Postshot's own data processing isn't that great yet. @@OlliHuttunen78
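As noted above, slicing an equirectangular frame into an undistorted pinhole view (roughly what the reframing export does) is mathematically straightforward. A minimal numpy/OpenCV sketch; the field of view, output size and view direction are arbitrary choices, not values from the video:

    # Minimal sketch: cut one undistorted pinhole view out of an equirectangular
    # frame. Assumes numpy + OpenCV; parameters below are illustrative.
    import cv2
    import numpy as np

    def reframe(equirect, yaw_deg, pitch_deg, hfov_deg=90, width=1440, height=1080):
        H, W = equirect.shape[:2]
        f = (width / 2) / np.tan(np.radians(hfov_deg) / 2)  # pinhole focal length

        # One ray per output pixel in camera space (x right, y down, z forward).
        xs, ys = np.meshgrid(np.arange(width) - width / 2,
                             np.arange(height) - height / 2)
        dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

        # Rotate the rays: pitch around x, then yaw around y.
        p, y = np.radians(pitch_deg), np.radians(yaw_deg)
        rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
        ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
        dirs = dirs @ (ry @ rx).T

        # Longitude/latitude of each ray -> pixel position in the panorama.
        lon = np.arctan2(dirs[..., 0], dirs[..., 2])
        lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))
        map_x = ((lon / np.pi + 1) / 2 * (W - 1)).astype(np.float32)
        map_y = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(np.float32)
        return cv2.remap(equirect, map_x, map_y, cv2.INTER_LINEAR)

    pano = cv2.imread("equirect_frame.jpg")                 # hypothetical export
    cv2.imwrite("left_view.jpg", reframe(pano, yaw_deg=-90, pitch_deg=0))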
@juanignaciocaballerogarzon977Ай бұрын
Great work! And super cool video
@tobias.sieben.3608 ай бұрын
Thanks, Olli. Again a great video that I really enjoyed. Keep on going
@rockybalboa80857 ай бұрын
Dear Olli, thank you for your amazing tutorials - I've learned a lot from them! I slightly disagree about the resolution: on bigger-scale drone shots it makes sense and brings so much more detail with 2000K iterations :) 3800px is a bit too much, but 2400-2800 works much better than the standard 1600. I didn't even try 1400px.
@OlliHuttunen787 ай бұрын
Ok. It is good to know. I need to try that.
@rockybalboa80857 ай бұрын
@@OlliHuttunen78 Do you know how to merge a few Gaussian splat .ply files into one bigger one? I tried to import them into Postshot, but it seems that isn't possible with this app :(
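Postshot apparently cannot merge splats itself, but since a splat .ply is just a flat vertex list, two exports that are already in the same coordinate system and scale can be concatenated outside the app. A rough sketch assuming the plyfile package and identical vertex layouts in both files; getting the models aligned and to the same scale first is the hard part, as discussed elsewhere in this thread:

    # Rough sketch: concatenate two splat .ply files that are ALREADY aligned
    # (same coordinate system and scale). Assumes the plyfile package and that
    # both files share exactly the same vertex layout; file names are placeholders.
    import numpy as np
    from plyfile import PlyData, PlyElement

    a = PlyData.read("scene_part_a.ply")["vertex"].data
    b = PlyData.read("scene_part_b.ply")["vertex"].data

    merged = np.concatenate([a, b])      # requires identical dtypes
    PlyData([PlyElement.describe(merged, "vertex")]).write("scene_merged.ply")
    print(f"{len(a)} + {len(b)} -> {len(merged)} splats")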
@robmulally8 ай бұрын
Good info. I've been experimenting as well and have come to similar conclusions: 8K 30 fps on my phone, and not over-shooting, can work better than a really long video. But it's great to know you can chop out parts and use them as multiple clips. Do they have to be from the same camera lens?
@OlliHuttunen788 ай бұрын
It is recommended that scan material be recorded with the same camera, but I haven't tried mixing. It would be cool if I could scan first with a drone and then walk on the ground and scan with another camera.
@robmulally8 ай бұрын
I could not find how to add a second video; it only seems to accept the first. @OlliHuttunen78
@EricpareАй бұрын
Thanks, that's super useful :::))
@buroachenbach7038 ай бұрын
Hi Olli, great insight into Postshot's settings - do you have similar info on the resolution of the images? I keep it low because I'm worried about the memory, but have you tried the difference between 1600, lower, or even higher than that? Does it affect the quality as much as the training steps? Regards, Kai
@OlliHuttunen788 ай бұрын
Yes, the resolution is an interesting thing. Gaussian Splatting training performs better at lower resolutions. If you put in higher-resolution images, like 4K, the training will choke and it takes a huge amount more time to complete the calculation. So if you are doing the training with Postshot, it is good to stay with the default values it offers when you import images into it.
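One way to follow that advice is to shrink the extracted frames before importing them into Postshot. A small sketch assuming Pillow (pip install pillow); the 1600 px cap echoes the default mentioned elsewhere in this thread, and the folder names are illustrative:

    # Small sketch: downscale every frame so its long edge is at most 1600 px
    # before importing into Postshot. Assumes Pillow; folder names are placeholders.
    from pathlib import Path
    from PIL import Image

    src, dst = Path("frames"), Path("frames_1600")
    dst.mkdir(exist_ok=True)

    for path in sorted(src.glob("*.jpg")):
        img = Image.open(path)
        img.thumbnail((1600, 1600), Image.LANCZOS)  # shrink only, keeps aspect ratio
        img.save(dst / path.name, quality=92)
    print("done")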
@DronoTron8 ай бұрын
Great video and thanx for sharing your thoughts
@fireum928 ай бұрын
Love your videos! Thank you so much for your time and effort
@liuksataka7598 ай бұрын
Thanks for the information!
@mishadodger7 ай бұрын
Hi Olli, thank you for your posts. I'm doing a university project with Gaussian splatting, and I'm also trying to make a scan. The problem I'm having is that it's taking very long; what format are you using when making your videos? I've done some 8K videos of a small room with my Samsung S23; the whole process runs OK but gets stuck when it reaches 10k steps, and from that point it's barely moving, with the time estimate jumping from 30 minutes to 20+ hours. I have a 4070 card, so processing power shouldn't be the problem. I was thinking maybe my videos are bad? Should I maybe shoot FHD 60fps? Please advise if you've had this issue before. I changed the number of shots from 400 to 200 and it still moves slowly after that 10k.
@OlliHuttunen787 ай бұрын
You should lower your image resolution radically; 8K images are choking the training process. Use, for example, 1920x1080 resolution images. I made my tests in a 4:3 aspect ratio where the resolution was 1440x1080, and it seems to be quite a good format for Gaussian splatting training.
@mishadodger7 ай бұрын
Thank you for coming back to me; I will also try that.
@DLVRYDRYVR8 ай бұрын
Thank you 👍
@leighemmerson2 ай бұрын
This is super interesting and inspired me to have a go. However, I cannot see the 'natural' setting in the Insta360 software. Does anyone know if it's been removed?
@jesusjaar19373 ай бұрын
Hi, great video!! How did you scale it to fit the model? Is there a way to add real-world measurements, or is it just eyeballing it?
@punio46 ай бұрын
A question about capturing with a smartphone camera: I know you should lock the exposure, ISO and WB, but should you also lock the autofocus? I can't imagine you'd be able to take good photos of an object with the focus fixed.
@OlliHuttunen786 ай бұрын
Yes. Naturally, the focus should be sharp. The more sharp images without depth-of-field blur you can get, the better for good 3D scanning.
@nealmenhinick3 ай бұрын
Hi! A helpful tip I've noticed when doing splats and photogrammetry: do not use an iPhone!!! The default camera app stabilises the footage, warping it in mysterious ways that cause the underlying camera-matching algorithm to perform very poorly. I've had very good results with an ultra-wide lens on a DSLR - it requires fewer images because more feature points are accepted.
@MatDeuhMix8 ай бұрын
Thank you for the video!
@echobass3D7 ай бұрын
Great video, thank you. What's the reason for using video over a series of stills? Surely you could just walk around taking a shot every few degrees of movement? That way you can set the shutter speed to avoid motion blur, fix your depth of field, and avoid taking so many frames in the first place.
@63pixel8 ай бұрын
Thx for this! I've mostly used photogrammetry for my 360 images, but will now head to Postshot and give it a try! I'm curious how the images from the Insta360 Pro 2 will work. It's helpful that the images from each lens are saved separately.
@OlliHuttunen788 ай бұрын
Wow! You've got quite a cool 360 camera. I hope it'll work with Postshot. Remember that images with no fisheye distortion work best.
@jmalmsten8 ай бұрын
Now, these techniques look very promising indeed. :)
@hollisatlarge7 ай бұрын
Hey Olli. If you had to do the car garage project without a 360 camera and just took regular video or images, how would you shoot the videos (or take pictures) that Postshot uses to train? Do you take a bunch of videos circling different areas of the car garage? I've really only captured aerial images and used Postshot for exterior 3D models; I haven't done interiors yet. Cheers!
@anidea80128 ай бұрын
Hey, thank you for your awesome guide to Postshot. I'm a newbie here and I have some doubts. I understand that the original Gaussian splatting project expects unique hyperparameter settings for each individual scene, so how come Postshot is able to give good output without any tuning?
@gerardosanchez60458 ай бұрын
Really amazing video. I'm 3D scanning environments and started using a 360 cam for this, but haven't achieved great results in Luma AI; maybe this is the solution. Cheers
@gerardosanchez60458 ай бұрын
One question: how do you export the video from Insta360 to Postshot - as the 360 type, or reframed?
@jorgeviloria43157 ай бұрын
Amazing vid dude! Thanks for your time and effort on GS research; I love those techniques. Sorry, which 3D software is best for placing the detailed models? Thanks a lot, Olli.
@orbitall3d-capturadarealid7747 ай бұрын
Hi guys. I have a question, maybe I'm completely wrong. I would like to know if it is possible to obtain a visualization of a point cloud obtained by LIDAR, similar to Gaussian Splatting. I've always worked with the process of capturing images in the field with a laser scanner, so I know practically nothing about the technical side of files and programming. I understand that the images obtained in the 3D scanning process are used to colorize the points. So I thought I could skip the training stage with images, and use a point cloud obtained by the scanner (I think it's more accurate and noise-free). I apologize if what I've said is nonsense, or if it already exists. TIA
@sharpvidtube8 ай бұрын
Maybe timelapse mode would be ideal for making these?
@OlliHuttunen788 ай бұрын
Definitely! I thought about it too, but I haven't tested it in practice yet. Timelapse is a good idea!
@christianblinde8 ай бұрын
What kind of system do you have that you can process 1000 images? My system struggles when I go above 300 images. Is there a way to get around the system limitations, like processing parts and merging them?
@OlliHuttunen788 ай бұрын
I have an Nvidia RTX 3070 graphics card with 8 GB of VRAM, the CPU is a Ryzen 7, and 64 GB of RAM. I have also thought about whether the scan could be done in parts. Yes, it is certainly possible. The difficulty arises from the fact that training always creates models at a different scale; it is quite difficult to get them to fit with each other, but it is not impossible.
@ArchitRege3 ай бұрын
What if the iPhone video-based Gaussian splat is being fumbled because it was shot with the default iPhone Camera app, which applies auto white balance, auto exposure and auto brightening to the final footage, causing the glitches? As seen in the video, the light in some places in the parking lot goes from warm to blue. Maybe retry this iPhone shoot with the Final Cut Camera app and manually set the exposure and white balance, or use Remini Pro.
@harshbachhav2573 ай бұрын
great video
@VerumBit8 ай бұрын
Thanks for the mention! ;) Check the links in the description - mine and the Overhead one are not working :)
@OlliHuttunen788 ай бұрын
Hey! Good that you noticed. Now the links are fixed.
@panonesia8 ай бұрын
Can you make a video on how to build a simple game from a Gaussian splatting scan? A basic level is okay...
@VerumBit8 ай бұрын
@@panonesia You just need to add collisions, because Gaussians have none: set a floor plane, then make many cubes, each one with the same dimensions as the objects (cars, walls, columns, roof, etc.). The material of these cubes must be transparent (or set them hidden in game). Finally, put the player and enemies inside the scene and play. This is what I did, nothing more.
@박재호-z6s2 ай бұрын
Why does Postshot crash while processing? 1000 images, the settings are default and unchanged.
@trollenz8 ай бұрын
I recommend subscribing to YOUR channel 👏🏻👌🏻
@Meat-N-Fries8 ай бұрын
Amazing content as always!
@lucbaxter698 ай бұрын
Hello Olli, thank you for all the videos. I'm trying to work with Postshot... can I export the cropped model as a PLY? When I export, the model always has the same size. Greetings, André
@ya3d8 ай бұрын
Excellent Olli !!!
@omardrservant6 күн бұрын
What's your recommended tool/software for generating 3D of actual cars?
@OlliHuttunen786 күн бұрын
Postshot is great software for producing Gaussian Splatting models. For cars, a good and even lighting situation is important. However, I would avoid very bright and wide lights, which often show up in the reflections of the car's metal surfaces, like in some car shops. A normal cloudy day in a parking lot, where you have plenty of space to walk around the car with your camera and photograph it as comprehensively as possible from different heights, may be the best option. You could start with a phone camera, for example; if you have an iPhone Pro, it has an excellent wide-angle lens. Use that and record video at 60 fps - then there won't be too much motion blur.
@paultoensing31268 ай бұрын
Love it!
@NervusOne7 ай бұрын
I've downloaded the Unity plugin... but I just see the point cloud... is it under RGB?
@antons61468 ай бұрын
Thank you so much for explaining and doing all these tests. I had a question: would it be possible to feed in larger-resolution still frames plus the videos from the dark areas to improve quality?
@OlliHuttunen788 ай бұрын
Well, high image resolution is not the answer to creating more accurate Gaussian models. It just chokes the calculation, and the higher the resolution of the images you put in, the longer it takes. Images just need to be sharp, and there should not be a lot of noise or motion blur.
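The sharpness requirement can be partly automated: the variance of the Laplacian is a common quick-and-dirty blur score, and the worst frames can be set aside before training. A sketch assuming OpenCV; the threshold of 60 is arbitrary and needs tuning per scene:

    # Sketch: score each frame for sharpness (variance of the Laplacian) and move
    # the blurriest ones into a side folder before training. Assumes OpenCV;
    # the threshold is a placeholder that must be tuned per scene.
    from pathlib import Path
    import cv2

    frames = Path("frames")
    rejected = frames / "blurry"
    rejected.mkdir(exist_ok=True)

    for path in sorted(frames.glob("*.jpg")):
        gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        if score < 60:                   # lower score = less high-frequency detail
            path.rename(rejected / path.name)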
@gristlelollygag5 ай бұрын
Very cool!
@Grigoriy3608 ай бұрын
Great tutorial; thanks to your video I started experimenting with Gaussian Splatting. I do a lot of photogrammetry - what is the best way to combine it with GS?
@OlliHuttunen788 ай бұрын
Scanning for a Gaussian Splatting model is a very similar process to photogrammetry. The same rules apply here, except you don't have to avoid transparent and reflective surfaces. But GS works best when you scan environments; smaller objects are challenging to capture.
@Grigoriy3608 ай бұрын
@@OlliHuttunen78 Thank you!
@Edensproject3 ай бұрын
Hey, just to see if I understood correctly: I just bought a 360 X4 to scan better splats and everything. So your method for scanning is finding the relevant timeframe in the video and exporting it in a 4:3 ratio with no distortion? And you try to do that for both lenses of the camera, and then you convert the video to images? In the case of working on a Mac, do you think Polycam will be better than Luma?
@OlliHuttunen783 ай бұрын
Yes. You need to dismantle and crop the 360 footage into smaller videos; a 4:3 ratio is what I have found to work best. And lens distortion needs to be corrected - fisheye distortion won't work. Postshot can take videos directly, and so can Luma and Polycam, although these web services are not as good at handling separate video takes as Postshot is. To get a good result with Luma AI and Polycam, the video needs to be one continuous shot.
@Edensproject3 ай бұрын
@@OlliHuttunen78 Thank you for the comment! And if I use Polycam/Luma with one single long video, should it also be 4:3 without any distortion? What would be the best approach for that?
@studiodevis8 ай бұрын
Kiitos Olli! Fantastic videos about GS. I have a Canon R5 C with a max video resolution of 8K 50p RAW, and a 360 camera - the QooCam 8K. Which of these two cameras do you recommend I use? I read in the comments that Postshot works better with lower-resolution videos (max 4K). Does it depend on the PC hardware? Keep up the good work!
@OlliHuttunen788 ай бұрын
Well, the 360 camera is more effective because it covers a larger view and you get so much in one shot. And resolution is not the important aspect in Gaussian Splatting.
@TheInglucabevilacqua7 ай бұрын
@@OlliHuttunen78 Thank you Olli, I was just wondering about the influence of the source images' pixel count on the final result. It seemed likely to me that - especially if one strikes the best balance between the number of source images and the number of iterations - the pixel count of the source material could become a factor again... should we wait for future improvements in the algorithms and GPU hardware for that?
@alejandrocapra86548 ай бұрын
Make a tutorial on exporting from Gaussian splatting to Unreal and covering the basic questions!
@melankolistaja37928 ай бұрын
How about using Google Street View as source material?
@grafpez8 ай бұрын
fabulous thanks! ;-)
@jafilm34888 ай бұрын
Great video.
@AdrienLatapie8 ай бұрын
This is awesome. Would you recommend buying that 360 camera, or could you achieve the same results with an iPhone? What if you have a really old 360 camera - do the training frames have to be high-res?
@OlliHuttunen788 ай бұрын
Well, yes, the 360 camera is very handy, but you can get the same kind of results with a phone camera too. It is not about resolution; images just need to be sharp. I'm computing these GS models from 1440x1080 images. A wide lens with no fisheye distortion is also useful. You can at least give your old 360 camera a try as well.
@MrGTAmodsgerman3 ай бұрын
Can PostShot make a mesh out of the splats?
@OlliHuttunen783 ай бұрын
No. Postshot is designed for radiance fields and you can produce only NeRF and Gaussian Splatting models with it.
@MrGTAmodsgerman3 ай бұрын
@@OlliHuttunen78 Oh, that's sad. It seems to be the only easy-to-run software for these splats; otherwise I'd normally have to go through Luma AI, where you can get a mesh.
@nav-unger8 ай бұрын
Thanks. You're doing great stuff.....
@Healthy_Toki7 ай бұрын
I'm thinking of an array of 3x 360 cameras mounted on a stick at different heights, all connected to a remote shutter button that you manually activate. This would allow all photos to be taken fairly 'still' between stick movements, so details can resolve with minimal motion blur, as well as speeding up full coverage. Has anyone tried this yet?
@OlliHuttunen787 ай бұрын
I have had a similar idea. Such a 360 photography stick would be reasonably easy to build; it would only take three 360 cameras, and I only have one. Would Insta360 sponsor such a scientific experiment?
@joshdavidson51535 ай бұрын
I tried this last week with a single camera and just went around the room three times at three different heights to gather the same photos. I used the Insta360 X3 with HDR at 5 exposures. Still playing around with processing, but I'll say it isn't going great. I brought a DSLR and it's way cleaner and registered in Reality Capture way faster. Any ideas to modify the experiment? I'd love to figure this workflow out.
@dailele8370dai8 ай бұрын
Why is it that after I register, I can't bring up the startup screen of the software, Postshot?
@hanskarlsson37787 ай бұрын
Olli, I am mystified by your workflow here. In 360s you appear in the shot, but you reframe to 4:3 and output a section of the full 360, as I understand it. Do you output several videos in that format from Insta360 Studio to get all angles (except the parts where you appear yourself)? If so, do you frame them so that you have overlap? I would really appreciate more details about the steps in Insta360 Studio. I also don't know how to get to the menu you show when you talk about getting rid of fisheye distortion.
@hanskarlsson37787 ай бұрын
Sorry, I figured most of it out. You are setting keyframes in Insta360 Studio (press the plus button). I put one at the start, then copied and pasted it to the last frame to get the same values. I then captured several reframed shots from the same 360 video, trying to cover everything with a bit of overlap. I walked the same route back and forth to get frames of everything without me in the shot. Now trying this, hoping it's the right workflow :)
@OlliHuttunen787 ай бұрын
Yes. The 360 camera has the awkward feature that you, as the photographer, will inevitably be included in the pictures if you use it to shoot video on a selfie stick. I simply framed the scan material so that one 4:3 view pointed to the left and one to the right. I didn't render a view pointing forward or backward, because then the stitching seam would be in the front view and I myself would be in the back view. So on each lap I was able to capture two directions at once, and I did three of these laps, each at a different height. On the highest lap I turned the viewing angle slightly downward so the floor could be seen, and on the lowest lap, correspondingly, slightly upward so the details of the roof could be seen. Hope these tips help.
@hanskarlsson37787 ай бұрын
Follow-up: I followed your method, using my little Insta360 EVO. The scene was a path with blooming cherry trees here in Japan. Results were bad, which I blame on the choice of subject. There was no detail in the cherry trees at all; they look like an impressionist painting. I think this kind of subject requires photographs, not video, and a high-grade camera. I also did just one height, which was a mistake. On the other hand, capturing such a large scene with a mirrorless or DSLR camera is a huge challenge, not least because of the height of the trees. The camera is too heavy for a drone (unless it's a really expensive drone). Interesting to note that buildings in the scene look decent, so it seems that your method is suitable for "hard" subjects, like your garage with cars. Vegetation and flowers etc. are tough, I think partly because cutting out a small piece of the 360 video like you did here results in very low resolution. You can notice this in 180 VR videos, which look much sharper than 360 from the same camera (like my EVO), because the same amount of pixels only needs to cover half the area. But you are 100% right, this is a field that needs a ton of experimentation and experience for success!
@boskobuha85237 ай бұрын
I have an issue with the Postshot software or some settings. I have about 200 images and trained Gaussian splatting. Everything goes fine and the result seems to be excellent, but when training finished the result was very bad and messy - useless. That was with 30,000 steps. What happened? Do you have an explanation?
@Connormp4gaming8 ай бұрын
I was wondering how many images I should use for a scan of my street, when the footage has 54,000 frames.
@bradleypout18207 ай бұрын
As many as you want - just make the images smaller if you want to use more. Your limits are your PC and your time, lol. Put in as many as you want; you'll soon find out, if it crashes or lags, that you've put in too much data!
@NervusOne7 ай бұрын
Olli or anyone with the answer 😁. I'm a newbie with Unreal and/or Blender; I come from a filmmaking background. Can you help me understand how to take the PLY file from Jawset Postshot and use it in either Blender or Unreal, but where I can see the colors? Right now I'm only able to see the point cloud and the camera points....
@OlliHuttunen787 ай бұрын
Well, in Unreal you can use Gaussian Splatting PLY files with, for example, the Luma AI plugin, which you can find in the Unreal Marketplace. In Blender it is a little bit difficult; there is one addon that can display Gaussian Splatting PLY files in Blender, but it is quite slow because it uses the Cycles render engine. I recommend watching my other video where I list these tools: kzbin.info/www/bejne/b6G6fKGvrJxngrs
@HosniElmolla7 ай бұрын
great
@Blackphantom2Ай бұрын
I have an RTX 5070, and rendering a 30-second video with 1000 images takes anywhere from 20-30 minutes to 2-3 hours, and past that I never got 😂 EDIT: 6 hours 50 minutes rendered and still at camera tracking! It says 5 minutes left of camera tracking; I hope the rest is way faster. And if anyone knows - is there any way to render it in the cloud?
@jimj26837 ай бұрын
Someone needs to apply these 3D techniques to Google Street View!
@GSXNetwork8 ай бұрын
Watch some photogrammetry tutorials to eliminate the guesswork.
@dainjah8 ай бұрын
What are your system specs? A 4090?
@OlliHuttunen788 ай бұрын
An Nvidia RTX 3070 graphics card with 8 GB of VRAM; the CPU is a Ryzen 7, and 64 GB of RAM.
@dainjah8 ай бұрын
@@OlliHuttunen78 Thank you. I'm currently on a 3050 laptop GPU and wanted to figure out how much of an upgrade I need to speed up my splat creation.
@ZoomPhone-f2c3 ай бұрын
LUMA AI
@RafRafaRafRafa17 күн бұрын
Thanks a lot for sharing all your experiments! Did you try different radiance field profiles? I'm trying nerfXXL to check if it gets better results, but I'm curious to know whether you have experimented with that. Many thanks!
@underbelly698 ай бұрын
Is it possible to convert an iPhone 15 LiDAR scan (not photos), e.g. exported as a .ply file from a LiDAR scanning app?