Unleash the power of 360 cameras with AI-assisted 3D scanning. (Luma AI)

75,210 views

Olli Huttunen

A day ago

In this video, we'll learn how to create 3D scans with a 360 camera using the Neural Radiance Fields (NeRF) technique.
Luma AI offers a great web tool that generates 3D NeRF models from video, and you can capture that video with a 360 camera. This lets you scan objects and environments quite easily by taking advantage of the 360 camera's features. The technique is still in its early stages of development, so be sure to check out this video to learn more about it!
Luma AI website: lumalabs.ai/
Get an Insta360 One RS from here:
www.insta360.com/sal/one_rs_1...
#lumaai #insta360 #unrealengine5

Comments: 73
@johnw65uk 24 days ago
Tip: Merge the vertices on the model and you can sculpt inside a 3d package without the mesh breaking apart.
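For anyone trying that tip in Blender, here is a minimal Python sketch of the same "Merge by Distance" step (my illustration, not from the video); it assumes the imported Luma scan is the active object, and the 0.0001 m threshold is only a guessed starting value.

```python
import bpy

# Merge duplicate vertices on the imported scan so sculpting doesn't tear the mesh apart.
obj = bpy.context.active_object                 # the GLTF/OBJ scan imported from Luma AI
bpy.ops.object.mode_set(mode='EDIT')            # vertex merging happens in Edit Mode
bpy.ops.mesh.select_all(action='SELECT')        # operate on every vertex
bpy.ops.mesh.remove_doubles(threshold=0.0001)   # "Merge by Distance" with a small threshold
bpy.ops.object.mode_set(mode='OBJECT')
```

The same operation is available in the UI under Mesh > Merge > By Distance in Edit Mode.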
@thatvideoguy4k 4 months ago
I got to one of your videos while looking for some turntable alternatives, and here I am watching the third one in a row that has nothing to do with what I was looking for in the beginning. Well done mate, you make very engaging and informative videos 👍
@ney.j_ 9 months ago
Excellent video appreciate the work you put in for it!
@Decoii 7 months ago
Thank you for this. Even the harsh models will be great references in terms of scaling.
@Sigurgeir 8 months ago
This is just brilliant, thank you for the great explanation. I wonder if this method would be useful to scan a bigger environment like a whole street from a moving car to use as a backdrop in a studio recording.
@AClarke2007 7 months ago
Keeping us all up to date and realising that 360 isn't just a gimmick any more!
@tribaltheadventurer 7 months ago
This is fantastic work Olli, keep up the good work.
@TheBFHmontage 9 months ago
great informative video, just what I needed thanks!
@mariorodriguez8627 10 months ago
Great work thank you for the info :)
@user-rv1yo3ww3t 10 months ago
Great work thank you for the info :). very interesting!.
@lobodonka 10 months ago
Nicely described video! Your interests match mine, so, just subscribed! Bring us some more goodies. 👍
@fallogingl 7 months ago
Unironically the lump looks like the orb from Donnie Darko 😂
@Mauriliocaracci 8 months ago
Great! Thanks
@f1pitpass 7 months ago
Thank you Olli!
@camshand 9 months ago
Love the car example for typically "impossible" camera moves through windows. I do wonder if putting the windows up and down as the camera moves through might trick it into keeping the windows up for the NeRF scan, allowing you to move through the passenger windows in the final animation.
@OlliHuttunen78 9 months ago
Good idea. You should try that. Although I think the car would then have to be scanned twice: once with the windows open and once with them closed, and then these parts of the model combined somehow, for example in Unreal, since NeRF creates a lumpy mesh if something moves during the scanning.
@madedigital 9 months ago
very good info
@notanotherbrick6114 9 months ago
Fascinating! Can you view the generated models in a VR headset, such as the Quest 2? In that case, can you walk around inside the model? This would be a perfect application for that!
@OlliHuttunen78 9 months ago
Sure, it can be done in Unreal. There is a video from Bad Decisions Studio where the guys test how NeRF models run in VR in Unreal Engine. Check it out: kzbin.info/www/bejne/mHzXY6KMidxsZpI
@easyweb3056 26 days ago
Excellent content, keep going!
@IdahoMthman 7 months ago
I will have to try this with my X3
@saemranian 9 months ago
Thanks for sharing
@ArcticSeaCamel 9 months ago
Oh my! Great stuff coming. Now if we could just turn that into a building's IFC components, we'd be all set!
@o0oo888oo0o 8 months ago
Thank you
@smiledurb 11 months ago
very interesting!
@360socialms 8 months ago
Thank you very much for the tutorial!! I have uploaded a 360 video to the Luma web tool as equirectangular, filmed with the camera always held vertically (the video is not walking around an object, it is a free walk through an outdoor space). Luma processes it and creates the NeRF model, but with significant noise, cuts and cloud-like artifacts. Likewise, when I create a Reshoot in free form and render it, the results are still of poor quality. Do you have any suggestions to improve this? Does the source 360 video have to meet any requirements? Thank you so much!!
@OlliHuttunen78 8 months ago
Yes. I have also noticed that Luma does not make such great models from 360 equirectangular footage where you just walk in a straight line. It will create something, but Luma mostly relies on circular movement where you move around something. You also should not rely on what you see in the web browser when rotating the model in 3D mode; it is only an approximate preview. A much better result appears when you render videos out from the Luma service. That is when the actual NeRF model can be seen, and it often looks much better than the model in the web browser. Another tip is to download the model into the Unreal game engine and see how the volume model looks there. All the other options, where you download the model in GLTF, USD or OBJ format, convert the NeRF volume to polygons and it loses quality; as polygons the model is not that good. As for the 360 camera settings, I don't have any special tips. Just don't upload clips that are too long, where you walk a route of over 100 meters; Luma works best when the video is shot over a small area.
@360socialms 8 months ago
@@OlliHuttunen78 Thank you very much Olli for the answer. Yes indeed, it seems that Luma responds very well to scanning objects when moving around them, and not on more linear routes. In my case, the source video is very short, only 17 seconds, and taken with a Ricoh Theta V camera. The final video with the route animation in the Reshoot and the 3D model (GLTF) generated by Luma are both very bad. I'll keep trying different alternatives to see if I can get better results. Your channel is the only one that deals with this important topic. Thank you very much for your help!!
@LaurentEgliAdventure 8 months ago
Great video thanks for sharing and thanks and congratulations to your partner who puts up with your tests 😂
@jcairetaserra 5 months ago
Very good and interesting video
@lennycecile3775 8 months ago
Hi Olli, great content. I'm curious whether this will work with the Insta360 Sphere, and what kind of results you would get.
@OlliHuttunen78 8 months ago
Sure, it works. I have tried that with the Sphere on my drone. But it is not that convincing when rendered as an equirectangular image out of Luma AI. When they get the new Gaussian Splatting method working for 360 images, it will be perfect. We just need to wait a little bit, because it's a very new technique.
@lennycecile3775 8 months ago
@@OlliHuttunen78 Thank you. It's mind-boggling technology 🔥
@luckybarbieri8533 3 months ago
Great info. Thx. Do you think this setup would be good to create a 3D model of a large place, like a church for example?? Or do you recommend another type of setup? Thank you!
@jasoncow2307 8 months ago
Hi! I'm wondering, is the video you uploaded the original 360 footage or footage re-cut to a single camera view?
@OlliHuttunen78 8 months ago
Yes. I tested with both. The original full equirectangular footage does not give as good a result as video cropped from the full 360 video. Luma works better if you can go around your target.
@masanoriito 11 months ago
Please let me know how I can get high quality scans like yours. You mentioned in the middle of the video that you export the HD video instead of the 360 video and upload it to Luma AI. However, in the subsequent scene where the two containers are painted, you used equirectangular video. Which video format would you recommend based on your experience so far? Also, did uploading the .insv file directly work for you? I'm using a ONE X2, but it doesn't work because it doesn't have a leveling function.
@OlliHuttunen78 11 months ago
Yes. I recommend that you always edit your material in Insta360 Studio first. So far I have gotten much more accurate and better NeRF models when I edit the video so that the target I shot stays in the middle of the picture for the whole video. Then I render it out as a normal MP4 in HD resolution and upload that to the Luma AI service as a normal video. The second option is to upload the full equirectangular video (also in MP4 format), but I have noticed that a NeRF trained from equirectangular video does not produce as accurate a model as one where the target is centered. Perhaps I could make another video where I go more deeply into these methods.
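If you don't have Insta360 Studio at hand, a rough stand-in (my sketch, not Olli's workflow) is to extract a flat HD view from the equirectangular MP4 with ffmpeg's v360 filter; the file names, field-of-view and yaw values below are placeholders, and unlike Studio's keyframed reframing a fixed yaw will not keep a moving target centered.

```python
import subprocess

# Pull a flat 1920x1080 view out of an equirectangular 360 MP4 using ffmpeg's v360 filter.
# Adjust yaw/pitch and the FOV so the scanned target stays roughly centered in frame.
cmd = [
    "ffmpeg", "-i", "equirect_360.mp4",
    "-vf", "v360=equirect:flat:h_fov=100:v_fov=70:yaw=0:pitch=0:w=1920:h=1080",
    "-c:v", "libx264", "-crf", "18",
    "reframed_hd.mp4",
]
subprocess.run(cmd, check=True)
```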
@masanoriito 11 months ago
Thank you for your detailed response. Looking forward to another explainer video. When scanning a place, do you scan the same place over and over again at different heights? Or is it a one time thing?
@OlliHuttunen78 11 months ago
Yes. When I'm scanning, I record everything at once into one video file. Usually with a 360 camera you don't need to make many walk-arounds of your object at different heights, because those wide lenses see most of the surroundings at once. With the selfie stick it is very easy to reach and capture all corners of your object.
@masanoriito 11 months ago
Gotcha! Thanks a lot!
@pietervandervyver516 8 months ago
If I take 30 seconds with a 360 camera, does it take up a lot of resolution or memory? I just want to video 4 people next to each other, similar to your car lady. Thank you
@resanpho 11 months ago
Hi Olli, and thank you for this interesting video. Do I get it right that the objects being recorded should be static, and the whole thing will not work when you have moving objects? For instance, would it be possible to capture a 360 video of a scene in which people dance? I guess not. Thanks
@OlliHuttunen78 11 months ago
Yes. This scanning method works only with static objects and surroundings. If something moves or passes by (like a bike or a car in the background) while you are scanning, the AI tries to ignore it and remove it from the radiance field. It's kind of the same effect as taking a photo with a very long exposure time. So you cannot make a very good 3D model with this method of a scene where people are dancing.
@resanpho 11 months ago
@@OlliHuttunen78 Thank you for the response. I was thinking about the ability to 3D model important events such as a wedding. If every guest plays along, one could create a memorable 3D model of the event. :) Another question: is there a special media player / tool to view the exported 3D model? Can a normal user easily view the model, or do they need to install specific and complex tools?
@OlliHuttunen78 11 months ago
Yeah! It could work to model that kind of group picture at a wedding if everybody can remain in place for a couple of minutes while you scan the moment with the 360 camera. You can easily share a link from Luma AI, and people can watch the rendered NeRF video and rotate the 3D model in a web browser. It works on mobile and on the computer. You don't have to log in or download any kind of special app or plugin for that. And the model can also be embedded in any webpage. Those are the normal features of this kind of cloud service. Luma AI is a great service.
@resanpho 11 months ago
@@OlliHuttunen78 Thanks a lot mate. Need to test it.
@sujitchachad 10 months ago
Thanks for the video. I followed your tips, but when I import the model into Blender it just imports a small chunk of the cropped scene. In Luma AI I have adjusted the crop to cover the whole geometry, but when I export to .gltf it exports the cropped geometry. Is this a limitation of the free service? I hope I have explained it properly.
@OlliHuttunen78 10 months ago
Yes. I noticed that Luma currently exports only cropped models if you export GLB or OBJ. If you export to Unreal you get both versions: the full model with the background and the cropped one. I guess LumaLabs would need to be asked directly whether they could include the full model for mesh exports as well.
@TrasThienTien a month ago
🤗🤗🤗
@rockbench 4 months ago
Hi, is the final result downloadable?
@michael_knight3457 5 months ago
Hello! Can the Luma AI phone scanning software scan a given item at a 1:1 ratio? Will it know the dimensions of the scanned item, e.g. height and width? I want to model a separate part based on the scanned item that would match the first one. Is that possible?
@tamiopaulalezon9573 a month ago
What are your PC specs, sir?
@robmulally 7 months ago
Thanks for this video. Time to dust off my 3d camera
@2imtuan 2 months ago
What is the accessory that you used with the Insta360 camera? I saw a connector attached to a rig.
@OlliHuttunen78 2 months ago
It is a power selfie stick. There is a battery in the selfie stick which can give extra power to the 360 camera via USB, and you can also press the record button and control the camera from the stick.
@2imtuan 2 months ago
@@OlliHuttunen78 oh right !! thank you so much mate
@JAYTHEGREAT355 10 months ago
Hello brother, did you shoot a 360 video, or were you shooting consecutive pictures to then upload to Luma AI?
@OlliHuttunen78 10 months ago
I shot a 360 video.
@JAYTHEGREAT355 10 months ago
@@OlliHuttunen78 Thank you brother, I will try to replicate it by following your video. I 3D print, so maybe I can scan some figurines and convert them to 3D-printable STLs. Thank you.
@OlliHuttunen78 10 months ago
@@JAYTHEGREAT355 I also recommend checking out the 3Dpresso web service 3dpresso.ai/. It can also make 3D models from video, and they turn out to be much more solid and suitable models for 3D printing than Luma AI models. When a NeRF model is turned into a polygon model it can be very broken, and it takes a lot of work to make it a solid STL for 3D printing.
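As one possible starting point for cleaning such a mesh before printing, here is a minimal sketch using the trimesh Python library (my example, not from the thread); the file names are hypothetical, and a badly broken NeRF export will still need manual repair in Blender or a dedicated mesh-fixing tool.

```python
import trimesh

# Load the polygon mesh exported from Luma AI (file name is hypothetical).
mesh = trimesh.load("luma_export.obj", force="mesh")

mesh.merge_vertices()                # weld duplicate vertices so neighbouring faces share edges
trimesh.repair.fill_holes(mesh)      # try to close small gaps in the surface
trimesh.repair.fix_normals(mesh)     # make face winding/normals consistent

print("watertight:", mesh.is_watertight)   # True means the mesh should slice as a solid
mesh.export("figurine_for_printing.stl")   # STL ready to drop into a slicer
```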
@tamiopaulalezon9573 a month ago
Which is better for you, Postshot or Luma AI?
@OlliHuttunen78 a month ago
I'd say Postshot, because you can train your model more accurately than in Luma AI and you can live-preview the process.
@Niberspace 25 days ago
If this app wasn't cloud based I would have loved to try it, but
@kriptomavi 7 months ago
Only iPhone?
@Hopp5ann a month ago
It has an Android app now.
@anthonycampbell7843 9 days ago
kzbin.info/www/bejne/hpTPqHSChc6kn7M Was your video done before the update to remove the floaters? Or were they still present during your tests at the 6:30 mark?
@OlliHuttunen78 8 days ago
My video was made after that Luma AI floaters announcement. But it should be noted that I presented the model in preview mode on Luma's web pages. It doesn't tell the whole truth. The final result of the NeRF model will only appear when the camera animation is rendered. There are often significantly fewer floaters to be seen. But this is quite secondary now that Gaussian Splatting technology has replaced everything and the older 3D models produced with NeRF technology are not talked about very much anymore. In that sense, many things in this video are already outdated information.
@Mateee.01 a month ago
If you use a Pro iPhone that has a LiDAR sensor, the result will be much more detailed than Luma AI...
@iarde3422 4 months ago
I hate it when people put their feet in dirty shoes on top of seats where other people are going to sit afterwards and get their pants dirty because of inconsiderate, filthy people who have climbed on the seat with their dirty shoes. If such people don't understand this, they should be punished by having to clean the seat every day for a week.