The Difference Between NeRF And Photogrammetry 3D Scan

74,083 views

Wintor AR tour

1 year ago

Last week, I made my first 3D scan using Polycam. It uses a technology called photogrammetry to generate a 3D model from a series of photos taken from multiple angles. The resulting 3D model can then be used in AR or VR applications, which is what makes it so interesting.
Recently a new technology called NeRF (Neural Radiance Fields) appeared and made a ton of headlines. It's similar to photogrammetry in that it also visualises a 3D scene or object using images as input, but it differs from photogrammetry in important ways.
The main difference between these two technologies is that photogrammetry generates a 3D model with meshes and textures, stored in a format that traditional 3D tools can read. That means we can use it in 3D animation, games, or VR and AR applications.
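To make that concrete, here is a minimal sketch (in Python, with a hypothetical file name) showing that such a photogrammetry export is just an ordinary mesh file that standard 3D tooling can open and inspect:

```python
# Minimal sketch: inspect a (hypothetical) photogrammetry export with trimesh.
# pip install trimesh
import trimesh

# "polycam_scan.glb" is an assumed file name for a GLB exported from a scanning app.
mesh = trimesh.load("polycam_scan.glb", force="mesh")  # load and merge the scene into one mesh

print("vertices:", len(mesh.vertices))   # the 3D points of the reconstructed surface
print("triangles:", len(mesh.faces))     # the faces connecting those points
print("bounds:", mesh.bounds)            # axis-aligned bounding box of the scan
```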
A NeRF, by contrast, generates a 'radiance field' instead of a traditional 3D model, so the way the scene is stored is very different.
NeRF uses machine learning to create this radiance field. With it, you can render an object from new viewpoints, so when you move it around it appears three-dimensional to your eyes. The radiance field has learnt to predict what the object would look like from any angle and renders the image you see on your screen.
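To illustrate the idea (this is only a rough sketch, not an actual NeRF implementation; all names, sizes and numbers below are assumptions): a small neural network takes a 3D position and a viewing direction and returns a colour and a density, and many such samples along a camera ray are blended into one pixel of the rendered image.

```python
# Illustrative sketch of the NeRF idea (not a trained or complete NeRF):
# a network maps a 3D point + view direction to colour and density, and
# samples along each camera ray are alpha-composited into a pixel colour.
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Toy stand-in for a NeRF MLP: 3D point + view direction -> colour + density."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, positions, directions):
        out = self.net(torch.cat([positions, directions], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colour in [0, 1]
        sigma = torch.relu(out[..., 3])     # non-negative density ("how much stuff is here")
        return rgb, sigma

def render_ray(field, origin, direction, n_samples=64, near=0.1, far=4.0):
    """Blend samples along one camera ray into a single pixel colour (volume rendering)."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction           # sample points along the ray
    dirs = direction.expand(n_samples, 3)               # same view direction for every sample
    rgb, sigma = field(points, dirs)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)             # opacity of each sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]  # light surviving so far
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)          # composited pixel colour

# Hypothetical usage: one pixel seen from a new camera position and direction.
field = TinyRadianceField()
pixel = render_ray(field, torch.tensor([0.0, 0.0, 0.0]), torch.tensor([0.0, 0.0, 1.0]))
```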
To give an example: 10 years ago we used a series of images with a slider to make an object on a website appear three-dimensional. I remember a cool slider on the Apple website that let you see the iPod touch from multiple angles. When you twirl it around, it almost looks like a 3D model, right?
But if I want to see the iPod from an angle that none of the pictures captured, I'm out of luck. With NeRF, we can train the machine learning model and then use the resulting radiance field to generate images of the iPod from those new perspectives too.
I recreated the iPod touch slider at home and fed the images into Luma; this was the result. The original iPod slider images didn't work because they have no background.
The nice thing about NeRF is that reflections and light effects can be captured very accurately. Water, glass, and shiny surfaces usually don’t work well with photogrammetry and the traditional 3D model it creates.
Currently, a downside of NeRF is that it isn't easily applied in AR or VR applications yet. However, that will improve over time with better export tools and dedicated viewing applications.
For my own experimentation, I used Luma and Polycam. I edited the AR scenes with www.wintor.com
Follow me for more insights about augmented, mixed and virtual reality. Bye!

Comments: 52
@mrspazzout1 · 1 year ago
Nerf was new to me but looks like it has a lot of room to grow as the ML models get better over time. Very clear explanation, thanks!
@wintorartour · 1 year ago
Great to hear Merijn. It’s definitely worth checking out and might be of huge help in some projects. Glad you liked the explanation. Definitely subscribe if you don’t want to miss other topics about AR and VR.
@jimj2683 · 1 year ago
One day machine learning will become so good that you could feed it all the photos on the internet and get an accurate 3D model of the entire planet with all the people on it (at least those with Facebook/Instagram images...)
@mario_vasquez_ · 1 year ago
After watching a few videos I think I'm finally understanding what NeRFs are. From the videos I've seen, Luma is where you get real-time "3D models", like in Unreal, with lighting and reflections.
@wintorartour · 11 months ago
It has gotten a lot better since this video!
@utsanda · 1 year ago
thanks for the explanation
@wintorartour · 1 year ago
You’re welcome! Make sure to subscribe of course!
@JasonSipe16 · 11 months ago
Thanks for this video. It has come a long way in less than a year! Also, Luma AI has a new UE5 plugin!
@wintorartour · 11 months ago
Indeed, that looks very interesting. A lot has happened since we shot this video.
@ozibuyensin · 1 year ago
I'm sorry if this is dumb, but does this mean we can use NeRF-created images and volumes to create a more detailed 3D model using the classic photogrammetry method? I'm sure we'll see more creative uses of it once it's open for people to use, but other than that and potential social media usage, NeRF's area of utilization seems pretty narrow compared to photogrammetry.
@wintorartour · 1 year ago
I was thinking that too, and I believe the export tools that produce a GLB, for example, are already doing that. However, photogrammetry algorithms don't work nicely with reflections and such, so it might not work as expected. One great use case is using NeRFs to create video shots!
@xtraeone5947 · 6 months ago
@@wintorartour I think work is already going on with NeRF for video shots.
@benarmony1532 · 1 year ago
Short and informative. Thank you! Great video👍
@wintorartour · 1 year ago
Thank you! Keep an eye out for new videos and subscribe so you don't miss anything :)
@JG-vl4ig · 1 year ago
Wow great video!
@wintorartour · 1 year ago
Glad you enjoyed it. Have you also subscribed for our future content?
@hardikadoshi3568 · 1 year ago
Which one can we use for scanning environments like an office or home interior for use in VR?
@wintorartour · 1 year ago
Both, actually, nowadays. Polycam would be a little easier to light and edit later, but Luma recently announced a new version which makes it easier to include in VR projects.
@RR-gx4ec · 10 months ago
The main interesting thing about NeRFs is the ability to capture view-dependent lighting (reflections). And then Luma Labs goes "look, you can export NeRFs to your favorite 3D software like Blender and Unreal!" The trick? They never mention that all reflection information is gone once you do that. A waste of time.
@wintorartour · 10 months ago
But after seeing the video, you know why the reflection is gone!
@pixperfect5834 · 1 year ago
What's that app you used for AR?
@wintorartour · 1 year ago
To view the AR content we used our own app, Wintor. It'll launch in three weeks, but you can already get the beta by going to wintor.app/
@bensmirmusic · 1 year ago
Will NeRF be able to generate a 3D environment from Midjourney 2D art?
@wintorartour · 1 year ago
I am not sure about that. Probably not.
@juanestrella6975 · 11 months ago
Thanks!
@wintorartour · 11 months ago
You're welcome!
@Draconic404 · 1 year ago
Those two apps don't seem to be available on Android; what are some alternatives for both types of 3D scanning?
@wintorartour · 1 year ago
Interesting that they are not available on Android. I found this online, maybe that will help you: all3dp.com/2/best-3d-scanner-app-iphone-android-photogrammetry/
@Draconic404 · 1 year ago
@@wintorartour Luma AI is not on Android but Polycam is; my phone doesn't seem to support it, which is why it wasn't on the Play Store for me.
@dietrichdietrich7763 · 1 year ago
Thank You! 3D career 😁 Here I come! #Ikuzo #Yoshi
@levexis · 6 days ago
woof
@fintech1378 · 2 months ago
How can we create a 3D asset from a product image and insert it into an existing video?
@wintorartour · 2 months ago
Creating a 3D model from a single image is really experimental. There are different tools for that, but I haven't found anything that I really like.
@marioharper5688 · 1 year ago
Do you guys use Unity software at all?
@wintorartour · 1 year ago
Yes we do! It’s a great tool for everything related to AR/VR and game design. You too?
@marioharper5688 · 1 year ago
@@wintorartour No. I just own stock. But I think AR/VR is the future. Good to know developers actually use the software.
@podermanifiesto18 · 1 year ago
What's the name of the app, and can I find it in the iOS App Store?
@wintorartour · 1 year ago
I used the following apps: Polycam, LUMA (invite only) and Wintor AR Tours. Polycam and LUMA might only be available on iPhone, I'm not sure. Wintor AR Tours is available on any device. Have a great day!
@UgurEnginDeniz · 9 months ago
Both NeRF and photogrammetry start from a point cloud. Both can be meshed.
@wintorartour · 8 months ago
But still very different technologies!
@UgurEnginDeniz · 8 months ago
@@wintorartour Yes. Photogrammetry is like a brute-force approach and has been around for quite a while. It will stay around, because NeRF's data augmentation is not always desired and can work against accuracy in some cases. Both are brilliant technical ideas and implementations.
@nightmisterio · 1 year ago
I wish to get good 3D models with these...
@wintorartour · 1 year ago
You should try, they might have become better now!
@SuperAnirock · 1 year ago
NeRF scans can also be converted into 3D models and work in AR/VR applications... with better output :)
@wintorartour · 1 year ago
It is getting better, but those models can't be called NeRFs anymore. NeRF is used to get there and indeed, with better results nowadays!
@iamvartist · 1 year ago
LUMA is not on Android yet, right?
@wintorartour · 1 year ago
I don’t think so :(
@nickhockings443 · 1 year ago
Err, no. "Photogrammetry" means _any_ method that measures things using light. NeRF is a new photogrammetry technique.
@wintorartour · 1 year ago
Thanks for the comment. With NeRF you actually don't measure anything. It's basically a trained AI model aimed at showing a 3D representation based on an image sequence as input, as the video explains.
@nickhockings443 · 1 year ago
@@wintorartour What NeRF measures is the radiance field, which is a continuous model of what parts of the volume emit or absorb light. The MLP acts as a function approximator, i.e. a convenient way to represent and fit the model, by measuring the photometric error between the observed training images and the predicted images generated by the radiance field that the MLP represents. NB 1) a voxel grid, a depth map, a radiance field, or a collection of smoothed particles are all models, which measure the geometry and properties of a space. NB 2) AI is an application of mathematics. There is no magic (even if it may feel otherwise ; ).
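As a minimal illustration of the 'photometric error' mentioned in this comment (a hedged sketch only; the tensors and shapes below are assumptions, not a real NeRF training loop):

```python
# Minimal sketch of a photometric error: the squared difference between pixels
# rendered from the current radiance field and pixels of the captured photos.
import torch

def photometric_loss(rendered_rgb, observed_rgb):
    """Mean squared error between predicted and captured RGB values."""
    return ((rendered_rgb - observed_rgb) ** 2).mean()

# Hypothetical shapes: a batch of 1024 ray/pixel pairs with RGB values in [0, 1].
rendered = torch.rand(1024, 3, requires_grad=True)   # stand-in for pixels rendered by the MLP
observed = torch.rand(1024, 3)                       # stand-in for pixels from a training photo
loss = photometric_loss(rendered, observed)
loss.backward()                                      # gradients would update the MLP's weights
```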
@DrGeta666 · 23 hours ago
Terrible content