Jon Barron - Understanding and Extending Neural Radiance Fields

64,385 views

Vision & Graphics Seminar at MIT

1 day ago

Comments: 42
@mattwillis3219 · 5 months ago
What an incredible time we live in, when one of the authors of the paper can explain it to the masses in a public forum like this! Incredible and mind-expanding work, guys! Thank you so much :)
@twobob · 2 years ago
Popping the link to the videos into the description of the video would make a lot of sense. Enjoyed the NeRF paper.
@codebycandle · 4 months ago
...a good reminder to keep up w/ my PyTorch studies.
@briandelhaisse1112 · 1 year ago
Very good explanation! Thanks for the talk.
@SafouaneElGhazouali · 1 year ago
Very nice work!! Keep it up, Drs.
@jeffreyalidochair · 1 year ago
A practical question: how do people figure out the viewing angle and position for a scene captured without that dome of cameras? The dome of cameras makes it easy to know the exact viewing angle and position, but what about just someone with one camera walking around the scene, taking photos of it from arbitrary positions? How do you get theta and phi in practice?
@alexandrukis776 · 11 months ago
These papers usually use COLMAP to estimate the camera position for every captured image for real-world datasets. For the synthetic dataset (e.g. the yellow tractor), they just take the camera positions from Blender, or whatever software they use to render the object.
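For illustration, here is a minimal sketch of that COLMAP workflow driven from Python, assuming the colmap command-line tool is installed; the paths below are made up:

import os
import subprocess

image_dir = "scene/images"      # photos taken while walking around the scene
database = "scene/database.db"  # COLMAP feature/match database
sparse_dir = "scene/sparse"     # output: camera poses + sparse point cloud

os.makedirs(sparse_dir, exist_ok=True)

# 1. Detect keypoints in every image.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", database,
                "--image_path", image_dir], check=True)

# 2. Match keypoints across image pairs.
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", database], check=True)

# 3. Incremental structure-from-motion, which recovers each image's camera pose.
subprocess.run(["colmap", "mapper",
                "--database_path", database,
                "--image_path", image_dir,
                "--output_path", sparse_dir], check=True)

# The per-image extrinsics/intrinsics written under sparse_dir are what NeRF
# training code reads as the viewing position and direction (theta, phi).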
@hehehe5198 · 11 months ago
very good explanation
@cem_kaya · 2 years ago
thanks for sharing this presentation
@Patrick-vq4qz · 1 year ago
Awesome talk!
@TechRyze · 1 year ago
I'm curious to know: when he said at the end that he only has 3 scenes ready to show, considering he mentioned only using 'normal' random public photos, why would this be? Is it related to the computational time required to render the finished product, or to some other reason? If the software works, then surely, given the required amount of time and computational resources, this technique could be used on a potentially infinite number of scenes, using high-quality photos sourced online. Is there a manual element to this process that I've missed, or is access to the rendering/processing time and resources the limitation?
@kefeiyao7784 · 2 years ago
Great explanation indeed. I have one question: is it ray tracing or ray marching? From the talk it seemed to be ray marching, but the actual phrasing in the talk was ray tracing.
@masonhawver3577 · 1 year ago
Marching
@prometheususa · 2 years ago
Brilliant explanation!
@SheikahZeo · 2 years ago
NeRF outputs transparency, but all the demo videos seem to only have opaque surfaces. Does it actually work with semi-transparent objects?
@SheikahZeo · 2 years ago
The colour output will be constant along a freely propagating ray. It seems you waste time recomputing the whole network when you are really just interested in the density.
@Cropinky · 1 year ago
Works that came after vanilla NeRF handle opacity better than vanilla NeRF does.
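On that density point: in the paper's architecture, density depends on position alone, and only the colour head additionally sees the view direction, so in principle the view-dependent part can be skipped when only density is needed. A rough PyTorch sketch of that split (the layer count, widths, and omitted positional encoding are simplifications, not the authors' code):

import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, pos_dim=3, dir_dim=3, width=256):
        super().__init__()
        # Trunk that only sees the 3D position.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(width, 1)   # density from position only
        self.feature = nn.Linear(width, width)
        # Colour head also sees the viewing direction.
        self.color_head = nn.Sequential(
            nn.Linear(width + dir_dim, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.sigma_head(h))                         # opacity/density
        rgb = self.color_head(torch.cat([self.feature(h), view_dir], dim=-1))
        return rgb, sigma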
@ritwikraha · 2 years ago
Excellent explanation!!!
@baselomari3657 · 2 years ago
Glad to see Seth Rogen successful with this career change.
@arcfilmproductions7297 · 1 year ago
What's the difference between this and the 3D scans you get on an iPad Pro, apart from the fact that this looks better? Just trying to get my head around it.
@hanayear · 4 months ago
The English subtitles are not in sync with the video!! Someone please help 😭
@sirpanek3263 · 2 years ago
Do you see any use for this with drone imagery and fields of crops? This wouldn't work for stitching images, I'm guessing…
@zjulion · 1 year ago
Nice talk. Keep going!
@yunhokim7846 · 2 years ago
This is super helpful. Thank you so much!
@mirukunoneko1375 · 8 months ago
The CC is a bit offset, but overall it's great!
@theCuriousCuratorML · 1 year ago
Where is that notebook the speaker is talking about?
@rahulor3773 · 1 year ago
Please provide the link if you have it already. Thanks in advance!
@darianogina148 · 1 year ago
Could you please tell us how to make a NeRF representation meshable?
@seanchang2876 · 2 years ago
Hi, I'm just wondering how to know the ground-truth RGB color for each (x,y,z) spatial location?
@wishful9742 · 2 years ago
Hi, you don't need that data. The neural net produces RGB and alpha for each point along the ray (emitted from the pixel along the view direction); once we have RGBA for all the points on the ray, we obtain the final pixel RGB color using ray marching (so all of the parameters along the ray result in the RGB of the pixel). We can then compare the rendered pixel with the actual pixel and learn from it to produce better parameters along the ray.
@miras3780 · 1 year ago
@wishful9742 Hi, may I ask how exactly ray marching works? I'm not sure how the MLP knows that the scene is occluded at a certain distance. Does it also learn the sigma values? Or is the distance to the occluded point calculated from the camera intrinsic and extrinsic properties? (I am new to NeRF.)
@wishful9742 · 1 year ago
@miras3780 Hello, for each point along the ray the MLP predicts a color and an opacity value. The final pixel is simply the weighted sum of the colors (each weighted by its opacity). This is one way of ray marching; there are other algorithms, of course. Please watch 10:35 to 13:50.
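To make that weighted sum concrete, here is a small NumPy sketch of the standard NeRF compositing step for a single ray; the inputs stand in for whatever the MLP predicts at the sampled points:

import numpy as np

def composite_ray(rgb, sigma, t_vals):
    """rgb: (N, 3) sample colors, sigma: (N,) densities, t_vals: (N,) sample depths."""
    deltas = np.diff(t_vals, append=1e10)                 # spacing between samples
    alpha = 1.0 - np.exp(-sigma * deltas)                 # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # light reaching each sample
    weights = trans * alpha                               # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)           # final pixel color

# Training then compares this rendered pixel to the photo's pixel, e.g.
# loss = np.mean((composite_ray(rgb, sigma, t_vals) - true_rgb) ** 2)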
@melo2722 · 3 years ago
At 24:42 he says "you can see the ReLU activations in the image". What is he pointing to in the image?
@paoloceric6464 · 3 years ago
I think he might be referring to the flat areas (which would be the flat part of the ReLU).
@prbprb2 · 7 months ago
Can someone give a link to the Colab discussed around 12:00?
@jouweriahassan8922 · 1 year ago
What's the difference between this and photogrammetry?
@anirbanmukherjee5181 · 9 months ago
Intuitively, the main difference is that photogrammetry tries to build an actual 3D model from the given images, while a NeRF learns what images from different viewpoints will look like without building an explicit 3D model. I'm not sure about this point, but NeRFs are probably better given a certain number of images.
@norlesh · 1 year ago
45:32 - "we're never going to get real-time NeRF" and then came Instant-NeRF... never say never.
@崔子藤 · 2 years ago
I like it😃
@mattnaganidhi942 · 1 year ago
Noice 👍
@prathameshdinkar2966 · 1 year ago
I hit the 1Kth like!
@jimj2683 · 1 year ago
One day these algorithms will be so good that you can simply feed in all the photos on the internet (including Google Street View and Google Images) and out comes a 3D digital twin of the planet, fully populated by NPCs and driving cars: essentially GTA for the entire planet. With enough compute power there is no reason this won't work when combined with generative AI that fills in whatever is missing by drawing on experience from trillions of images, videos, and 3D captures. Imagine giving a photo to a human 3D artist: he will be able to slowly build the scene in 3D from just that photo, using the real-world experience he has had. Here is a rule of thumb with AI: everything a human can do (even if only very slowly), AI will eventually be able to do much, much faster. Things are going to speed up a lot from here. Cancer research, Alzheimer's cures, aging reversal, etc. Exciting times.