Tom Goldstein: "What do neural loss surfaces look like?"

19,026 views

Institute for Pure & Applied Mathematics (IPAM)

A day ago

Comments: 15
@hxhuang9306 (6 years ago)
As a noob, I just wanted to see what loss functions in more complex networks look like. I was not disappointed.
@dshahrokhian (5 years ago)
Great video summary of all the work in the Maryland lab!
@AoibhinnMcCarthy (3 years ago)
Great lecture! A very clear explanation of how the network influences the loss function.
@nguyendinhchung9677 (3 years ago)
Very good and funny video; it brings a great sense of entertainment!
@joshuafox1757 (6 years ago)
How much computational power does it cost to evaluate the loss landscape using this method, compared to a more naive method?
@ProfessionalTycoons (5 years ago)
Amazing video
@vtrandal (5 months ago)
At 18:00 the speaker, Tom Goldstein, answers a question: is this the whole error surface? His answer contains good news and bad news. The good news is that the slice shown extends pretty far relative to the size of the weights involved; the bad news is that adding skip connections does not convert it into a convex optimization problem. At least that's what I take from the question and the answer.
@dimitermilushev575 (4 years ago)
Thanks, this is a great video. Do you see any issues or fundamental differences in applying these techniques to sequence models? Is there any research doing so?
@XahhaTheCrimson (4 years ago)
This helps me a lot
@DonghuiSun (6 years ago)
Interesting research. Has the code been shared?
@onetonfoot (5 years ago)
github.com/tomgoldstein/loss-landscape
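For readers wondering what the linked code computes, here is a minimal sketch of the filter-normalized 2D loss-slice idea described in the talk, assuming a PyTorch model, loss function, and data loader; the function names below are illustrative and are not the repo's actual API. Note that filling the grid costs steps² full passes over the data, which also speaks to the cost question asked above.

```python
import torch

def random_direction_like(model):
    """One random tensor per parameter, rescaled so each filter matches the
    norm of the corresponding weight filter (filter normalization)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:
            w_norm = p.flatten(1).norm(dim=1, keepdim=True)
            d_norm = d.flatten(1).norm(dim=1, keepdim=True) + 1e-10
            d = d * (w_norm / d_norm).view(-1, *([1] * (p.dim() - 1)))
        direction.append(d)
    return direction

def loss_slice(model, loss_fn, loader, steps=21, span=1.0):
    """Evaluate the loss on a (steps x steps) grid of weights
    w + a*dx + b*dy around the trained weights w.
    Cost: steps**2 full passes over the data loader."""
    base = [p.detach().clone() for p in model.parameters()]
    dx, dy = random_direction_like(model), random_direction_like(model)
    alphas = torch.linspace(-span, span, steps)
    grid = torch.zeros(steps, steps)
    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(alphas):
                for p, w, u, v in zip(model.parameters(), base, dx, dy):
                    p.copy_(w + a * u + b * v)
                total, count = 0.0, 0
                for x, y in loader:
                    total += loss_fn(model(x), y).item() * y.size(0)
                    count += y.size(0)
                grid[i, j] = total / count
        for p, w in zip(model.parameters(), base):  # restore the trained weights
            p.copy_(w)
    return grid  # visualize with e.g. matplotlib's contourf
```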
@강동흔-r5o (2 years ago)
Thank you professor!! I love this video. At 38:45, why do we look for saddle points? How are saddle points used in research?
@aaAa-vq1bd (a year ago)
Saddle points are the critical points where some directions curve upwards and others downwards. But why are they useful? Good question. I looked it up: “one of the reasons neural network research was abandoned (once again) in the late 90s was *because the optimization problem is non-convex*. The realization from the work in the 80s and 90s that neural networks have an exponential number of local minima, along with the breakout success of kernel machines, also led to this downfall, as did the fact that networks may get stuck on poor solutions. Recently we have evidence that the issue of non-convexity may be a non-issue, which changes its relationship vis-a-vis neural networks.”

What does this mean? Well, say we want to average the values in some neighborhood of an n-dimensional space. We can't just compute a Gaussian kernel, because it gets (potentially exponentially) worse as the dimension goes up, so we need to unfold the manifold onto a flat 2D Euclidean coordinate system. What's the issue? Local minima (points that look like minima within a restricted region of the function) can get our averaging machine stuck as it runs stochastic gradient descent, and a neural network in general has exponentially many local minima, so we worry that there is no optimization guarantee for neural networks at all. Well, shit.

The thing is, though, that the critical points of high-dimensional surfaces encountered along almost all of the trajectory are saddle points, not local minima, and saddle points pose no problem for stochastic gradient descent. And if there is any randomness in the data, it is exponentially likely that all the local minima are close to the global minimum, so local minima are not a problem either.

Basically, saddle points are the highly prevalent critical points in parameter space, and they don't pose a problem for the algorithms and architectures we want to use. Local minima do pose a problem, but in high dimensions we have found that they sit only in certain places (near the global minimum). So you can't use saddle points in your data for anything special; it's just that a lot of algorithms (like Newton, gradient descent, and quasi-Newton) treat saddle points as if they were local minima and get stuck much more often than they should. (A side note: there is something called “saddle-free Newton”, written about in 2014, but it has since been seen that SGD works just as well without needing to compute a Hessian over that many parameters.) Hope that helps a bit.
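A tiny NumPy toy may make the contrast above concrete (a sketch of my own, not from the talk or the comment): on f(x, y) = x² − y² the only critical point is a saddle at the origin; gradient descent started slightly off the x-axis drifts away from it, while a single exact Newton step jumps straight onto it, which is why Newton-type methods are said to be attracted to saddles.

```python
import numpy as np

def grad(p):
    """Gradient of f(x, y) = x**2 - y**2; its only critical point, the origin, is a saddle."""
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

hessian_inv = np.linalg.inv(np.array([[2.0, 0.0], [0.0, -2.0]]))  # constant Hessian of f

start = np.array([1.0, 1e-3])  # almost on the x-axis, with a tiny component along the escape direction

p_gd = start.copy()
for _ in range(50):            # plain gradient descent with a fixed step size
    p_gd = p_gd - 0.1 * grad(p_gd)

p_newton = start - hessian_inv @ grad(start)  # one exact Newton step

print("gradient descent ends at", p_gd)       # y-component has grown: it is moving away from the saddle
print("Newton step lands at    ", p_newton)   # essentially (0, 0): straight onto the saddle
```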
@박이삭-m8o (6 years ago)
Can I get the PDF file? Thanks.
Watching Neural Networks Learn · 25:28 · Emergent Garden · 1.3M views
Geometric Intuition for Training Neural Networks · 30:21 · Seattle Applied Deep Learning · 18K views
Tom Goldstein: "An empirical look at generalization in neural nets"
53:03
Institute for Pure & Applied Mathematics (IPAM)
Рет қаралды 6 М.
Josh Tenenbaum - Scaling Intelligence the Human Way - IPAM at UCLA
1:00:11
Institute for Pure & Applied Mathematics (IPAM)
Рет қаралды 4,4 М.
MIT Introduction to Deep Learning | 6.S191
1:09:58
Alexander Amini
Рет қаралды 727 М.
Stanley Osher: "New Techniques in Optimization and Their Applications to Deep Learning..."
34:08
Institute for Pure & Applied Mathematics (IPAM)
Рет қаралды 2,4 М.
Stéphane Mallat: "Deep Generative Networks as Inverse Problems"
37:10
Institute for Pure & Applied Mathematics (IPAM)
Рет қаралды 5 М.
The Most Important Algorithm in Machine Learning
40:08
Artem Kirsanov
Рет қаралды 516 М.
Deep Ensembles: A Loss Landscape Perspective (Paper Explained)
46:32
Yannic Kilcher
Рет қаралды 23 М.
12a: Neural Nets
50:43
MIT OpenCourseWare
Рет қаралды 532 М.