As a noob I just wanted to see what loss functions in more complex networks look like. Was not disappointed.
@dshahrokhian 5 years ago
Great video summary of all the work in the Maryland lab!
@AoibhinnMcCarthy 3 years ago
Great lecture! Very clear explanation of how the network affects the loss function.
@nguyendinhchung9677 3 years ago
Very good and funny video, it brings a great sense of entertainment!
@joshuafox1757 6 years ago
How much computational power does it cost to evaluate the loss landscape using this method, compared to a more naive method?
@ProfessionalTycoons 5 years ago
Amazing video
@vtrandal 5 months ago
@18:00 the speaker, Tom Goldstein, is answering a question: is this the whole error surface? His answer contains good news and bad news. The good news is that the plotted region extends pretty far relative to the weights involved; the bad news is that adding skip connections does not convert it into a convex optimization problem. At least that's what I get from the question and the answer.
@dimitermilushev575 4 years ago
Thanks, this is a great video. Do you see any issues/fundamental differences in applying these techniques to sequence models? Is there any research doing so?
@XahhaTheCrimson 4 years ago
This helps me a lot
@DonghuiSun 6 years ago
Interesting research. Has the code been shared?
@onetonfoot 5 years ago
github.com/tomgoldstein/loss-landscape
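For anyone curious what that repo does, here is a minimal, hedged sketch (not the repo's actual code) of the filter-normalized 2D slice the talk describes: draw two random directions, rescale them filter-by-filter to match the norms of the trained weights, then evaluate the loss on a grid around the trained parameters. The names `net`, `criterion`, and `loader` are placeholders for your own PyTorch model, loss, and data loader, assumed to live on the same device.

```python
import torch

def random_directions(net):
    """One random direction per parameter, rescaled so each filter has the
    same norm as the corresponding filter of the trained weights."""
    dirs = []
    for p in net.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:  # conv / linear weights: normalize filter-wise
            for d_f, p_f in zip(d, p):
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        else:            # biases / norm params: leave them fixed (zero direction)
            d.zero_()
        dirs.append(d)
    return dirs

def loss_at(net, criterion, loader):
    """Average loss of the current weights over the whole loader."""
    net.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += criterion(net(x), y).item() * x.size(0)
            n += x.size(0)
    return total / n

def loss_surface(net, criterion, loader, steps=21, span=1.0):
    """Evaluate the loss on a (steps x steps) grid theta* + a*d1 + b*d2."""
    theta = [p.detach().clone() for p in net.parameters()]
    d1, d2 = random_directions(net), random_directions(net)
    alphas = torch.linspace(-span, span, steps)
    surface = torch.zeros(steps, steps)
    for i, a in enumerate(alphas):
        for j, b in enumerate(alphas):
            with torch.no_grad():
                for p, t, u, v in zip(net.parameters(), theta, d1, d2):
                    p.copy_(t + a * u + b * v)
            surface[i, j] = loss_at(net, criterion, loader)
    with torch.no_grad():  # restore the trained weights
        for p, t in zip(net.parameters(), theta):
            p.copy_(t)
    return surface
```

Plotting the returned `surface` as a contour plot gives the kind of pictures shown in the talk; the cost is one full evaluation pass per grid point (steps squared forward passes over the loader).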
@강동흔-r5o 2 years ago
Thank you professor!! I love this video. At 38:45, why do we look for saddle points? How are saddle points used in research?
@aaAa-vq1bd 1 year ago
Saddle points are critical points where some directions curve upwards and others downwards. But why are they useful? Good question. I looked it up: “one of the reasons neural network research was abandoned (once again) in the late 90s was *because the optimization problem is non-convex*. The realization from the work in the 80s and 90s that neural networks have an exponential number of local minima, along with the breakout success of kernel machines, also led to this downfall, as did the fact that networks may get stuck on poor solutions. Recently we have evidence that the issue of non-convexity may be a non-issue, which changes its relationship vis-a-vis neural networks.”

What does this mean? Well, say we want to average the values in some neighborhood of a point in n-dimensional space. We can’t just compute the Gaussian kernel, because it gets (potentially exponentially) worse as we go up in dimension. So we need to unfold the manifold onto a 2D Euclidean space (a flat coordinate system). What’s the issue? Local minima (points that look like minima within a restricted region of the function) can get our averaging machine stuck as it applies a stochastic gradient descent algorithm. And there are exponentially many local minima in a neural network in general, so we worry that there’s no guarantee of optimization with neural networks at all. Well shit.

The thing is, though, that the critical points of high-dimensional surfaces are, for almost all of the trajectory, saddle points, not local minima. Saddle points pose no problem to stochastic gradient descent. And if there is any randomness in our data, it’s exponentially likely that all the local minima are close to the global minimum. Therefore local minima are not a problem.

Basically, saddle points are the highly prevalent critical points in parameter space that don’t pose a problem for the algorithms and architectures we want to use. Local minima do pose a problem, but we’ve found that in high dimensions they only occur in certain places (near global minima). So you can’t use saddle points in your data for anything special; it’s just that a lot of algorithms (like Newton, gradient descent and quasi-Newton) treat saddle points like local minima and thus get stuck much more often than they should. (A side note: there’s something called “saddle-free Newton”, written about in 2014, but it’s been seen that SGD works just as well without needing to compute a Hessian over a lot of parameters.) Hope that helps a bit.
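To make that last point concrete (Newton-type steps are attracted to saddle points, while gradient descent slides off them), here is a toy, hypothetical sketch on f(x, y) = x^2 - y^2, whose only critical point is a saddle at the origin. This is just an illustration of the general idea, not anything from the talk or the paper.

```python
import numpy as np

def grad(p):                       # gradient of f(x, y) = x^2 - y^2
    x, y = p
    return np.array([2 * x, -2 * y])

hessian = np.array([[2.0, 0.0],    # constant Hessian of f
                    [0.0, -2.0]])

# Newton's method: p <- p - H^{-1} grad(p). Because the quadratic model is
# exact here, a single step lands exactly on the saddle at (0, 0).
p_newton = np.array([1.0, 0.3])
p_newton = p_newton - np.linalg.solve(hessian, grad(p_newton))
print("Newton after 1 step:", p_newton)   # [0. 0.] -> parked at the saddle

# Gradient descent with a tiny off-axis component (which noise in SGD would
# supply) shrinks x but amplifies y, so it escapes along the descent direction.
p_gd = np.array([1.0, 1e-3])
for _ in range(50):
    p_gd = p_gd - 0.1 * grad(p_gd)
print("GD after 50 steps:", p_gd)         # x -> 0 while |y| grows: it left the saddle
```

The Newton update treats the saddle as the stationary point of its quadratic model and jumps to it, whereas each gradient step multiplies the y component by 1.2, so any small perturbation is enough for gradient descent to move away and keep decreasing the loss.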