It's amazing how you can work for years with a function and still gain new insights about it and improve your intuition. Thank you so much for these videos.
@alexhartung2369 · 4 years ago
mind blown. this guy is a rare breed. Loving the content
@SamAndHenryPlaz · 4 years ago
Finally got a nibble of insight into the L1 and L2 norms they bring up in Ng's Coursera classes. Seems like it would be good to use when you want to prune the size of a network after training. I love the chunks of insight these videos give and the very understandable way they're explained.
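A minimal sketch of that post-training pruning idea, assuming plain NumPy; the random weight matrix and the magnitude threshold below are made-up placeholders, not recommended values:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))          # stand-in for a trained weight matrix
tau = 0.05                               # illustrative magnitude threshold

mask = np.abs(W) >= tau                  # keep only the large-magnitude weights
W_pruned = W * mask

sparsity = 1.0 - mask.mean()             # fraction of weights set exactly to zero
print(f"pruned {sparsity:.1%} of the weights")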
@PunmasterSTP · 2 years ago
Sparsity? More like “Super explanation that’s gotta be”…one of the best I’ve ever heard on the topic. Thank you so much for making and sharing all of these incredibly high-quality videos!
@sahilchawla2485 · 2 years ago
Best 11 mins spent! Thanks a lot for distinguishing your solution from the rest of the internet!
@chaser27 · 3 years ago
You've convinced me to buy your book based on these videos.
@Eigensteve · 3 years ago
Awesome, hope you like it!
@andreygrant8676 · 4 years ago
I don't know why this came up in my suggestions list, but it was a good explanation! Thank you Steve :)
@TheLuceArs · 4 years ago
Same, I was literally browsing YT suggestions while waiting overnight at the airport and saw one of these videos. Who would've known it would become the most useful night of my university years.
@pevogam · 4 years ago
These are all very visual and well made lectures, thanks a lot for the high quality videos and openly available materials!
@nmerrillvt · 3 years ago
Amazing videos. The quality and clarity are above and beyond. Thanks!
@jasleenkaur1208 · 2 years ago
Really the best videos... I have been reading up on dictionary creation for the last two weeks, and now I understand the concept.
@aantoine5819 · 4 years ago
This was a perfect supplement to a lecture I just watched for class
@cDtag00 · 4 years ago
Visually, l_{2/3} looks like an attractive choice as well, as it approximates l_0 even better than l_1 does. What's the advantage of l_1 over l_{2/3}?
@NowanIlfideme · 4 years ago
I think that, like the 0-norm, it's not possible to use convex optimization to find solutions with a norm of p < 1, since the corresponding unit ball is no longer convex.
@cDtag00 · 4 years ago
@@NowanIlfideme Thank you, makes sense... I was indeed thinking of neural networks and other forms of non-linear optimization
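A quick numerical check of why p < 1 breaks convexity: for the p = 2/3 "norm" the triangle inequality fails, so the unit ball is non-convex and standard convex solvers no longer apply. A minimal sketch, with arbitrarily chosen vectors:

import numpy as np

def lp(x, p):
    # (sum |x_i|^p)^(1/p); a true norm only for p >= 1, a quasi-norm for 0 < p < 1
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

p = 2.0 / 3.0
print(lp(x + y, p), lp(x, p) + lp(y, p))   # 2^(3/2) ≈ 2.83 > 1 + 1 = 2, so the triangle inequality fails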
@hoaxuan7074 · 4 years ago
It is quite simple to use smoothing to solve compressed sensing, though I found it quite slow. You repeatedly correct with the compressed sensing data, smooth, correct, smooth... The binomial filter is quite good. These days I would prefer to solve it with a neural net in one step, which is likely to be much faster.
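The correct-and-smooth loop described above is underspecified, but for comparison, here is a minimal sketch of a standard iterative approach to the same problem: iterative soft-thresholding (ISTA) for min ||s||_1 subject to y ≈ Theta s. The problem sizes, step size, and regularization weight are arbitrary placeholders:

import numpy as np

rng = np.random.default_rng(1)
n, p, k = 64, 256, 5                     # measurements, unknowns, true sparsity
Theta = rng.normal(size=(n, p)) / np.sqrt(n)
s_true = np.zeros(p)
s_true[rng.choice(p, k, replace=False)] = rng.normal(size=k)
y = Theta @ s_true

lam = 0.01                               # L1 weight (placeholder)
t = 1.0 / np.linalg.norm(Theta, 2) ** 2  # step size from the largest singular value
s = np.zeros(p)
for _ in range(500):
    r = Theta @ s - y                    # residual ("correct" step)
    s = s - t * (Theta.T @ r)            # gradient step on the data-fit term
    s = np.sign(s) * np.maximum(np.abs(s) - t * lam, 0.0)  # soft-threshold ("shrink") step

print("non-zeros recovered:", np.sum(np.abs(s) > 1e-3))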
@proteuswave · 9 months ago
Great video. Also..your setup is nuts! Writing backward? Mirror image? Does not compute!
@nguyenngocly1484 · 4 years ago
You can train neural networks with sparse mutations. Continuous Gray Code Optimization mutations work well. Each GPU gets the full neural model and part of the training set; not much data needs to move around, just a sparse mutation list, the costs back, and an accept or reject. A fixed random pattern of sign flips is a simple random projection. If you follow it with a Walsh-Hadamard transform you get a much better random projection. The WHT acts as a system of fixed orthogonal dot products. The variance equation for linear combinations of random variables applies to dot products, and the CLT applies not just to sums but to any combination of adding and subtracting (cf. the WHT). You can also have inside-out neural networks with fixed dot products and parametric (adjustable) activation functions, i.e. fast-transform, fixed-filter-bank neural nets. You do a simple random projection before the net to stop the first layer taking a spectrum, and a final fast transform as a pseudo-readout layer. f(x)=x.ai x=0, i=0 to n-1 is a suitable parametric activation function.
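For readers curious about the sign-flip-plus-WHT projection mentioned above, here is a minimal sketch; the recursive butterfly below is the standard unnormalized fast Walsh-Hadamard transform, the vector length must be a power of two, and the example data are arbitrary:

import numpy as np

def fwht(x):
    # fast Walsh-Hadamard transform, O(n log n) butterfly passes, n a power of two
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

rng = np.random.default_rng(2)
n = 1024
signs = rng.choice([-1.0, 1.0], size=n)    # fixed random pattern of sign flips
v = rng.normal(size=n)

projection = fwht(signs * v) / np.sqrt(n)  # cheap structured random projection of v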
@hashirroshinvaliyaparambil70 · 4 years ago
Hey Prof. Steve, could you please make videos about Lyapunov stability, especially for nonlinear systems?
@meganauger354 · 4 years ago
Sort of a random question, but what would happen if, when using the L1 norm, the s vector was perfectly parallel to one of the sides of the L1 square? Would it return a dense solution, instead of sparse? Would that mean you are in an inappropriate basis for a sparse solution? Thanks!
@viniciusvilela5371 · 3 years ago
I guess that's why he said it often brings a sparse solution, not always...
@aymericmelt8083 · 6 months ago
I think you can't solve the problem uniquely; the algorithm will not converge to a single answer. It's a case where the L0 norm is better. But, as he said, the more dimensions you have the better it works, and I think that's because the probability of the line being exactly aligned with a face decreases exponentially in higher dimensions.
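A tiny worked example of that degenerate case, in two dimensions with the constraint line parallel to a face of the L1 diamond:

\min_{s \in \mathbb{R}^2} \|s\|_1 \quad \text{subject to} \quad s_1 + s_2 = 1 .

By the triangle inequality, \|s\|_1 = |s_1| + |s_2| \ge |s_1 + s_2| = 1, with equality exactly when s_1, s_2 \ge 0. So every point on the segment between (1,0) and (0,1) attains the minimum value 1: the two sparse vertices are optimal, but so are infinitely many dense points in between, and the optimizer has no reason to prefer the sparse ones. This is why the sparsity claim is "with high probability" rather than "always."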
@meccamiles7816 · 4 years ago
Does someone have a simple way of describing why the L0 sparse solution is NP complete?
@deez_gainz · 4 years ago
Does L1-norm regularization on a neural network layer also promote sparse weights in that layer?
@SamAndHenryPlaz · 4 years ago
I'm wondering this as well. It does seem that way.
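One quick way to see the effect empirically is to compare scikit-learn's Lasso (L1 penalty) against Ridge (L2 penalty) on synthetic linear-regression data; the data sizes and alpha values below are arbitrary placeholders, not recommendations:

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))
true_coef = np.zeros(50)
true_coef[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]          # only 5 features actually matter
y = X @ true_coef + 0.1 * rng.normal(size=200)

lasso = Lasso(alpha=0.1).fit(X, y)                   # L1-penalized regression
ridge = Ridge(alpha=0.1).fit(X, y)                   # L2-penalized regression

print("exact zeros, L1:", np.sum(lasso.coef_ == 0))  # many coefficients driven to exactly 0
print("exact zeros, L2:", np.sum(ridge.coef_ == 0))  # typically none are exactly 0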
@NowanIlfideme · 4 years ago
Hi, will you also be covering the mathematical background of the L1 sparsity property? I understand it intuitively, but I haven't found a good intuitive and rigorous mathematical proof/explanation of it.
@EduardoGarcia-tv2fc · 4 years ago
Are we gonna get more videos on chapter 3?
@xinyichen4332 · 3 years ago
Hi, I still don’t understand why finding the L0 norm of S is NP hard. Could you please explain a bit? I’d appreciate your help. Thanks!
@Eigensteve · 3 years ago
Good question. Computing the L0 norm of a given vector is simple: just count all of the non-zero entries. But finding the solution "x" of a system of equations Ax=b where x has the minimum L0 norm is NP-hard. You would have to search through all possible sparse vectors (first all vectors with only one non-zero entry, then all vectors with two non-zero entries, then three, and so on) until you find the sparsest one that solves the system. This is a combinatorial search over roughly 2^n possible sparsity patterns (where n is the dimension of x), which grows far too fast for modern computers to handle.
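To make the combinatorial blow-up concrete, here is a brute-force sketch of the search described above: it loops over all supports of size 1, 2, 3, ... and solves a small least-squares problem on each, which is only feasible for toy dimensions. The matrix and tolerance are illustrative choices:

import numpy as np
from itertools import combinations

def sparsest_solution(A, b, tol=1e-9):
    # brute-force L0 search; the number of candidate supports grows like 2^n
    n = A.shape[1]
    for k in range(1, n + 1):
        for support in combinations(range(n), k):
            cols = list(support)
            x_sub, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
            if np.linalg.norm(A[:, cols] @ x_sub - b) < tol:
                x = np.zeros(n)
                x[cols] = x_sub
                return x            # first hit is a sparsest exact solution
    return None

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([2.0, 1.0])
print(sparsest_solution(A, b))      # returns the 1-sparse solution [0, 1, 0, 0]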
@mikefischbein3230 · 3 years ago
I keep picturing scenes from Cheers where Norm walks into the bar and everyone shouts "Norm!" Then Sam asks "What'll it be today, Norm? L1 or L2?"
@yasir9909 · 3 years ago
Sir, do you also have a similarly detailed lecture on the sparsity introduced by the l-infinity norm?
@adityagarg259 · a year ago
What if the line and the one-norm diamond become parallel?
@adityamrai8508 · 4 years ago
You are simply amazing. Keep inspiring.😎
@praveencv431 · 2 years ago
Is there any formal proof that l1-norm minimisation gives the sparsest solution in higher dimensions? I was trying to find a reference for it but was unable to get one.
@zrmsraggot · 2 years ago
What's the difference between a sparse solution coming up on s1 and one on s2?
@yuhuawei1763 · 3 years ago
Excellent explanation! Thank you!
@alegian7934 · 4 years ago
I love how Y variable sounds like "why"
@TheMazyProduction · 4 years ago
Is L_infinity norm the most uniform solution?
@umangyadav4787 · 4 years ago
Hey Steve, I have a question: in lasso the regularisation term is |w|. If this is changed to w^2/(1+w^2), what change does it make to the coefficients?
@davidmadsen2761 · 4 years ago
Isn't the taxi-cab norm L1? As in the distance a car has to drive in a grid system.
@varunjohnson9570 · 4 years ago
Sir, when can we expect continuation videos for Chapter 3?
@johnathancorgan3994 · 3 years ago
Steve, I love how your abbreviation of "solution" is "sol" to nth power 😏
@tcratius1748 · 4 years ago
How do you prove an L0 result if you cannot compute it? Maths is weird. Still, the L1 is handy for a project I'm doing, so I should really be asking: how did you read my mind? ...👻 PS: is it useful for imbalanced classes?
@snippletrap · 4 years ago
It's computable, but it's non-convex and not differentiable. Later in the video he conflates "computable" with "tractable," but that is a mistake; they are different things.
@tcratius1748 · 4 years ago
@@snippletrap Hmm, as a fairly poor mathematician studying data science at uni, all this computability talk takes a whole new and incredibly sparse vocabulary, which seems to differ depending on the author's preference of symbols. Oh well, just keep on keeping on. :)
@snippletrap · 4 years ago
@@tcratius1748 Among computer scientists these terms are not ambiguous. "Computable" means it can be computed. Not everything can be computed -- see the Halting Problem, for instance. "Tractable" generally means that computation time is not exponential in the size of the input. Hard problems like non-convex optimization are computable but not tractable: a computer will eventually solve them, but it may take centuries to do so.
@tcratius1748 · 4 years ago
@@snippletrap Yeah, I guess it didn't help that I did biology for my bachelor's. Oh well, enjoy; if you really want to chat more I can give you my LinkedIn, otherwise cheers for the chat.
@DerekWoolverton · 4 years ago
Now I have to go searching for the formula for the l_{2/3} norm. I can guess what it is, but it seems oddly out of place compared to all the integer values of p, so I'm curious where it originally came up.
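For what it's worth, the usual convention is simply the standard p-norm formula with p = 2/3, which is only a quasi-norm since the triangle inequality fails for p < 1:

\|x\|_{p} = \left( \sum_{i=1}^{n} |x_i|^{p} \right)^{1/p}, \qquad p = \tfrac{2}{3},

and some authors drop the outer 1/p power and penalize \sum_i |x_i|^{2/3} directly; either way the level sets pinch toward the axes, as in the video's sketch.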
@alexeyl22 · 4 years ago
"there are infinitely many asses" 😂😂😂. You said it two times and I concur
@enigmabloom · 4 years ago
Shoot, this guy's from UW? I'm doing the master's for EEs there right now lol
@MrAlextorex · 4 years ago
He writes backwards. How impressive!
@davidmadsen2761 · 4 years ago
Probably mirrored in editing, so this is not actually how Steve looks in person.
@goodgame7474 · 3 years ago
This comment is literally literal!
@zakaryaelkhiyati8563 · 4 years ago
neat
@MrAlextorex · 4 years ago
He tells a nice story, but his explanation requires a lot of background knowledge, so in the end I do not feel very satisfied. It would be useful to also add more detailed tutorial links for further study.