Machine Learning Lecture 19 "Bias Variance Decomposition" -Cornell CS4780 SP17

46,600 views

Kilian Weinberger


Lecture Notes:
www.cs.cornell....

Comments: 79
@thecactus7950 5 years ago
Man, it's such a privilege being able to watch stuff like this.
@Biesterable 5 years ago
So true
@TrentTube 4 years ago
I feel the exact same way. I am constantly humbled and thrilled this is available.
@filippovannella4957 5 years ago
This man is one of the best professors I have ever seen. Thanks a lot for this lecture series.
@ebiiseo 4 years ago
Your ability to uncover the insights behind all those mathematical formulas is superb. I really like the way you teach. Thank you for uploading this.
@jorgeestebanmendozaortiz873 3 years ago
Due to the Covid crisis the professors at my university went on strike for most of the semester, so my ML class got ruined. Fortunately I found your lectures, and I've been following along over the last few months. I have to say this is the most thorough introductory course to ML that I've found out there. Thank you very much, Prof. Kilian, for making your lectures available to everyone. You're working towards a freer and better world by doing so.
@tarunluthrabk 3 years ago
I searched extensively for good content on Machine learning, and by God's grace I found one! Thank you Prof Weinberger.
@jachawkvr 4 years ago
I was familiar with these concepts before watching this lecture, but now I feel like I actually understand what bias and variance mean. Thank you so much for explaining these so well!
@yuniyunhaf5767 5 years ago
I can't believe I have reached this point. He shaped the way I think about ML. Best professor.
@juliocardenas4485 3 years ago
I’m using what I’ve learned here to try improving people’s lives. I’m a data scientist in healthcare and a former radiology researcher. Thank you for sharing this freely.
@muratcan__22 5 years ago
this video is gold
@deltasun 4 years ago
That's the clearest exposition of the bias-variance decomposition I've ever seen (and I've seen quite a few). By far.
@kevinshen3221 2 years ago
This is absolutely gold. I was so confused reading An Introduction to Statistical Learning because they give no explanation of how they get the bias-variance tradeoff, and then I found this!
@xwcao1991 3 years ago
Thank you, Prof. Weinberger, for bringing educational fairness to people from third-world countries like me, who cannot afford to study at a world-class university like Cornell. I wish you health and happiness your entire life.
@MohamedTarek-vt4lb 8 months ago
This is amazing! Bless you, Professor Kilian, if you read this.
@psfonseka 5 years ago
This was super helpful for my own classwork. Thank you so much for posting your lectures publicly!
@vatsan16 4 years ago
Me: Machine learning is a black box, the math is too abstract, and nothing really makes sense.
Professor Weinberger: Hold my beer.
@jenishah9825 2 years ago
Such videos don't generally come up in YT suggestions. But if you have found it, it is a gold mine!
@rajeshs2840 4 years ago
Oh man, hats off to your efforts. It's an amazing lecture.
@vishnuvardhan6625 5 months ago
Best video on Bias-Variance Decomposition ❤
@mateuszjaworski2974 8 months ago
It's like a good action movie; you can't wait to see what comes next.
@crystinaxinyuzhang3621 4 years ago
It's such an amazing lecture! I had never thought of each trained ML model itself as a random variable before, and this is really eye-opening.
@haodongzheng7045 2 years ago
Thank you, Professor. I feel like I've grown a little bit after watching your video ;)
@sans8119 4 years ago
An amazing lecture!! Makes things very clear.
@sheikhshafayat6984 3 years ago
I don't usually comment anywhere, but I can't help saying thanks to you. Such great teaching skill!
@taketaxisky 4 years ago
The way the error is decomposed reminds me of the decomposition of the sum of squares in ANOVA into within-group SS and between-group SS; it is a similar calculation.
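To make the analogy concrete, here is the one-way ANOVA identity alongside the lecture's decomposition (a standard textbook identity, shown here for illustration, not a quote from the video):

$$
\underbrace{\sum_{j}\sum_{i}\big(y_{ij}-\bar y\big)^2}_{\text{total SS}}
= \underbrace{\sum_{j}\sum_{i}\big(y_{ij}-\bar y_{j}\big)^2}_{\text{within-group SS}}
+ \underbrace{\sum_{j} n_j\big(\bar y_{j}-\bar y\big)^2}_{\text{between-group SS}},
$$

which mirrors the lecture's split of the expected squared error into scatter-around-the-mean parts (variance and noise) and a squared-offset-of-means part (bias squared); in both cases the cross term vanishes after adding and subtracting the relevant mean.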
@vishchugh 4 years ago
BEST LECTURE ON THE BIAS-VARIANCE TRADEOFF!!!
@angelocortez5185 2 years ago
These videos popped up on my feed. I didn't realize you wrote the MLKR paper as well. Seeing your videos makes me wish I had taken a formal class with you. Thank you for this content, Kilian!
@TheAIJokes 3 years ago
You are one of my favourite teachers, sir. Love from India ❤️
@NO_REPLY_ALARM_TOWARD_ME 1 year ago
I like that the lecturer always gives the students several minutes to clarify things for themselves, even when a proof step may seem trivial. It may seem difficult, but it is concise and easy to follow. Thanks.
@noblessetech 4 years ago
Awesome video playlist, love it.
@abhyudayasrinet17 5 years ago
A really great explanation
@florianwicher 3 years ago
It was a little bit slow, but I got it now. Thanks a lot!
@ashraf736 1 year ago
What a wonderful lecture.
@hanseyye1468 3 years ago
Thanks, Professor Weinberger. I have one question about 23:28: why do we use the joint distribution p(x,y) here and not the conditional p(y|x), or p(y)*p(x)?
@kilianweinberger698 3 years ago
Because you are drawing x and y randomly, and your data set and algorithm depend on both. You could factor this into first drawing x, then y, i.e. P(y|x)P(x), but it really wouldn't change much in the analysis. Hope this helps.
@hanseyye1468 3 years ago
@@kilianweinberger698 Thank you so much!
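To spell out the factorization discussed in this thread (following the lecture's notation for the expected test error; a standard identity added for illustration):

$$
E_{(\mathbf{x},y)\sim P}\!\big[(h_D(\mathbf{x})-y)^2\big]
= \int_{\mathbf{x}}\!\int_{y} (h_D(\mathbf{x})-y)^2\, P(\mathbf{x},y)\, dy\, d\mathbf{x}
= \int_{\mathbf{x}}\!\Big(\int_{y} (h_D(\mathbf{x})-y)^2\, P(y\mid\mathbf{x})\, dy\Big) P(\mathbf{x})\, d\mathbf{x}.
$$

Drawing (x, y) jointly is therefore the same as drawing x from P(x) and then y from P(y|x); replacing the joint with P(x)P(y) would instead assume x and y are independent, which does not hold in general.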
@jordankuzmanovik5297 4 years ago
Wonderful!! Bravo!
@janismednieks1277 3 years ago
"My son is doing that now, he's in second grade." If you're the one teaching him, I believe you. Thanks.
@xmtiaz 2 years ago
This was beautiful.
@StevenSarasin 9 months ago
That means the noise also depends on the feature set, so the noise is not necessarily irreducible if you can find new features to include. In the housing price example you would appear to have a lot of noise if you left out a location variable from the features x! Interesting. So we have reduced the generalization error to three dependencies: the dependency on D (the variance: will more data improve the situation?), the dependency on the feature set (does there exist a feature set that limits the variance of y itself given x?), and the bias (are we, in principle, flexible enough to match the true data pattern, linear vs. non-linear?).
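The point about noise depending on the feature set can be checked with a small simulation. A minimal sketch, assuming numpy; the house-price setup, feature names, and numbers are invented for illustration and are not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented house-price-style example: price depends on size and location,
# plus genuinely irreducible measurement noise.
size = rng.uniform(50, 250, n)        # square meters
location = rng.integers(0, 2, n)      # 0 = busy street, 1 = quiet street
price = 2.0 * size + 40.0 * location + rng.normal(0.0, 5.0, n)

size_bin = np.digitize(size, np.linspace(50, 250, 51))  # 50 equal-width bins

def apparent_noise(cell_ids, y):
    """Mean squared deviation of y around its average within each cell,
    i.e. an estimate of E[(y - E[y|x])^2] for the chosen feature set x."""
    total = 0.0
    for c in np.unique(cell_ids):
        group = y[cell_ids == c]
        total += ((group - group.mean()) ** 2).sum()
    return total / len(y)

noise_size_only = apparent_noise(size_bin, price)            # x = (size)
noise_both = apparent_noise(size_bin * 2 + location, price)  # x = (size, location)

print(f"apparent noise with x = (size):           {noise_size_only:.1f}")
print(f"apparent noise with x = (size, location): {noise_both:.1f}")
# Leaving location out of x makes its effect look like irreducible noise,
# which is exactly the observation in the comment above.
```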
@immabreakaleg 4 years ago
17:48 what a boss question wow
@Saganist420 5 years ago
My real life dart playing skills have high bias, high variance.
@VijayBhaskarSingh 2 years ago
Are {x1, x2, ..., xi} sample vectors drawn from X (the input variables)? Or are they functions of one variable?
@meenakshisundaram8310 3 years ago
Thank you very much
@gauravsinghtanwar4415 4 years ago
What is the need for the probability term in the expected test error expression?
@utkarshtrehan9128 3 years ago
Enlightenment
@danielsiemmeister5286 2 years ago
First of all, thank you for this very intuitive explanation, Mr. Weinberger! I have some small questions and remarks which aren't 100% clear to me:
- You said that y (given x) is random, so we want to pick one statistic depending on our goals. In this case you chose the expectation E[y|x]. (One could, for example, choose the median, couldn't we?) However, some minutes later you chose the squared loss function as a "nice" choice for regression. Aren't these two sides of the same coin? If I am choosing the squared loss function, then I am picking E[y|x]? (And when I am choosing the absolute value loss function, then I am choosing the median.) So this is my first question: are my thoughts right?
- How would the proof look if I am not in the "squared loss / expectation" setting? What would the proof look like for a generic loss function or statistic of y|x? This is my second question.
- How would the proof look if we go to the regression setting? I think that is pretty much the same question as question 2. Am I right when I say that if the distribution of y|x is discrete, then I am in a classification setting, and if it is continuous, then I am in a regression setting? Furthermore, if I pick the statistic of y|x (or a loss function) in a generic way, then I have a proof for classification and regression problems?
I would be very thankful if anyone could answer or comment on my questions! Yours, Daniel
@kilianweinberger698 2 years ago
Yes, you are right. The math becomes a lot trickier if you don't use the squared loss, but ultimately the principle is the same for pretty much any loss function.
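For the first question in that thread, the link between the loss and the statistic can be written out directly (a standard argument, sketched here in the lecture's regression setting):

$$
\frac{\partial}{\partial c}\,E\big[(y-c)^2 \mid \mathbf{x}\big]
= -2\big(E[y\mid\mathbf{x}]-c\big) = 0
\;\;\Longrightarrow\;\; c^{*} = E[y\mid\mathbf{x}],
$$

while the minimizer of $E\big[\,|y-c|\;\big|\;\mathbf{x}\big]$ is a conditional median of $y\mid\mathbf{x}$. So choosing the squared loss and targeting $E[y\mid\mathbf{x}]$ are indeed two sides of the same coin, as the question suggests.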
@ayushmalik7093 2 years ago
Hi Prof, high variance implies overfitting, but overfitting has two parts: high test error and low training error. How do we infer low training error from high variance? High variance in h_D(x) could also be the result of our algorithm learning gibberish, which could lead to high test and training error. IMO, low bias and high variance should mean overfitting, since in that case the model predictions for different datasets will spread around the centre of your dartboard.
@sandeshhegde9143 5 years ago
Where is lecture 18? (I don't see it in the playlist)
@Saganist420 5 years ago
Lecture 18 was an exam, so it was not recorded.
@TrentTube 4 years ago
I eventually concluded it was the exam I skipped :D
@amit_muses 4 years ago
I have a good command of Bayes' theorem and the law of total probability, but I couldn't understand the symbols the professor used. I could tell the professor used some concepts from expectation theory, but I couldn't follow them well. Can someone suggest some material for this part that I can work through in a short period so that I can understand this lecture better?
@adiratna96 2 years ago
I didn't understand why D and (x, y) are independent. Can anyone explain why, please? TIA.
@adiratna96 2 years ago
Damn, never mind, I got it.
@lorenzoappino9158 3 years ago
Kilian is my hero.
@siddhanttandon6246 2 years ago
Hey Prof, I have a question. In this derivation we essentially decomposed the risk for a new sample, i.e., the out-of-sample risk, into 3 parts. Is there some theory which does the same breakdown of the risk on our training set, i.e., samples the model has already seen? I am particularly interested to know if my training loss can ever go to zero.
@kilianweinberger698 2 years ago
That depends on your hypothesis class (i.e. what algorithm you are using). Maybe take a look at the lectures on Boosting. AdaBoost is an ensemble algorithm that (given some assumptions) guarantees that the training error will go to zero (if you average several classifiers together).
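As a hedged illustration of this reply, here is a minimal sketch assuming scikit-learn is available; the dataset and parameters are invented for the example and are not from the course assignments:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Toy binary classification problem (invented for illustration).
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           random_state=0)

# AdaBoost with decision stumps (scikit-learn's default base learner).
# As the number of boosting rounds grows, the *training* error typically
# drops toward zero, matching the guarantee mentioned in the reply above.
for n_rounds in [1, 10, 50, 200]:
    clf = AdaBoostClassifier(n_estimators=n_rounds, random_state=0)
    clf.fit(X, y)
    train_err = 1.0 - clf.score(X, y)
    print(f"{n_rounds:4d} rounds: training error = {train_err:.3f}")
```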
@ammarkhan2611 4 years ago
Hi Professor, is there a way to get access to the assignments?
@pendekantimaheshbabu9799 4 years ago
Excellent. Can we apply the bias-variance trade-off across different models, i.e., for example, comparing linear regression and polynomial regression? Does the bold H consist of a set of hypotheses that contains only linear regressors?
@kilianweinberger698 4 years ago
Ultimately the BV trade-off exists for all models. However, as far as I know the derivation of this decomposition only falls into place so nicely in a few steps for linear regression.
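One way to compare models under this trade-off is to estimate the three terms empirically by resampling training sets. A minimal numpy sketch; the target function, noise level, and polynomial degrees are arbitrary choices for illustration, not taken from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(2.0 * np.pi * x)          # plays the role of y-bar(x)

def sample_training_set(n=30, sigma=0.3):
    x = rng.uniform(0, 1, n)
    return x, true_f(x) + rng.normal(0.0, sigma, n)

x_test = np.linspace(0, 1, 200)
n_datasets, sigma = 500, 0.3

for degree in [1, 3, 7]:
    preds = np.empty((n_datasets, x_test.size))
    for d in range(n_datasets):
        x_tr, y_tr = sample_training_set(sigma=sigma)
        coeffs = np.polyfit(x_tr, y_tr, degree)   # fit h_D on this training set
        preds[d] = np.polyval(coeffs, x_test)     # h_D(x) on a test grid
    h_bar = preds.mean(axis=0)                         # expected regressor
    variance = preds.var(axis=0).mean()                # E_x E_D[(h_D - h_bar)^2]
    bias_sq = ((h_bar - true_f(x_test)) ** 2).mean()   # E_x[(h_bar - y_bar)^2]
    noise = sigma ** 2                                 # E[(y - y_bar)^2], known here
    print(f"degree {degree}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}, "
          f"noise = {noise:.3f}")
```

Low-degree fits show high bias and low variance; higher degrees reverse that, which is the comparison across model classes asked about above.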
@roniswar 3 years ago
Dear Prof, thank you again for posting this; very useful and interesting!! One question: in a regression setup, why do you call h (the hypothesis function) the "expected classifier"? Is this the common definition when thinking about a regression problem? Thanks!
@kilianweinberger698 3 years ago
No, it is only in the setting where you consider the training set as a random variable. Under this view, the classifier also becomes a random variable (as it is a function of the training set), and you can in theory compute its expectation. Hope this helps.
@roniswar 3 years ago
@@kilianweinberger698 Thank you! One other thing that I didn't see anyone ask: what happens to the bias-variance tradeoff, which you fully showed for MSE, when the loss function is not MSE? Does the decomposition still contain exactly those 3 quantities of bias, variance, and noise? How do we measure the tradeoff in that case? We no longer have this convex parabola shape, I assume. (If you have a good source explaining this issue, please refer me to it.)
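For reference, the quantities from the reply above, in the lecture's notation (this is the decomposition derived in the video for the squared loss):

$$
\bar h(\mathbf{x}) = E_{D\sim P^n}\big[h_D(\mathbf{x})\big],
\qquad
\bar y(\mathbf{x}) = E_{y\mid\mathbf{x}}\big[y\big],
$$
$$
E_{\mathbf{x},y,D}\big[(h_D(\mathbf{x})-y)^2\big]
= \underbrace{E_{\mathbf{x},D}\big[(h_D(\mathbf{x})-\bar h(\mathbf{x}))^2\big]}_{\text{variance}}
+ \underbrace{E_{\mathbf{x}}\big[(\bar h(\mathbf{x})-\bar y(\mathbf{x}))^2\big]}_{\text{bias}^2}
+ \underbrace{E_{\mathbf{x},y}\big[(y-\bar y(\mathbf{x}))^2\big]}_{\text{noise}}.
$$

"Expected classifier" (or expected regressor) refers to $\bar h$: once the training set $D$ is treated as a random variable, $h_D$ is a random function and $\bar h$ is its expectation.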
@macc7374 3 years ago
Hi Professor! Thank you for uploading this video. When we start the derivation by representing the expected test error in terms of h_D(x) and y, how can we explain the presence of noise? Our assumption is that y is the correct label. So while there is certainly noise in real-world examples, given the starting point of the derivation here, should noise be expected to show up?
@kilianweinberger698 3 years ago
Keep in mind noise can be a bad measurement, but it can also be a part of the label that you just cannot explain with your representation of x. Imagine I am predicting house prices (y) based on features about a house (x). My features are e.g. number of bedrooms, square footage, age, ... But now the price of a house decreases because a really loud and rambunctious fraternity moves in next door - something that is not captured in my x at all. For this house the price y is now abnormally low. The price is correct, but given your limited features the only way you can explain it is as noise.
@macc7374 3 years ago
@@kilianweinberger698 Thank you!
@gaconc1 3 years ago
This is a form of the Pythagorean theorem.
@kilianweinberger698 3 years ago
Interesting observation!
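The Pythagorean flavour comes from the cross terms vanishing; a short sketch of the step used twice in the lecture's derivation:

$$
E_{\mathbf{x},y,D}\Big[\big(h_D(\mathbf{x})-\bar h(\mathbf{x})\big)\big(\bar h(\mathbf{x})-y\big)\Big]
= E_{\mathbf{x},y}\Big[\underbrace{E_{D}\big[h_D(\mathbf{x})-\bar h(\mathbf{x})\big]}_{=\,0}\;\big(\bar h(\mathbf{x})-y\big)\Big] = 0,
$$

and likewise $E_{\mathbf{x},y}\big[(\bar h(\mathbf{x})-\bar y(\mathbf{x}))(\bar y(\mathbf{x})-y)\big] = E_{\mathbf{x}}\big[(\bar h(\mathbf{x})-\bar y(\mathbf{x}))\,E_{y\mid\mathbf{x}}[\bar y(\mathbf{x})-y]\big] = 0$. The three error components are therefore "orthogonal" and their squares simply add, as in the Pythagorean theorem.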
@logicboard7746 2 years ago
Point @22:00
@logicboard7746 2 years ago
Then @41:00
@taketaxisky 4 years ago
How does overfitting affect the decomposed error terms? Maybe it is not relevant here.
@taketaxisky 4 years ago
Just realized a graph in the lecture notes explains this!
@kc1299 3 years ago
Disappears into some good feeling hahaha.
@bharatbajoria 3 years ago
Why is there no D at 37:00 in b^2?
@kilianweinberger698 3 years ago
Both terms, y-bar and y, are independent of the training data set D.
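In symbols, the point of this reply (a restatement of that step in the lecture's notation):

$$
E_{D}\Big[\big(\bar h(\mathbf{x})-\bar y(\mathbf{x})\big)^2\Big] = \big(\bar h(\mathbf{x})-\bar y(\mathbf{x})\big)^2,
$$

since neither $\bar h(\mathbf{x}) = E_D[h_D(\mathbf{x})]$ nor $\bar y(\mathbf{x}) = E_{y\mid\mathbf{x}}[y]$ depends on the particular training set $D$, so the expectation over $D$ can be dropped from the bias$^2$ term.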
@deepfakevasmoy3477 4 years ago
24:56 Please, someone ask a question, I am not ready for war :)
@hohinng8644 1 year ago
Everything is excellent except the poor handwriting.