Machine Learning Lecture 21 "Model Selection / Kernels" -Cornell CS4780 SP17

  26,794 views

Kilian Weinberger

Day ago

Lecture Notes:
www.cs.cornell.edu/courses/cs4...
www.cs.cornell.edu/courses/cs4...

Comments: 63
@venugopalmani2739 5 years ago
Professor, I am at a loss for words at how clearly you explain fairly complex concepts. I understand the bias-variance tradeoff might seem rudimentary to people who only skim the surface of ML, but it is much more complex than that. As someone working in data science at a fairly large company, I find myself with many problems in the models I build... what do I do? Turn to Prof. Weinberger, of course. Absolutely the right balance between capturing the complexity of the topic and keeping it concise. And amazing that he makes it all available for free.
@hdang1997 4 years ago
Does your company take ML interns, especially ones who have also learned ML concepts from Professor Kilian?
@sushmithavemula2498 5 years ago
This lecture is as good as some episodes of GOT (with a mesmerizing ending)!
@kilianweinberger698 5 years ago
Hopefully less violent ... :-)
@minhtamnguyen4842 4 years ago
Much better than the whole of GOT season 8. Thank you so much, Professor. Whenever I think I'm about to give up on my ML self-study, I find an amazing teacher like you.
@kc1299 3 years ago
Are you implying how terrible it will become?
@RanjithKumar-vu4og 2 years ago
@@kilianweinberger698 Savvy
@Saganist420 5 years ago
It is absolutely fantastic that you provide this high-quality course for free, for everyone. I hope this trend continues and expands into all fields of science. Thank you.
@rorschach3005 4 years ago
The series is seriously impressive. It would be nice if we could have a subsequent lecture series on deep learning and one on core machine learning theory.
@aaronzhao1379 2 years ago
Currently taking machine learning at Columbia in person and taking machine learning at Cornell online at the same time! The reason I'm doing this is that Professor Weinberger is so amazing.
@sakshamkiroriwal3234 1 year ago
Professor, your lectures help students from bachelor's to PhD level. A truly amazing way of explaining things. Thank you for helping students all over the world.
@foreverteuk 4 years ago
The part where he talks about moving your stuff to the window is so humorous.
@ayush5234 2 years ago
This is an amazing thing you have given us. It makes me really understand the "good stuff" and encourages me to learn more. I hope I get the chance to meet you someday, Professor. Lots of love from India 💕
@sandeepreddy6295 4 years ago
I have watched 3 lectures in the playlist so far; all are high-quality content with clear explanations.
@niraj5582 6 months ago
This is all amazingly good stuff. Thanks Professor!
@HLi-pc4km 2 years ago
This channel definitely deserves more subs, the depth and coverage of this course are the best I have ever seen.
@yannickpezeu3419 3 years ago
The best machine learning lectures on the internet.
@jachawkvr 4 years ago
+1 for the awesome demo for kernelization!
@bhavindhedhi 4 years ago
Lecture starts at 5:06
@hdang1997 4 years ago
What a brilliant man!
@logicboard7746 4 years ago
Kernel @34:00
@arihantjain8156 1 year ago
Thank you my man!!! I was looking for this comment.
@jenishah9825 2 years ago
I thought I had studied these concepts many times, but I watched till the end and learned different ways to look at the same things.
@evangelos-iasonmoschos3632 4 years ago
The simulation at the end was very impressive. Thank you for that :)
@rajeshs2840 4 years ago
Oh yes, it's good stuff... Prof, it's amazing. Thank you very much... (Thank you)*Kilian... Kilian is a very large number which tends to infinity...
@zelazo81 4 years ago
Awesome stuff, thank you!
@madeforu7180 3 years ago
Kernel 34:00
@philipghuPride 2 years ago
Kernel starts at 32:40
@chaowang3093 3 years ago
Amazing demo
@aayushchhabra7465 4 years ago
Hey Kilian, you said that doing regularization by changing the value of lambda is more expensive because you have to retrain from scratch for every lambda. Why not set the regularization to a very high value and train the model, then slowly decrease lambda (which means increasing the size of the ball), and for each new lambda initialize the weight vector to the weight vector found in the last iteration (rather than to 0)? That would be about as fast as early stopping. Btw, amazing lectures. Thanks for the great work. Aayush Chhabra, Computer Engineering, University of Washington
@kilianweinberger698 4 years ago
Yes, those two methods are essentially (roughly) equivalent. If you have a small enough learning rate you will increase the norm of the weight vector a tiny bit with each gradient step - similar to lowering your regularization constant.
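The warm-start idea from this exchange can be sketched as follows. Everything here (the `ridge_gd` helper, the data, the lambda schedule) is a hypothetical illustration, not code from the course:

```python
import numpy as np

# Sketch of the lambda-path idea: instead of retraining from scratch for
# every lambda, start with strong regularization and warm-start each run
# from the previous solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

def ridge_gd(X, y, lam, w0, lr=0.01, steps=300):
    """Gradient descent on the ridge objective, started from w0."""
    w = w0.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + lam * w
        w -= lr * grad
    return w

w = np.zeros(3)                      # the large-lambda limit: weights pinned at 0
norms = []
for lam in [10.0, 1.0, 0.1, 0.01]:   # shrinking lambda grows the "ball"
    w = ridge_gd(X, y, lam, w)       # warm-start from the previous solution
    norms.append(np.linalg.norm(w))
# the weight norm grows as lambda shrinks, mirroring early stopping,
# where the norm grows with the number of gradient steps
```

With a small learning rate, each decrease of lambda nudges the weight norm outward a little, just like taking a few more gradient steps in early stopping.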
@eliasboulham 1 month ago
Thank you, Professor.
@sandeshhegde9143 5 years ago
Kernels start from: kzbin.info/www/bejne/l2jGoJmje8yqf80
@nassimhaddam7136 2 months ago
Thank you for this lecture, I learned a lot! I do have one question, though. The course shows how to diagnose an ML model to balance the bias/variance trade-off, but what about noise? How can you tell whether the model's error comes from significant noise in the dataset?
@shivu.sonwane4429 2 years ago
Happy Teacher's Day, sir 🎉
@rohit2761 2 years ago
If I asked as many questions as these students ask this wonderful professor, my professor would get irritated and beat the sh*t out of us. Kilian is gold on KZbin.
@sudhanshuvashisht8960 4 years ago
In k-fold cross validation, how is the test error we're calculating an unbiased estimator of the generalization error E[(h_D(x) - y)^2]? To my understanding, the generalization error should be computed as follows: draw D training points, learn a model h_D, draw a test point (x, y), make a prediction using h_D, and compute the error. Then we repeat this (drawing another D points from the same distribution, learning h_D, and predicting on a new test point) millions of times and average the errors. However, in k-fold validation, the training sets in each of the k iterations are very similar and not independent at all, since the data used to train the model overlaps across iterations.
@kilianweinberger698 4 years ago
You are right that the classifiers (and error estimates) are not independent. But that will only affect the *variance* of the error (because some of the training data is shared, the variance will be lower than if you had taken truly independent samples.) The mean is still unbiased, because each classifier has never seen any of the data it is tested on. Hope this makes sense.
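A minimal sketch of the structure behind this answer: each fold's model is fit without its own validation points, so each fold error is clean, even though the training sets overlap across folds. The data and the least-squares model here are made up for illustration:

```python
import numpy as np

# Toy regression data
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.0, -1.0]) + 0.1 * rng.normal(size=100)

def kfold_errors(X, y, k=5):
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # fit on the other k-1 folds only ...
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        # ... and score on the held-out fold the model never saw
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return np.array(errs)

errs = kfold_errors(X, y)
# each entry is an unbiased error estimate; the entries are merely
# correlated because their training sets share data
```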
@gauravsinghtanwar4415 4 years ago
How do we calculate the standard deviation when using K-fold cross validation? I didn't understand that. Danke schön!
@kilianweinberger698 4 years ago
If you perform K-fold cross validation, you will receive a validation error for each of the K folds. From these K validation errors you can compute the average and the standard deviation. The latter is very important, because it tells you how much your validation error varies if you simply make different train/val splits. If this variance is high, your train/val sets are likely too small for this type of data. Also expect them to be bad estimates of the test error (i.e., if you do hyper-parameter search based on the validation error, your choice may be pretty bad).
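Concretely, the computation is just the sample mean and sample standard deviation of the K fold errors. The numbers below are hypothetical:

```python
import numpy as np

# Hypothetical validation errors from K = 5 folds:
fold_errors = np.array([0.21, 0.25, 0.19, 0.31, 0.24])

mean_err = fold_errors.mean()       # estimate of the test error
std_err = fold_errors.std(ddof=1)   # sample std across the K folds

# A std that is large relative to the mean signals that the error
# estimate moves a lot between train/val splits, i.e. the splits are
# too small to trust for hyper-parameter selection.
```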
@tonychen31 4 years ago
Again, great lecture! Btw, Professor, you kind of look like Dirk Nowitzki.
@kilianweinberger698 4 years ago
Haha, thanks! He is about 1 foot taller, though. :-)
@devananda1440 3 years ago
Sir, in early stopping are we just varying the number of iterations, so there is no lambda at all in this algorithm? If so, how can we say our model generalizes? And do we just compute the weights and minimize the loss function without considering regularization?
@vishchugh 4 years ago
Hi Kilian, I have a small confusion: why does adding more data increase the training error? We're talking about the average error on the dataset, right? How does adding more data affect the average error on the training data?
@kilianweinberger698 4 years ago
Yes, we are always talking about the average training error. Adding more training data makes it harder to get a low (average) error. Think about the extreme case: If you only have 1 training sample, it is super easy to get zero training error (just always predict the label of that sample). The moment you get more samples, the fitting becomes a lot trickier. Another extreme case is if you have completely random data with random labels. Then you can see that doubling your training data would require you to double your capacity to memorize all those samples. Hope this helps.
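This effect can be sketched with a toy misspecified model (all numbers are hypothetical): a line interpolates 2 points exactly, but as the training set grows it can no longer fit everything, so the average training error rises toward a plateau.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_mse(n):
    x = rng.uniform(-1, 1, size=n)
    y = x ** 2                           # truth is quadratic, model is linear
    A = np.stack([x, np.ones(n)], axis=1)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares line fit
    return np.mean((A @ w - y) ** 2)     # average error on the training set

errors = [train_mse(n) for n in [2, 5, 50, 500]]
# errors[0] is (numerically) zero: two points, two parameters.
# The later values climb toward the model's approximation limit.
```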
@TextTub 1 year ago
Why not x1^2 as an added feature, Prof? Timestamp 42:05!
@utkarshtrehan9128 3 years ago
What an ending!
@ritikojha3719 2 years ago
@Kilian Weinberger at 31:31, what is the effect of increasing the number of iterations on variance and bias?
@kilianweinberger698 2 years ago
More iterations will primarily increase variance (because you specialize more towards the particular data set you are optimizing over) and might decrease bias a little (e.g., if you go further from a fixed initialization).
@SalekeenNayeem 2 years ago
Kernels 34:10
@roronoa_d_law1075 4 years ago
I am curious about the questions on this exam.
@brooklynhe5571 5 years ago
Is Machine Learning Lecture 22 missing?
@Shkencetari 5 years ago
No, it's not missing. Here it is: kzbin.info/www/bejne/fJi3gnpoftStoq8
@roronoa_d_law1075 4 years ago
1:35 oh boy
@vatsan16 4 years ago
Moral of the story: don't trust your cousin! :P
@doyourealise 3 years ago
Sir, is there any way I can email you? I have questions for further study. I am looking to do NLP or become an NLP engineer; do I have to learn these algorithms, or can I learn only the algorithms that deal with text, like RNNs and transformers?
@kilianweinberger698 3 years ago
Well, this course covers the underlying principles of machine learning, something all ML algorithms (including RNNs and transformers) are based upon. In general, I would recommend understanding the principles if you intend to use ML algorithms.