Thank you, dear Professor, for making these available to us. Not only do you make it interesting, but you have a way of explaining things at a deep level, making the concepts so much clearer for us to grasp.
@smolboii1183 · 2 years ago
very true
@vatsan16 · 4 years ago
It goes without saying that you are a great teacher. I also like how you always mention the names of the people who invented these algorithms! :) It makes the class a lot more engaging for me.
@rezasadeghi4475 · 4 years ago
Wow! That was the most astonishing thing I could ever have hoped to find on the internet about machine learning. Thank you, professor, for sharing your deep insight.
@shrishtrivedi2652 · 3 years ago
This is the best lecture series of all ML lectures.
@puneetjain5625 · 4 years ago
You are such an awesome teacher. I laughed and learned simultaneously. Thanks.
@AnoNymous-wn3fz · 3 years ago
+1 some extra laugh in this particular module :D
@abdelmoniemdarwish4773 · 4 years ago
I don't think I have ever written a comment on YouTube before; this is my first. I just wanted to thank you for sharing these amazing lectures, and for your wonderful teaching methodology and explanations.
@thirstyfrenchie3872 · 3 years ago
“Boosting brings me to tears sometimes.” “You gotta eat a lot of fruit before the next lecture.” I love you.
@newbie8051 · 11 months ago
17:54 volunteers 🤣 Thanks prof for the fun and interesting lecture, got to revise these fundamentals quickly 🙏
@sandipsamaddar875 · 1 year ago
Hats off Sir, You are truly a great Teacher.
@baohoquoc5982 · 5 years ago
Your lecture is awesome, Sir. It also brings me to tears 38:02
@BrunoSouza-wy2et · 5 years ago
Beautiful lecture, greetings from Brazil, professor!
@JoaoVitorBRgomes · 4 years ago
34:46, you say the bias is not a function of h (your hypothesis) but a function of the average classifier, and that's why the bias is low. Could you also explain whether this is because, when you sum uncorrelated errors to form the mean classifier, they average out to zero?
@TheCrmagic · 5 years ago
Prof. Weinberger, thank you for posting your course online; it has been an extremely helpful and extremely enjoyable learning experience. Will you post those (or future) recitations online? They would add a lot of value by supplementing the lectures, helping online learners like myself get a better understanding. Thank you.
@aragasparyan8295 · 3 years ago
In the definition of out-of-bag error, what do we usually take as the loss function when implementing classification via random forests?
@kilianweinberger698 · 3 years ago
For out of bag error people typically use the squared loss (for regression) or 0/1 loss (for classification).
@aragasparyan8295 · 3 years ago
@@kilianweinberger698 Thanks, that makes sense. One more question concerning the out-of-bag error: you mentioned it is an unbiased estimate of the test error, but I am not sure how to prove that or get some intuition for it. Could you suggest a reference where I can read about it in more detail?
4 years ago
Dear Prof. Weinberger, first of all, thank you for publishing your lectures. They are awesome! I would like to ask how random forests can be used to perform feature selection, since each tree in the forest does not consider all the features; can you explain how the features are evaluated by looking at the trees? Thank you in advance. Best regards, NG
@abhisheksingla2260 · 5 years ago
Prof. Weinberger, when we use bootstrapping, it will duplicate records. Wouldn't that be a problem while training our model? It's like giving more weight to some records, which might introduce bias. Also, I'd like your opinion in general on whether we should remove duplicate records in preprocessing because of the i.i.d. assumption, which I believe underlies all machine learning algorithms.
@kilianweinberger698 · 5 years ago
In general, that’s not really a problem, because bootstrapping treats all samples identically, so in expectation they are all overcounted about the same. If you are introducing biases, you are simply not using enough bootstraps. There may of course be issues with some algorithm implementations if they assume that all samples are unique...
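To make the expectation argument concrete, here is a minimal NumPy sketch (my own illustration, not code from the lecture): every point gets drawn roughly equally often across many bootstraps, and each individual bootstrap contains about 63% of the unique points.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 1000, 200                       # training set size, number of bootstrap samples

counts = np.zeros(n)                   # how often each point is drawn in total
unique_fracs = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)   # sample n indices with replacement
    counts += np.bincount(idx, minlength=n)
    unique_fracs.append(len(np.unique(idx)) / n)

print("avg fraction of unique points per bootstrap:", np.mean(unique_fracs))  # ~0.632
print("total draws per point, min/max:", counts.min(), counts.max())          # roughly uniform
```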
@JoaoVitorBRgomes · 4 years ago
Thank you! Your lectures are a gift!
@doloressanchez3891 · 2 years ago
Excellent lecture, thank you very much for uploading and sharing your knowledge.
@connorfrankston5548 · 2 years ago
Very curious as to why sqrt(d) is the standard for the number of sampled features as opposed to something like log(d)
@maddoo23 · 2 years ago
How is the variance calculated for the graph shown in 32:32? Is it just the squared error calculated for some test set (or as shown earlier by the formula at around 27:00)?
@moumniable · 4 years ago
Thank you for your great lessons, prof! (From Morocco)
@Lfmpereira1 · 3 months ago
Amazing lecture.
@rakeshkumarmallik1545 · 2 years ago
Hi Professor Kilian, you said the estimator in random forests is an unbiased estimator [around 27:00]. I am not able to understand why it's unbiased; can you explain a bit about the unbiasedness? Thanks in advance.
@kilianweinberger698 · 2 years ago
It is unbiased because it never sees any of the left-out data (for each point you only take those trees that were trained without this point in the bootstrapped data set). Because of this, the estimate you are getting is the same as what you would obtain if you left the point out completely (as validation) and trained that many trees without it. Hope this helps.
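As a rough illustration of this mechanism (a sketch assuming integer class labels and scikit-learn decision trees, not the course's reference implementation), the out-of-bag 0/1 error can be computed by letting each point be judged only by trees that never saw it:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def oob_error(X, y, n_trees=100, seed=0):
    """0/1 out-of-bag error: each point is predicted only by trees that never saw it."""
    rng = np.random.default_rng(seed)
    n = len(X)
    votes = [[] for _ in range(n)]                 # OOB predictions collected per point
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)           # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)      # points this tree never trained on
        if len(oob) == 0:
            continue
        tree = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        for i, pred in zip(oob, tree.predict(X[oob])):
            votes[i].append(int(pred))
    # majority vote per point, 0/1 loss averaged over points with at least one OOB vote
    errs = [int(np.bincount(v).argmax() != y[i]) for i, v in enumerate(votes) if v]
    return float(np.mean(errs))

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
print("OOB 0/1 error:", oob_error(X, y))
```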
@rakeshkumarmallik1545 · 2 years ago
@@kilianweinberger698 Very happy to see you replying to me personally, Professor. Your reply is definitely helpful.
@AmitKumar-vy3so · 5 years ago
Thanks sir! Wonderful lectures!
@bolinsun9565 · 4 years ago
Really excellent explanation.
@frankysama100 · 3 years ago
Thanks for the great lecture on random forests!! I have a question regarding training and test errors for this particular algorithm for classification. By design, it seems that the training error (if you just refit the trained model on the training data directly) is very, very low (~0). Would it thus be appropriate to use either your OOB error or CV error (if you have the time to do CV) as your training error instead, to compare against your test error? In that case, if one were to use the OOB error as the model's representative training error to be compared against the test error, would it then be infeasible to compare the training and test errors on most metrics (e.g., precision, recall, F1-score), where only accuracy can be used (because that's how we derived the OOB error)?
@kilianweinberger698 · 3 years ago
Yes, you can use the OOB error as the validation error (e.g. you can stop adding trees if the OOB error stops declining). So essentially you get a validation error without holding out any data from the training process, which is a nice side effect of bagging.
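A minimal sketch of that idea, assuming scikit-learn's RandomForestClassifier (warm_start keeps the already-fitted trees and adds new ones; oob_score_ is the OOB accuracy):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

rf = RandomForestClassifier(n_estimators=25, warm_start=True,
                            oob_score=True, random_state=0)
best = -1.0
for n in range(25, 501, 25):           # grow the forest in chunks of 25 trees
    rf.set_params(n_estimators=n)
    rf.fit(X, y)                       # warm_start: previously fitted trees are kept
    print(n, "trees, OOB accuracy:", round(rf.oob_score_, 4))
    if rf.oob_score_ <= best + 1e-4:   # OOB score stopped improving -> stop adding trees
        break
    best = rf.oob_score_
```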
@omerfarukyasar4681 · 6 years ago
Thanks for this great lecture, very helpful
@Stormdaklak · 5 years ago
Thanks for sharing, great lecture.
@marialuizacantanhedewuilla7309 · 3 years ago
Do we have access to the projects that you talk about in class? This is the best machine learning course that I have found on the internet, and I would love to work on some implementations.
@Vishakh_Patel · 4 years ago
Prof. Weinberger, great lecture, a couple of questions: 1) Can you link material that explains the choice of k = sqrt(d)? 2) In Breiman (2001) he conjectures that AdaBoost is a random forest; has there been any advance in that direction? 3) Is there an implementation of random forests made of ball trees? (I am having a hard time thinking of an intuitive substitute for the "random feature selection" in this version.)
@kilianweinberger698 · 4 years ago
1) You can find the details in The Elements of Statistical Learning ( web.stanford.edu/~hastie/ElemStatLearn/ ) 2) Hmm, sorry, not sure. 3) Probably not. Ball trees are really a way to speed up nearest neighbor search, which in the end you don't have to do for RF. The analogy is more that they are similar tree structures, but they optimize different things.
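For reference, in scikit-learn the k features sampled at each split are controlled by max_features, and sqrt(d) is just the common default heuristic (a sketch of the options, not a recommendation):

```python
from sklearn.ensemble import RandomForestClassifier

rf_sqrt = RandomForestClassifier(max_features="sqrt")   # k = sqrt(d), the usual default
rf_log2 = RandomForestClassifier(max_features="log2")   # k = log2(d), also sometimes used
rf_frac = RandomForestClassifier(max_features=0.3)      # or any fraction of d worth trying
```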
@Vishakh_Patel · 4 years ago
@@kilianweinberger698 Thank you very much. Is this the only course you upload material for?
@mohammadshahadathhossain981 · 4 years ago
You are a good professor but you could be a great James Bond villain too!
@vijayshankar9529 · 4 years ago
Is alpha a hyperparameter? As far as I understand, boosting is for reducing the bias, so will it lead to overfitting?
@kilianweinberger698 · 4 years ago
Yes, it is. Boosting does overfit eventually, but in practice it is surprisingly resilient against it.
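To illustrate (a rough sketch assuming the alpha in question is the boosting step size, which corresponds to the learning_rate hyperparameter in scikit-learn's gradient boosting): one typically tunes it and watches the train/test gap to see the slow onset of overfitting.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for alpha in [0.01, 0.1, 1.0]:          # the step size alpha is a hyperparameter
    gb = GradientBoostingClassifier(learning_rate=alpha, n_estimators=300,
                                    random_state=0).fit(Xtr, ytr)
    print(f"alpha={alpha}: train acc {gb.score(Xtr, ytr):.3f}, test acc {gb.score(Xte, yte):.3f}")
```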
@mimmakutu · 4 years ago
I found that some texts tend to tune the tree depth, k, or both for random forests using grid search. Is that normal practice?
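For what it's worth, here is a minimal sketch of the kind of grid search those texts describe, assuming scikit-learn (whether the extra tuning pays off is dataset-dependent):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

param_grid = {
    "max_depth": [None, 5, 10, 20],          # tree depth (None = grow trees fully)
    "max_features": ["sqrt", "log2", 0.5],   # k, the number of features tried per split
}
search = GridSearchCV(RandomForestClassifier(n_estimators=200, random_state=0),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```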
@nguyenhuyanh9424 · 5 years ago
Thanks Prof, very helpful lecture.
@mrcoolpiano · 4 years ago
Hi Kilian, I have a question on bagging. When sampling our approximately i.i.d. sets D_i, why do we not sample n datapoints with replacement and then add to these the datapoints in D? In some sense my question is: would it be sensible to more strongly encode the distribution of D into our datasets D_i, much like we did in parameter smoothing? Thanks
@kilianweinberger698 · 4 years ago
If you sub-sample from the training set with replacement your samples are still from the same distribution as the original training set - just the samples are no longer independent. The intuition really comes from bootstrapping, which is a good way to estimate the variance. If you were to make the samples even closer to the training set you would likely reduce the variance of the various data sets and diminish the variance reduction effect of bagging. Hope this helps ...
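A toy illustration of the diversity argument (my own sketch, not from the lecture): two plain bootstrap samples share only part of D, whereas two "bootstrap + D" samples always share all of D, so the trees trained on them would be far more correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
a = set(rng.integers(0, n, size=n).tolist())   # points of D present in bootstrap sample 1
b = set(rng.integers(0, n, size=n).tolist())   # points of D present in bootstrap sample 2
print("plain bootstrap: fraction of D shared by two samples ~", len(a & b) / n)  # ~0.632**2 ~ 0.40
print("bootstrap + D:   fraction of D shared by two samples =", 1.0)             # always all of D
```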
@mrcoolpiano · 4 years ago
@@kilianweinberger698 I think I understand: the variance that is produced cancels when averaged over many datasets D_i. Thanks for your answer!
@globalSentry · 1 year ago
Thanks Professor 😊
@rockretroing · 5 years ago
Similar to Uday's question: the recitation by the PhD student expert in Gaussian processes that you mentioned in this lecture would be great to watch.
@kilianweinberger698 · 5 years ago
sorry, I don't think it was recorded. :-(
@mohamedbalabel4370 · 4 years ago
well explained, thanks a lot!
@mia__p · 4 years ago
This guy rules
@Chevignay · 4 years ago
really you are excellent! big thank you :-)
@vocabularybytesbypriyankgo1558 · 4 months ago
Thanks a lot !!
@juliocardenas4485 · 4 years ago
Beautiful!!
@tommgn2664 · 3 years ago
Hi! About the bias/variance demo for RF (kzbin.info/www/bejne/anaydISAnNZ0hbs): I understand that the bias is constant because we just take the average of more averages as the ensemble size increases. But does the variance term converge to the bias term when the ensemble size goes to infinity? I would have said no, since the bias is calculated by averaging over all possible training datasets, while the variance term is computed on one particular dataset. Is that correct? Thank you very much! ;)
@kilianweinberger698 · 3 years ago
You are right about the bias. The variance term does not become the bias term, because h does not converge to the expected label \bar{y}.
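One standard way to see that floor (this is the textbook decomposition, e.g. in The Elements of Statistical Learning, not a formula from this thread): for B identically distributed trees with variance \sigma^2 and pairwise correlation \rho,

\[
\operatorname{Var}\!\Big(\frac{1}{B}\sum_{b=1}^{B} h_b(x)\Big) \;=\; \rho\,\sigma^2 \;+\; \frac{1-\rho}{B}\,\sigma^2 \;\longrightarrow\; \rho\,\sigma^2 \quad (B \to \infty),
\]

so the ensemble variance levels off at a correlation-dependent floor rather than converging to the bias term.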
@JoaoVitorBRgomes · 4 years ago
17:30 Omg lol
@varunjindal1520 · 4 years ago
I have a shitty classifier ..... hahahaha
@born2sly116 · 4 years ago
So uhh what is this about…
@linxingyao9311 · 4 years ago
Dear Prof. Kilian, you will be ranked with Homer, Virgil, Dante, and Shakespeare in terms of machine learning lectures.