How do we put together lots of weak models into a STRONG model?
Comments: 51
@rmiliming 2 years ago
I really like how you can clearly explain complicated concepts in as little as 10 minutes, using just one whiteboard! Much better than some other videos, which tend to be long and use multiple slides. We are blessed to have you on KZbin!
@michaltrodler195 A year ago
Finally! The one man on all of KZbin who elucidated the subject with eloquence, demonstrating a remarkable command of the English language and a skillfully executed presentation. Deserved a subscribe 🎉
@Matt-tn2on 3 years ago
Man, I really love your videos. I live overseas and it’s always the first thing I watch when I wake up. Thank you for the clean explanations.
@hameddadgour 2 years ago
I finally understood the difference between bagging and boosting by watching your videos. Thank you!
@drsandeepvm5622 A year ago
I was so absorbed in your explanation that it took a comment to make me notice the mic wire problem. We're here for the great, simple explanations you provide for concepts of any complexity. Thanks for everything.
@tapanbagchi3835 A year ago
Ritvik, you are just superb! I am from IIT Kharagpur and I love your crystal clear teaching - Tapan Bagchi
@parhambateni A year ago
Thanks for your great skill in clarifying these concepts.
@oskeeg619 2 years ago
Thank you!! I now have a clear understanding of AdaBoost and can have a conversation about it.
@evgenyivanko9680 9 months ago
you are doing a great job
@sihatafnan5450 A year ago
That was a great explanation!
@ritvikmath A year ago
Glad it was helpful!
@williamfriedeman7078 2 years ago
Clear explanation, thank you!
@hahahaYL-h3x 5 months ago
Thanks!
@riccardobellide6975 3 years ago
Thank you for these videos, so useful!
@jijie133 A year ago
Great video!
@talaasoudalial-bimany6605 3 years ago
Thank you for the good and clear explanation; I'd request a stacked generalisation video if possible.
@jaivratsingh9966 8 months ago
excellent
@thomaspavelka7335 A year ago
Thanks a lot!
@ritvikmath A year ago
You're welcome!
@lancelotdulak1640 3 years ago
Very clear and useful!
@ritvikmath 3 years ago
Glad it was helpful!
@gemon39 3 years ago
Amazing video! Could you please explain XGBoost?
@ritvikmath 3 years ago
good suggestion!
@ajithshenoy5566 3 years ago
@@ritvikmath +1 for XGBoost, please
@hahahaYL-h3x 5 months ago
Just a quick comment. Your teaching is brilliant! I learn a lot from you. But I found the organization of topics (the ordering in the data concepts stream) a little bit confusing.
@OneworldKW 6 months ago
I need to practice 😮
@verayan2019 3 years ago
omg, so well explained
@ayseyesil1532 25 days ago
Why did you draw different lengths for 3 and 4? In the first step, doesn't a correct prediction yield a weight of exp(-alpha1) and a wrong prediction yield exp(alpha1)? So the correct ones should all have the same size, and the incorrect ones should be larger but also the same size as each other. And how do you choose the alpha values?
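For readers with the same question: in standard binary AdaBoost, each round multiplies the weight of every correctly classified point by exp(-alpha_t) and of every misclassified point by exp(+alpha_t), then renormalizes, so within a single round all correct points do share one size and all incorrect points share another, larger size. Points drawn at different sizes were presumably misclassified in different rounds, and each round has its own alpha, computed from that round's weighted error rather than chosen by hand. A minimal sketch of one round, on made-up toy data:

```python
import numpy as np

# Made-up toy round: 5 points, uniform starting weights, one weak learner.
y = np.array([1, 1, -1, -1, 1])   # true labels
h = np.array([1, 1, -1, 1, -1])   # weak learner's predictions (last two wrong)
w = np.full(5, 1 / 5)             # current sample weights

eps = w[h != y].sum()                  # weighted error of this learner (0.4 here)
alpha = 0.5 * np.log((1 - eps) / eps)  # this round's coefficient

# Correct points shrink by exp(-alpha); wrong points grow by exp(+alpha).
w = w * np.exp(-alpha * y * h)
w /= w.sum()                           # renormalize to sum to 1

print(alpha, w)   # both misclassified points end up with the same, larger weight
```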
@jarrelldunson 3 years ago
Thanks again for a great video. How do you work with scaling the alpha constants a1, a2? How do you recognize which steps need adjusting (e.g., increasing/decreasing the alpha constants a1, a2, etc.)? Or would that be overfitting?
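On tuning: in classic AdaBoost the per-round alphas are computed from each round's weighted error, not adjusted individually, so there is no per-step knob to turn. What practitioners usually tune instead is a single shrinkage factor applied to every alpha, plus the number of rounds, monitored on held-out data to keep overfitting in check. A sketch with scikit-learn's AdaBoostClassifier (assuming scikit-learn 1.2 or newer; older versions call the `estimator` argument `base_estimator`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# learning_rate shrinks every computed alpha by a constant factor;
# n_estimators caps the number of boosting rounds.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # decision stumps
    n_estimators=200,
    learning_rate=0.5,
    random_state=0,
)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # validate here to decide if more rounds overfit
```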
@TheMarComplex 2 years ago
Thank you!
@ritvikmath 2 years ago
No problem!
@kevinscaria 11 months ago
Can someone explain how the value of alpha is set? In the original implementation it was the log of the errors, but since this looks like a generalized formula, is alpha a hyperparameter?
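To this and the similar questions above: in the original binary AdaBoost of Freund and Schapire, alpha is not a hyperparameter; it has a closed form in the round's weighted error rate eps_t, namely alpha_t = 0.5 * ln((1 - eps_t) / eps_t), which is indeed a log of errors. Generalized presentations often leave alpha abstract, which can make it look like a free parameter. A quick worked check of the closed form:

```python
import math

def adaboost_alpha(eps: float) -> float:
    """Closed-form AdaBoost coefficient for a weak learner with weighted error eps."""
    return 0.5 * math.log((1 - eps) / eps)

print(adaboost_alpha(0.10))  # strong learner -> ~1.10, gets a big vote
print(adaboost_alpha(0.30))  # decent learner -> ~0.42
print(adaboost_alpha(0.49))  # near coin flip -> ~0.02, almost ignored
```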
@thespicycabbage 3 years ago
Thx bro!
@ritvikmath 3 years ago
No problem
@Lucifer-wd7gh 3 years ago
Brother, I request: please make a video on gradient descent 🙏
@Donaid 3 years ago
Thank you for the video
@bhargav8232 3 years ago
Thank you
@yannickpezeu3419 3 years ago
Thanks
@n.m.c.5851 2 years ago
Should have started commenting for support a long time ago.
@emuccino 3 years ago
Fastest subscribe
@ccuuttww 3 years ago
Did he just skip over how alpha is made?
@kdhlkjhdlk 3 years ago
Is AdaBoost even used for anything anymore, with random forests and gradient boosting being so much better?
@sansin-dev 3 years ago
Only to pass exams.
@phantinh2613 3 years ago
What if I have 3 results (classes) now? Help!
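The whiteboard derivation is for two classes (labels +1/-1). For three or more classes, a common extension is SAMME (the multiclass AdaBoost variant implemented in scikit-learn), which adds a log(K - 1) term to alpha so weak learners only need to beat random guessing among K classes. A small sketch of that coefficient:

```python
import math

def samme_alpha(eps: float, n_classes: int) -> float:
    """SAMME (multiclass AdaBoost) coefficient: the log(K - 1) term means a
    weak learner only has to beat random guessing, i.e. eps < 1 - 1/K."""
    return math.log((1 - eps) / eps) + math.log(n_classes - 1)

print(samme_alpha(0.30, 2))  # binary case: the usual form, up to the 1/2 factor
print(samme_alpha(0.60, 3))  # 3 classes: 60% error still beats random (66.7%)
```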
@__-de6he 2 years ago
But the second learner can correct the previous learner's mistakes while making its own. How is this problem resolved?
@minhaoling3056 2 years ago
You can tune the alphas.
@__-de6he 2 years ago
@@minhaoling3056 I don't know what "alphas" are 🙂
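To the question that started this thread: no single learner is expected to fix everything. The final AdaBoost model is a weighted vote, H(x) = sign(sum over t of alpha_t * h_t(x)), so a later learner's own mistakes can be outvoted by the rest of the ensemble; the "alphas" mentioned above are exactly those per-learner vote weights. A toy sketch of the vote, with made-up predictions:

```python
import numpy as np

# Made-up example: three weak learners' +1/-1 votes on four points.
preds = np.array([
    [ 1,  1, -1, -1],   # learner 1
    [ 1, -1, -1,  1],   # learner 2 (wrong on points 2 and 4, say)
    [ 1,  1, -1, -1],   # learner 3
])
alphas = np.array([0.9, 0.3, 0.7])   # per-learner vote weights

# Final classifier: sign of the alpha-weighted sum of votes.
H = np.sign(alphas @ preds)
print(H)   # learner 2's mistakes are outvoted: [ 1.  1. -1. -1.]
```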
@cleansquirrel2084 3 years ago
Thank you
@cleansquirrel2084 3 years ago
Best thumbnail yet!
@brave_v A year ago
Thank you for this awesome video! Is my understanding correct: gradient boosting assigns the same weight to all errors, but AdaBoost assigns different weights to different errors? I'm also confused about why AdaBoost uses just the exponential loss function, whereas with gradient boosting you can choose whatever loss function you want.
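On the closing question: close, but the usual contrast is slightly different. AdaBoost re-weights the training points themselves, so that misclassified points count more for the next learner; gradient boosting leaves the points alone and fits each new learner to the negative gradient of the loss (the pseudo-residuals). AdaBoost can be derived as stagewise additive modeling under the exponential loss specifically, which is why it is tied to that loss, while gradient boosting accepts any differentiable loss. A minimal sketch of gradient boosting under squared loss, where the negative gradient is just the residual:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

# Each round fits a small tree to the residuals, which are the negative
# gradient of the squared loss 0.5 * (y - F)^2 with respect to F.
F = np.zeros(200)   # current ensemble prediction
nu = 0.1            # shrinkage (learning rate)
trees = []
for _ in range(100):
    residuals = y - F
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    F += nu * tree.predict(X)
    trees.append(tree)

print(np.mean((y - F) ** 2))  # training MSE falls as rounds accumulate
```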