Gradient Boosting: Data Science's Silver Bullet

61,996 views

ritvikmath

1 day ago

A dive into the all-powerful gradient boosting method!
My Patreon: www.patreon.co...

Comments: 84
@ew6392 2 years ago
Man, I've discovered your channel and am watching your videos non-stop. No matter the topic, it's as if a stream of light shines through and makes it all understandable. You've got a gift.
@zhenwang5872 1 year ago
Agreed! You've got a gift for shining light on these topics.
@sELFhATINGiNDIAN 4 months ago
No
@soroushesfahanian5625 3 months ago
The last part of 'Why does it work?' made all the difference.
@javierperezvargas9132 3 months ago
Totally agree.
@shnibbydwhale 3 years ago
You always make your content so easy to understand. Just the right amount of math mixed with simple examples that clearly illustrate the main ideas of whatever topic you are talking about. Keep up the great work!
@KameshwarChoppella 3 months ago
Non-math person here, and even I could understand this tutorial. I'll probably have to watch it a couple more times because I'm a bit slow in my 40s now, but you really have a gift. Keep up the good work.
@mrirror2277 3 years ago
Hey, thanks a lot. I was literally just searching for gradient boosting today, and your explanations have always been great. Good pacing and clear explanations, even with some math involved.
@marcosrodriguez2496 2 years ago
Your channel is criminally underrated. Just one question: you mentioned using linear weak learners, i.e., f(x) is a linear function of x. In that case, how would you ever get anything other than a linear function, no matter how many iterations you run? At the end of the day, you are just adding multiple linear functions. It seems this whole procedure only makes sense if you pick a nonlinear weak learner.
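The commenter's observation is easy to verify numerically: a sum of linear fits is still one linear fit, which is why shallow trees are the usual weak learner. Below is a minimal scikit-learn sketch; the data, the `boost` helper, and all names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(0, 0.1, size=200)  # deliberately nonlinear target

def boost(make_learner, n_rounds=50, lr=0.1):
    """Generic squared-error boosting: fit each weak learner to the residuals."""
    pred = np.zeros_like(y)
    for _ in range(n_rounds):
        learner = make_learner().fit(X, y - pred)  # residuals = negative gradient
        pred += lr * learner.predict(X)
    return pred

mse = lambda p: float(np.mean((y - p) ** 2))
# Linear weak learners: the ensemble stays a straight line, so error stays high.
print("linear weak learners:", mse(boost(LinearRegression)))
# Depth-1 tree stumps: the ensemble can bend to fit the parabola.
print("stump weak learners: ", mse(boost(lambda: DecisionTreeRegressor(max_depth=1))))
```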
@jiangjason5432 1 year ago
Great video! A bonus for using squared error (which is commonly used) as the loss function for regression problems: the gradient of the squared error loss is just the residual! So each weak learner is essentially trained on the previous model's residuals, which makes sense intuitively. (I think that's why each gradient is called "r"?)
@samirkhan6195 14 days ago
Yeah, squared error is easy to differentiate compared to alternatives like root squared error, and it doesn't depend on the number of observations the way mean squared error or root mean squared error do. If you want the gradient to be exactly equal to the residual, you can take (1/2)(squared error) as the loss function.
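For the record, the algebra behind this thread is one line: with the 1/2 factor, the negative gradient of the loss with respect to the prediction is exactly the residual r.

```latex
L(y, \hat{y}) = \tfrac{1}{2}(y - \hat{y})^2
\quad\Longrightarrow\quad
-\frac{\partial L}{\partial \hat{y}} = y - \hat{y} = r
```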
@nikhildharap4514 2 years ago
You are awesome, man! I just love coming back to your videos every time. They are just the right length and the perfect depth. Kudos!
@АннаПомыткина-и8ш 2 years ago
Your videos on data science are awesome! They help me a lot in preparing for my university exams. Thank you very much!
@MiK98Mx 1 year ago
Incredible video; you make a really hard concept understandable. Keep teaching like this and big things will come!
@alicedennieau5459 1 year ago
Completely agree, you are changing our lives! Cheers!
@hameddadgour 5 months ago
This is a fantastic video. Thank you for sharing!
@ritvikmath 5 months ago
Glad you enjoyed it!
@honeyBadger582 3 years ago
Great video as always! I would love it if you could build on this video and talk about XGBoost and the math behind it next!
@pgbpro20 3 years ago
I worked on this 5(?) years ago, but needed a reminder - thanks!
@arjungoud3450 2 years ago
Man, you're the 5th person I've tried; no one has explained this as simply and clearly as you. Thanks a ton.
@MiladDana-b7h 2 months ago
That was very clear and useful, thank you.
@ToughLuck808 1 year ago
Unbelievable variety of topics on this channel! What is your day job? You have an amazing amount of knowledge.
@luismikalim2535 1 year ago
Thanks for the effort you put in to help your viewers understand; it really helped me understand the concept behind gradient descent!
@rajrehman9812 2 years ago
Can the mathematics behind ML be less dreadful and more fun? Well, yes, if we have a tutor like him... amazing explanation ❤️
@jakobforslin6301 2 years ago
You're an amazing teacher, thanks a lot from Sweden!
@Andres186000 3 years ago
Thanks for the video, also really like the whiteboard format
@Matt_Kumar 2 years ago
Any chance you're interested in doing an intro video on the EM algorithm with a toy example? Love your videos; please keep them coming!
@Sanatos98 1 year ago
Please don't stop making these videos.
@adityamohan7372 7 months ago
Finally understood it really well, thanks!
@jonerikkemiwarghed7652 2 years ago
You are doing a great job, really enjoying your videos.
@Halo-uz9nd 3 years ago
Phenomenal. Thank you again for making these videos
@Ranshin077 3 years ago
Very awesome, thanks for the explanation 👍
@GodeyAmp 4 months ago
Great video brother.
@markus_park 1 year ago
Thank you so much! You just blew my mind
@ritvikmath 1 year ago
You're very welcome!
@ИльдарАлтынбаев-г1ь 4 months ago
Man, you are amazing!
@jeroenritmeester73 2 years ago
In words, is it correct to describe gradient boosting as multiple regression models combined, where each subsequent model aims to correct the error that the previous models couldn't account for?
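That phrasing matches the standard formulation. With learning rate η, and each weak learner f_m fit to the negative gradient of the loss at the current predictions (for squared error, simply the residuals), the ensemble after M rounds is:

```latex
F_M(x) = F_0(x) + \eta \sum_{m=1}^{M} f_m(x),
\qquad
f_m(x_i) \;\approx\; -\left.\frac{\partial L(y_i, \hat{y}_i)}{\partial \hat{y}_i}\right|_{\hat{y}_i = F_{m-1}(x_i)}
```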
@domr.2694 2 years ago
Thank you for this good explanation.
@joachimheirbrant1559 1 year ago
Thanks man, you explain it so much better than my uni professor :)
@ritvikmath 1 year ago
Glad to hear that!
@bassoonatic777 3 years ago
Excellently explained. I was just reviewing this, and it was very helpful to see how someone else thinks through it.
@dialup56k 1 year ago
Well done. Gee, there is something to be said for a good explanation and a whiteboard. Fantastic explanation.
@ritvikmath 1 year ago
Thanks!
@garrettosborne4364 2 years ago
Best boosting definition yet.
@ganzmit 28 days ago
Nice video series.
@EW-mb1ih 2 years ago
Let's talk about the first word in gradient boosting..... boosting :D Nice video as always.
@kaustabhchakraborty4721 1 year ago
Just asking: is the concept of gradient boosting similar to a Taylor series? Each term is not very good at approximating the function on its own, but as you add more terms, the approximation gets better.
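The analogy holds in one respect: both build an approximation as a sum of simple terms added one at a time. The key difference is that Taylor terms are fixed by derivatives at a single point, while each boosting term is chosen greedily to reduce the training loss:

```latex
f(x) \approx \sum_{m=0}^{M} \frac{f^{(m)}(a)}{m!}(x - a)^m
\qquad\text{vs.}\qquad
F_M(x) = F_0(x) + \eta \sum_{m=1}^{M} f_m(x)
```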
@benjaminwilson1345 1 year ago
Perfect, really well done!
@ritvikmath 1 year ago
Thanks!
@zAngus 5 months ago
Thumbs up for the pen catch recovery at the start.
@ritvikmath 5 months ago
😂
@Sam-uy1db 8 months ago
So, so well explained.
@rickharold7884 3 years ago
Hmmm, very interesting. Something to think about. Thanks!
@estebanortega3895 2 years ago
Amazing video. Thanks.
@ianclark6730 3 years ago
Love the videos! Great topic
@sophia17965 1 year ago
Thanks! Great videos.
@emirhandemir3872 2 months ago
The first time I watched this video, I understood shit! Now, the second time, having studied the subject and learned more, it is much clearer :)
@tobiasfan5407 2 years ago
You're the man. Thank you!
@lashlarue7924 1 year ago
Bro, it's late AF and I'm not gonna lie, I'm passing out now, but I'mma DEFINITELY catch this shit tomorrow. 👍
@ritvikmath 1 year ago
😂 Come back anytime.
@lashlarue7924 4 months ago
@ritvikmath Well, it's been a year, but I came back! 😂
@user-xi5by4gr7k 3 years ago
Great video! I've never seen gradient descent used with the derivative of the loss function with respect to the prediction. Not sure I understand it 100%, but if the gradient were, for example, -1 for r_i, would the subsequent weak learner fit a model to -1? Or would the new weak learner fit a model to (old prediction - (learning rate * gradient))? Would love to see a simple example worked out for 1 or 2 iterations if possible. Thank you! :)
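To the first question: the weak learner is fit to the negative gradients themselves (so yes, to the r_i values), and the "old prediction - learning rate * gradient" update happens when its scaled predictions are added to the ensemble. A tiny two-iteration sketch with made-up data, assuming squared-error loss:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.2, 1.9, 3.2, 3.9])
lr = 0.5

pred = np.full_like(y, y.mean())            # iteration 0: constant initial model
for it in (1, 2):
    residuals = y - pred                    # negative gradient of (1/2)(y - pred)^2
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
    pred = pred + lr * stump.predict(X)     # one gradient step in function space
    print(f"iteration {it}: remaining residuals = {np.round(y - pred, 3)}")
```

The printed residuals shrink each round, which is the gradient descent happening point by point.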
@xmanxman1527 11 months ago
Isn't the gradient the partial derivative with respect to the features (x_i), not with respect to the prediction (ŷ)?
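Both derivatives exist, but they serve different purposes: ordinary gradient descent differentiates the loss with respect to model parameters, while gradient boosting treats the predictions themselves as the free quantities and takes a descent step in function space, which a weak learner is then fit to approximate so the step generalizes to unseen x:

```latex
\hat{y}_i \;\leftarrow\; \hat{y}_i - \eta\, \frac{\partial L(y_i, \hat{y}_i)}{\partial \hat{y}_i}
```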
@sohailhosseini2266 1 year ago
Thanks for sharing!
@mitsuinormal 9 months ago
Yeiii, you are the best!!
@m.badreddine9466 8 months ago
Move over so I can get a screenshot 😂. Brilliant explanation, well done.
@chiemekachinaka5236 18 days ago
Thanks man
@7vrda7 1 year ago
Great vid!
@chocolateymenta 2 years ago
Great video.
@arjungoud3450 2 years ago
Can you please make a comparison video on XGBoost and its advantages? Thank you.
@jamolbahromov4440 2 years ago
Hi, thank you for this informative video. I have some trouble understanding the graph at 5:27. How do you map out the curve on the graph if you have only a single pair of prediction and loss-function values? Do you create some mesh out of the given pair?
@gayathrigirishnair7405 1 year ago
Come to think of it, concepts from gradient boosting apply perfectly to less mathematical aspects of life too. Just take a tiny step in the right direction and repeat!
@ritvikmath 1 year ago
Yes, love when math reflects life!
@VictorianoOchoa 1 year ago
Are the initial weak learners randomly selected? If so, can this initial random selection be optimized?
@adinsolomon1626 3 years ago
Learners together strong
@saravankumargowthamv9338 1 year ago
Very good content, but it would be great if you could stand off to the side now and then so we can see the board. Otherwise, a great session.
@ritvikmath 1 year ago
Thanks for the suggestion!
@ashutoshpanigrahy7326 1 year ago
After 4 hours of searching in vain, this has truly proven to be a savior!
@regularviewer1682 2 years ago
Honestly, StatQuest has a much better way of explaining this. First he explains the logic by means of an example, then he explains the algebra afterwards. I'd recommend his videos on gradient boosting to anyone who didn't understand this one. Without having seen his videos, I would have been unable to understand the algebra.
@SimplyAndy 1 year ago
Ripped...
@sharmakartikeya 3 years ago
Hello Ritvik, are you on LinkedIn? Would love to connect with you!
@NedaJalali-tz7vw 1 year ago
That was amazing. Thanks a lot.