Gradient Boosting: Data Science's Silver Bullet

71,813 views

ritvikmath

1 day ago

Comments: 87
@ew6392 2 years ago
Man, I've discovered your channel and am watching your videos non-stop. No matter which topic, it is ALL as if a stream of light shines and makes it all understandable. You've got a gift.
@zhenwang5872 1 year ago
Agreed! You've got a gift for shining light on these topics.
@sELFhATINGiNDIAN 7 months ago
No
@KameshwarChoppella 7 months ago
Non-math person here, and even I could understand this tutorial. Probably have to see it a couple more times because I'm a bit slow in my 40s now. But you really have a gift. Keep up the good work.
@shnibbydwhale 3 years ago
You always make your content so easy to understand. Just the right amount of math mixed with simple examples that clearly illustrate the main ideas of whatever topic you are talking about. Keep up the great work!
@mrirror2277 3 years ago
Hey, thanks a lot, I was literally just searching for gradient boosting today, and your explanations have always been great. Good pacing and explanations even with some math involved.
@soroushesfahanian5625 6 months ago
The last part of 'Why does it work?' made all the difference.
@javierperezvargas9132 6 months ago
Totally agree
@hameddadgour 9 months ago
This is a fantastic video. Thank you for sharing!
@ritvikmath 9 months ago
Glad you enjoyed it!
@jiangjason5432 1 year ago
Great video! A bonus for using squared error loss (which is commonly used) as the loss function for regression problems: the negative gradient of squared error loss is just the residual! So each weak learner is essentially trained on the previous residuals, which makes sense intuitively. (I think that's why each gradient is called "r"?)
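For reference, that property is easy to see in code. Below is a minimal sketch of the idea (not the video's code; it assumes scikit-learn's DecisionTreeRegressor as the weak learner, with made-up data): with squared error loss, every boosting round just fits a small tree to the current residuals.

```python
# Minimal gradient boosting sketch for regression with squared error loss.
# With L = 0.5 * (y - y_hat)^2, the negative gradient w.r.t. y_hat is the
# residual y - y_hat, so each weak learner is trained on the residuals.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

learning_rate = 0.1
y_hat = np.full_like(y, y.mean())  # f0: a constant initial prediction
trees = []

for _ in range(50):
    residuals = y - y_hat                     # negative gradient = residual
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residuals)                    # weak learner fit to residuals
    y_hat += learning_rate * tree.predict(X)  # take a small step
    trees.append(tree)

print("training MSE:", np.mean((y - y_hat) ** 2))
```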
@samirkhan6195 3 months ago
Yeah, squared error is easily differentiable compared to alternatives like root squared error, and it doesn't depend on the number of observations the way mean squared error or root mean squared error do. If you want the negative gradient exactly equal to the residual, you can take (1/2)(squared error) as the loss function.
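Writing out the derivative with the 1/2 convention mentioned above:

$$L(y, \hat{y}) = \tfrac{1}{2}(y - \hat{y})^2 \quad\Longrightarrow\quad -\frac{\partial L}{\partial \hat{y}} = y - \hat{y},$$

so the negative gradient with respect to the prediction is exactly the residual, with no stray factor of 2.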
@nikhildharap4514 2 years ago
You are awesome, man! I just love coming back to your videos every time. They are just the right length, and the perfect depth. Kudos!
@arjungoud3450 2 years ago
Man, you're the 5th person I've tried; no one has explained it as simply and clearly as you. Thanks a ton.
@АннаПомыткина-и8ш 3 years ago
Your videos on data science are awesome! They help me a lot in preparing for my university exam. Thank you very much!
@pgbpro20 3 years ago
I worked on this 5(?) years ago, but needed a reminder - thanks!
@MiK98Mx 2 years ago
Incredible video, you make a really hard concept understandable. Keep teaching like this and big things will come!
@alicedennieau5459 2 years ago
Completely agree, you are changing our lives! Cheers!
@luismikalim2535 1 year ago
Thanks for the effort you put in to help your watchers understand, it really helped me understand the concept behind gradient descent!
@jakobforslin6301 2 years ago
You're an amazing teacher, thanks a lot from Sweden!
@Sanatos98 2 years ago
Please don't stop making these videos
@chau8719 16 days ago
Wow, thank you so much for this amazingly clear video explanation 🤗!!! Instantly subscribed :)
@honeyBadger582 3 years ago
Great video as always! I would love it if you could build on this video and talk about XGBoost and the math behind it next!
@Andres186000 3 years ago
Thanks for the video, also really like the whiteboard format
@marcosrodriguez2496 3 years ago
Your channel is criminally underrated. Just one question: you mentioned using linear weak learners, i.e. f(x) is a linear function of x. In that case, how would you ever get anything other than a linear function after any number of iterations? At the end of the day, you are just adding multiple linear functions. It seems this whole procedure only makes sense if you pick a nonlinear weak learner.
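That point is sound: a sum of linear functions is itself linear, which is why shallow trees are the usual weak learner. A quick sketch of the effect (assuming scikit-learn, with made-up data): with ordinary least squares as the weak learner, the second boosting round learns essentially nothing, because OLS residuals are orthogonal to the features.

```python
# Boosting with a linear weak learner stalls after round one: the OLS
# residuals are orthogonal to the features, so a second linear model fit
# to those residuals has (numerically) zero coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 2.0 * np.sin(X[:, 0])  # nonlinear target

f1 = LinearRegression().fit(X, y)
residuals = y - f1.predict(X)
f2 = LinearRegression().fit(X, residuals)  # second "boosting" round

print(f2.coef_)  # ~ [0, 0, 0]: nothing left for a linear learner to add
```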
@Halo-uz9nd 3 years ago
Phenomenal. Thank you again for making these videos
@jonerikkemiwarghed7652 2 years ago
You are doing a great job, really enjoying your videos.
@adityamohan7372 10 months ago
Finally understood it really well, thanks!
@garrettosborne4364 2 years ago
Best boosting definition yet.
@joachimheirbrant1559 1 year ago
Thanks man, you explain it so much better than my uni professor :)
@ritvikmath 1 year ago
Glad to hear that!
@Ranshin077 3 years ago
Very awesome, thanks for the explanation 👍
@grogu808 1 year ago
Unbelievable variety of topics on this channel! What is your day job? You have an amazing amount of knowledge.
@MiladDana-b7h 5 months ago
That was very clear and useful, thank you
@markus_park 1 year ago
Thank you so much! You just blew my mind
@ritvikmath 1 year ago
You're very welcome!
@dialup56k 1 year ago
Well done - gee, there is something to be said for a good explanation and a whiteboard. Fantastic explanation.
@ritvikmath 1 year ago
Thanks!
@rajrehman9812 2 years ago
Can the mathematics behind ML be less dreadful and more fun? Well, yes, if we have a tutor like him... amazing explanation ❤️
@benjaminwilson1345 1 year ago
Perfect, really well done!
@ritvikmath 1 year ago
Thanks!
@GodeyAmp 7 months ago
Great video, brother.
@domr.2694 2 years ago
Thank you for this good explanation.
@bassoonatic777 3 years ago
Excellently explained. I was just reviewing this, and it was very helpful to see how someone else thinks through this.
@parthicle 2 years ago
You're the man. Thank you!
@Sam-uy1db 11 months ago
So, so well explained
@estebanortega3895 2 years ago
Amazing video. Thanks.
@ИльдарАлтынбаев-г1ь 8 months ago
Man, you are amazing!
@ianclark6730 3 years ago
Love the videos! Great topic
@sophia17965 1 year ago
Thanks! Great videos.
@sohailhosseini2266 1 year ago
Thanks for sharing!
@EW-mb1ih 3 years ago
Let's talk about the first word in gradient boosting..... boosting :D Nice video as always
@7vrda7 2 years ago
Great vid!
@ganzmit 4 months ago
Nice video series
@Artem_Vashina 19 days ago
Hi! Why do we use f2(x) instead of the raw r1_hat? I mean, why make predictions of the residuals and use those if we already have the exact values of the gradient?
@KevinGodfreyVerpula 1 month ago
One question: in Step 3, is your target variable the gradient with respect to the previous prediction? If so, don't you think it could become infinite, so that we'd be trying to fit something to infinity?
@zAngus 9 months ago
Thumbs up for the pen catch recovery at the start.
@ritvikmath 9 months ago
😂
@kaustabhchakraborty4721 1 year ago
Just asking: is the concept of gradient boosting similar to a Taylor series? Each term is not very good at approximating the function, but as you add more terms, the approximation gets better.
@Matt_Kumar 3 years ago
Any chance you're interested in doing an intro video on the EM algorithm with a toy example? Love your videos, please keep them coming!
@jamolbahromov4440 2 years ago
Hi, thank you for this informative video. I have some trouble understanding the graph at 5:27. How do you map out the curve on the graph if you have a single pair of prediction and loss values? Do you create some mesh out of the given pair?
@chocolateymenta 2 years ago
Great video
@jeroenritmeester73 2 years ago
In words, is it correct to describe gradient boosting as multiple regression models combined, where each subsequent model aims to correct the error that the previous models couldn't account for?
@mitsuinormal 1 year ago
Yeiii you are the best !!
@chiemekachinaka5236 3 months ago
Thanks man
@VictorianoOchoa 1 year ago
Are the initial weak learners randomly selected? If so, can this initial random selection be optimized?
@user-xi5by4gr7k 3 years ago
Great video! I've never seen gradient descent used with the derivative of the loss function with respect to the prediction. Not sure if I understand it 100%, but if the gradient were, for example, -1 for ri, would the subsequent weak learner fit a model to -1? Or would the new weak learner fit a model to (old pred - (learning rate * gradient))? Would love to see a simple example worked out for 1 or 2 iterations if possible. Thank you! :)
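For what it's worth, the weak learner is fit directly to the negative gradient values (so to -1 in the example above); the learning rate then scales that learner's contribution when the predictions are updated. Here is a tiny two-iteration sketch along the requested lines, with made-up numbers and scikit-learn stumps as the weak learners:

```python
# Two hand-checkable iterations of gradient boosting with squared error.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.0, 2.0, 5.0, 6.0])
lr = 0.5

y_hat = np.full(4, y.mean())  # f0 = 3.5 for every point
for step in (1, 2):
    r = y - y_hat                                         # negative gradients
    stump = DecisionTreeRegressor(max_depth=1).fit(X, r)  # fit to gradients
    y_hat = y_hat + lr * stump.predict(X)                 # scaled update
    print(f"step {step}: y_hat = {y_hat}")
# step 1: y_hat = [2.5 2.5 4.5 4.5]
# step 2: y_hat = [2.  2.  5.  5. ]
```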
@rickharold7884 3 years ago
Hmmm, very interesting. Something to think about. Thx
@tehgankerer 2 months ago
It's not super clear to me how or where the learning rate comes into play here, or what its relation to the scaling factor gamma is.
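One common formulation (Friedman's; the notation here is an assumption, not taken from the video) keeps the two separate: the multiplier $\gamma_m$ comes from a line search for the current weak learner $f_m$, while the learning rate $\nu$ is a fixed shrinkage applied to every step:

$$\gamma_m = \arg\min_{\gamma} \sum_i L\big(y_i,\ F_{m-1}(x_i) + \gamma\, f_m(x_i)\big), \qquad F_m(x) = F_{m-1}(x) + \nu\, \gamma_m\, f_m(x), \quad \nu \in (0, 1].$$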
@gayathrigirishnair7405 1 year ago
Come to think of it, concepts from gradient boosting apply perfectly to less mathematical aspects of life too. Just take a tiny step in the right direction and repeat!
@ritvikmath 1 year ago
Yes, love it when math reflects life!
@m.badreddine9466 11 months ago
Move on so I can get a screenshot 😂. Brilliant explanation, well done.
@emirhandemir3872 5 months ago
The first time I watched this video, I understood shit! The second time, having studied the subject in between, I learned more :). It is much clearer now :)
@arjungoud3450 2 years ago
Can you please make a video on XGBoost and its advantages, with comparisons? Thank you.
@xmanxman1527 1 year ago
Isn't the gradient the partial derivative with respect to the feature (xi), not with respect to the prediction (ŷ)?
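It is with respect to the prediction: gradient boosting does descent in function space, so the loss is differentiated with respect to the model's output at each training point, while the features stay fixed. In the usual notation:

$$r_{im} = -\left[ \frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)} \right]_{F = F_{m-1}}.$$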
@ashutoshpanigrahy7326 2 years ago
After 4 hours of searching in vain, this has truly proven to be a savior!
@adinsolomon1626 3 years ago
Learners together strong
@lashlarue7924 1 year ago
Bro, it's late AF and I'm not gonna lie, I'm passing out now, but I'mma DEFINITELY catch this shit tomorrow. 👍
@ritvikmath 1 year ago
😂 come back anytime
@lashlarue7924 7 months ago
@ritvikmath Well, it's been a year, but I came back! 😂
@saravankumargowthamv9338 1 year ago
Very good content, but it would be great if you could stand off to the corner now and then so we can get a look at the whole board; otherwise, a great session.
@ritvikmath 1 year ago
Thanks for the suggestion!
@regularviewer1682 2 years ago
Honestly, StatQuest has a much better way of explaining this. First he explains the logic by means of an example, and then he explains the algebra afterwards. I'd recommend his videos on gradient boosting for anyone who didn't understand this. Without having seen his videos on it, I would have been unable to understand the algebra.
@SimplyAndy 2 years ago
Ripped...
@sharmakartikeya 3 years ago
Hello Ritvik, are you on LinkedIn? Would love to connect with you!
@NedaJalali-tz7vw 2 years ago
That was amazing. Thanks a lot.
Inverse Transform Sampling : Data Science Concepts
10:54
ritvikmath
60K views
Random Forests : Data Science Concepts
15:56
ritvikmath
49K views
Gradient Boost Part 1 (of 4): Regression Main Ideas
15:52
StatQuest with Josh Starmer
853K views
Boosting - EXPLAINED!
17:31
CodeEmporium
51K views
Gradient Descent, Step-by-Step
23:54
StatQuest with Josh Starmer
1.4M views
Bayesian Linear Regression : Data Science Concepts
16:28
ritvikmath
86K views
AdaBoost : Data Science Concepts
12:26
ritvikmath
19K views
Gradient Boost Part 2 (of 4): Regression Details
26:46
StatQuest with Josh Starmer
305K views
XGBoost Made Easy | Extreme Gradient Boosting | AWS SageMaker
21:38
Prof. Ryan Ahmed
41K views
Why You Shouldn't Trust Your ML Models (...too much)
16:07
ritvikmath
6K views