Lucid explanation, free of cost. Your passion to make the concept crystal clear is very much evident in your eyes... Hats off!!!
@shubhamkohli2535 4 years ago
The only person who is providing this level of knowledge free of cost. Really appreciate it.
@iamfavoured9142 2 years ago
100 years of blessings for you. You just gained a subscriber!
@YouKnowMe-123y 1 year ago
You are helping many ML enthusiasts free of cost... Thank you
@yadikishameer9587 3 years ago
I never watched your videos, but after watching this one I regret ignoring your channel. You are a worthy teacher and data scientist.
@aelitata9662 4 years ago
I was desperate to learn this topic, and all I knew was y=mx+c. I think this is the clearest explanation I've watched on YouTube. Thank you sooooo much, and I love your enthusiasm when you explained the confusing parts.
@AnkJyotishAaman 4 years ago
This guy is legit!! Hats off for the explanation!! Loved it sir, thanks
@tenvillagesahead4192 3 years ago
Brilliant. I searched all over the net but couldn't find such an easy yet detailed explanation of Regularization. Thank you very much! Very much considering joining the membership.
@harshstrum 4 years ago
Thank you bhaiya. It feels like every morning when I watch your videos, my career's slope increases. Thank you for this explanation.
@auroshisray9140 4 years ago
Hats off... grateful for valuable content at zero cost
@koderr100 2 years ago
Now I finally got the key difference between L1 and L2. Thanks a lot!
@HammadMalik 4 years ago
Thanks Krish for explaining the intuition behind Ridge and Lasso regression. Very helpful.
@ganeshrao405 3 years ago
Thank you so much Krish; your videos on Linear Regression, Ridge, and Lasso cleared my concepts.
@dollysiharath4205 1 year ago
You're the best trainer!! Thank you!
@ChandanBehera-jp2me 3 years ago
I found your free videos better than some paid tutorials... thanks for your work
@TheR4Z0R996 4 years ago
Keep up the good work, blessings from Italy, my friend :)
@marijatosic217 4 years ago
Great video! I appreciate how hard he works to help us really understand the material!
@adijambhulkar1742 2 years ago
Hats off... What a way... what a way to explain, man... cleared all doubts
@datafuturelab_ssb4433 2 years ago
Best explanation of Lasso and Ridge regression ever on YouTube... Thanks Krish... You nailed it...
@vishalaaa1 4 years ago
Naik is excellent. He is solving everyone's problems.
@vladimirkirichenko1972 1 year ago
This man has a gift.
@mithunmiranda 1 year ago
I wish I could like his videos multiple times. You are a great teacher, kind sir.
@rishu4225 5 months ago
Thanks, the enthusiasm with which you teach also carries over to us. 🥰
@mumtahinhabib4314 4 years ago
This is where I found the best explanation of ridge regression after searching through a lot of videos and documentation. Thank you sir
@ajithsdevadiga1603 9 months ago
Thank you so much for this wonderful explanation; truly appreciate your efforts in helping the data science community.
@cyborg69420 1 year ago
Just wanted to say that I absolutely loved the video
@gerardogutierrez4911 4 years ago
If you pause the video and just watch his facial and body movements, he looks like he's trying his best to convince you to stay with him during a breakup. Then you turn on the audio, and it's like he's yelling at you to get you to understand something. Clearly, this man is passionate about teaching Ridge regression and knows a lot. I think it's easier to follow when he checks up on you by saying "you need to understand this", repeats words, and uses his voice to emphasize concepts. I wish he could explain other things to me besides data science.
@TheMrIndiankid 4 years ago
He will explain the meaning of life to you too
@MrBemnet1 4 years ago
My next project is counting head shakes in a YouTube video.
@tanmay2771999 3 years ago
@@MrBemnet1 Ngl, that actually sounds interesting.
@juozapasjurksa1400 3 years ago
Your explanations are sooo clear!
@MuhammadAhmad-bx2rw 3 years ago
Extraordinarily talented, sir
@indrasenareddyadulla8490 4 years ago
Sir, you mentioned in your lecture that this concept is complicated, but I never felt it was. You explained it excellently. 👌
@abdulnafihkt4245 2 years ago
Best best best bestttttt class... hats off, man
@fratcetinkaya8538 3 years ago
This is where I finally understood that damn topic. I appreciate it so much, thanks my dear friend :)
@aish_waryaaa 2 years ago
Krish sir, you are literally saving my master's. Up-to-date explanation, and such effort put into helping us understand. Thank you so much sir. 😇🥰
@sincerelysilvia 2 years ago
This is the clearest and best explanation of this topic on YouTube. I can't express how thankful I am for this video for finally making me understand the concept.
@143balug 4 years ago
Hi Krish, you are building up our confidence in data science with your clear explanations.
@nehasrivastava8927 4 years ago
Best tutorials for machine learning with in-depth intuition... I think there is no tutorial on YouTube like this... Thank you sir.
@loganwalker454 3 years ago
Regularization was a very abstruse and knotty topic. However, after watching this video, it is a piece of cake. Thank you, Krish
@BipinYadav-wn1pm 1 year ago
After going through tons of videos, I finally found the best one, thanks!!
@adinathshelke5827 8 months ago
Perfect explanation. I was wandering around for the whole day, and at the end of the day, I found this one.
@Zizou_2014 4 years ago
Brilliantly done! Thanks Krish
@anirbandey8999 3 months ago
Very good video for understanding the intuition behind L1 and L2
@rahul281981 3 years ago
Very nicely explained; thank God I found your posts on YouTube while searching for this stuff 👍
@rayennenounou7065 3 years ago
I have a master's thesis (mémoire de master 2) on lasso regression, and I need more information about lasso regression, but in French. Can you help me?
@moe45673 1 year ago
Thank you! I thought this was a great explanation (as someone who has listened to a bunch of different ones trying to nail down my understanding of this)
@316geek 3 years ago
You make it look so easy, kudos to you Krish!!!
@aravindvasudev7921 1 year ago
Thank you. Now I have a clear idea of both these regression techniques.
@MohsinKhan-rv7jj 2 years ago
This kind of explanation is truly inspirational. I am truly overfitted with knowledge after seeing your video. ❤
@abhishekkumar465 2 years ago
Reduce the rate of learning; this may help you, as per Ridge regression :P
@JEEVANKUMAR-hf4ex 3 years ago
Good explanation without touching any complex maths derivations.
@Sumta555 4 years ago
18:35 How do the features get removed when |slope| is very, very small? Hats off for the fantastic clarity on this topic.
@dhirajkumarsahu999 4 years ago
The regression works in such a way that the coefficients of the less important inputs keep decreasing in every iteration. In the case of ridge regression, the unimportant coefficients decay asymptotically: they come very close to zero but never become exactly zero (look at an exponential-decay graph for reference). This is not the case with lasso regression, which can drive them to exactly zero. Hope this helps.
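A quick way to see this difference empirically: below is a minimal sketch using scikit-learn on made-up synthetic data (the dataset size and the alpha value are illustrative assumptions, not from the video). Ridge leaves small but non-zero coefficients, while Lasso sets some of them to exactly zero.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: 10 features, of which only 3 are actually informative
X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=5.0, random_state=42)

ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=10.0).fit(X, y)

# Ridge shrinks unimportant coefficients toward zero but rarely to exactly zero;
# Lasso drives them to exactly zero, effectively performing feature selection.
print("Ridge coefficients:", np.round(ridge.coef_, 3))
print("Lasso coefficients:", np.round(lasso.coef_, 3))
print("Coefficients zeroed by Lasso:", int(np.sum(lasso.coef_ == 0)))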
@JoseAntonio-gu2fx 4 years ago
Thank you very much for sharing. The effort to clarify the concepts, which are the starting point for solving problems, is much appreciated. Greetings from Spain!
@sridhar7488 3 years ago
Yes, he is a great guy... I also love watching his videos!
@ArunKumar-yb2jn 2 years ago
At 8:15 you say, "a steep slope will always lead to an overfitting case, why? I will just tell you now..." But I couldn't find where you explained this later on.
@somnathpatnaik2277 3 years ago
I have tried four very reputed organizations for courses; all claim faculty from IIT and other high-profile institutes. My feedback: being from IIT doesn't mean you are a good teacher; for teaching, you need passion like yours. When I watch your lectures, I enjoy learning. Thank you
@TheOntheskies 3 years ago
Thank you for the crystal clear explanation. Now I will remember Ridge and Lasso.
@ZubairAzamRawalakot 1 year ago
Very informative lecture, dear. You explained it in maximum detail. Thanks
@yitbarekmirete6098 2 years ago
You are awesome, better than our professors at explaining such complex topics.
@BoyClassicall 4 years ago
Concept well explained. I've watched a lot of videos on Ridge regression, but this is the best explained one, showing mathematically the effect of lambda on the slope.
@Amir-English 7 months ago
You made it so simple! Thank you.
@heplaysguitar1090 3 years ago
Just one word: fantastic.
@gandhalijoshi9242 3 years ago
Very nice explanation. I have started watching your videos, and your teaching style is very nice. A very nice YouTube channel for understanding data science. Hats off!!
@kanhataak1269 4 years ago
After watching this lecture, it is not complicated anymore... good teaching sir
@sahilzele2142 4 years ago
So the basic idea is:
1) A steeper slope leads to overfitting @8:16 (what he basically means is that the overfitted line we have has a steeper slope, which on its own does not justify his general statement).
2) Adding lambda*(slope)^2 increases the value of the cost function for the overfitted line, which leads to a reduction of the slopes, or 'thetas', or m's (they are all the same thing) @10:03.
3) Now that the cost function value for the overfitted line is no longer the minimum, another best-fit line is selected by reducing the slopes, which again shows up in the added lambda*(slope)^2 term; just this time the slope added is smaller @13:45.
4) Doing this overcomes overfitting, as the new best-fit line has less variance (more successful on the test data) and slightly more bias than our previous line @14:10; the bias may be more because it was 0 for the overfitted line, so it will be a bit more for the new line.
5) Lambda can also be called a scaling factor or inflation rate used to control the regularization.
As for the question of what happens if we have an overfitted line with a less steep slope: I think we'll find the best-fit line with an even less steep slope (maybe close to slope ~ 0 but != 0) @16:30, and tadaa!!!! we have reduced overfitting successfully!! Please correct me if anything's wrong.
@faizanzahid490 4 years ago
I have the same queries, bro.
@supervickeyy1521 4 years ago
Regarding the 1st point: what if the test data has the same slope value as the train data? In such a case there won't be overfitting, correct?
@angshumansarma2836 4 years ago
Just remember the 4th point: the main goal of regularization is to generalize better to the test dataset while accepting some errors on the training dataset.
@chetankumarnaik9293 4 years ago
First of all, no linear regression can be built with just two data points. He is not considering degrees of freedom.
@Kmrabhinav569 4 years ago
The basic idea is to use lambda (also known as the regularization parameter) to shrink the lambda*(slope) penalty terms. Here "slope" means the various values of m: if y = m1x1 + m2x2 and so on, we have many values of m(i). We try to adjust lambda so that the extra, unimportant m(i) stop mattering, and hence we can effectively remove them, i.e. remove the extra features from the model. We do this because one of the major causes of overfitting is the addition of extra features; by getting rid of these features, we can curb the problem of overfitting. Hope this helps.
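To make point (2) in the thread above concrete, here is a tiny numeric sketch (the four data points and the two candidate slopes are made up for illustration): with lambda = 0 the steeper line has the lower cost, but as lambda grows, the lambda*(slope)^2 penalty starts to dominate and the flatter line wins.

import numpy as np

# Four training points (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9])

def ridge_cost(m, c, lam):
    residuals = y - (m * x + c)
    return np.sum(residuals**2) + lam * m**2  # squared error + L2 penalty

# Compare a steeper line (m=1.0) against a flatter line (m=0.8)
for lam in [0.0, 1.0, 10.0]:
    steep = ridge_cost(1.0, 0.0, lam)
    flat = ridge_cost(0.8, 0.5, lam)
    print(f"lambda={lam:>4}: steep line cost={steep:.2f}, flat line cost={flat:.2f}")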
@SahanPradeepthaThilakaratne 5 months ago
Your explanations are superb!
@robertasampong 1 year ago
Absolutely excellent explanation!
@askpioneer 2 years ago
Well explained, Krish. Thank you for creating this. Great work
@belllamoisiere8877 2 years ago
Hello from Mexico. Thank you for your tutorials; they are as if one of my classmates were explaining the concepts to me in simple words. A suggestion: please include a short tutorial on ablation of deep learning models.
@aseemjain007 4 months ago
Brilliantly explained!! Thank you!!
@ibrahimibrahim6735 3 years ago
Thanks, Krish. I want to correct one thing here: the motivation behind the penalty is not to change the slope; it is to reduce the model's complexity. For example, consider the following two models:
f1: x + y + z + 2*x^2 + 5*y^2 + z^2 = 10
f2: 2*x^2 + 5*y^2 + z^2 = 15
f1 is more complicated than f2. Clearly, a complicated model has a higher chance of overfitting. By increasing lambda (the complexity factor), we are more likely to end up with a simpler model. Another example:
f1: x + 2y + 10z + 5h + 30g = 100
f2: 10z + 30g = 120
f2 is simpler than f1. If both models have the same performance on the training data, we would rather use f2 as our model, because it is simpler, and a simpler model has less chance of overfitting.
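This complexity story is easy to see in code as well; here is a minimal sketch (the synthetic data and alpha values are illustrative assumptions): as lambda (called alpha in scikit-learn) increases, Lasso forces more coefficients to zero, leaving a simpler model in exactly the sense described above.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic data: 20 features, of which only 4 are actually informative
X, y = make_regression(n_samples=200, n_features=20, n_informative=4,
                       noise=10.0, random_state=0)

# Larger alpha => stronger penalty => fewer non-zero coefficients => simpler model
for alpha in [0.1, 1.0, 10.0, 100.0]:
    model = Lasso(alpha=alpha, max_iter=10000).fit(X, y)
    n_active = int(np.sum(model.coef_ != 0))
    print(f"alpha={alpha:>6}: {n_active} non-zero coefficients")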
@prashanths4455 4 years ago
Krish, an excellent explanation. Thank you so much for this wonderful in-depth intuition.
@dianafarhat9479 8 months ago
Amazing explanation, thank you!
@anshulmangal2755 4 years ago
Sir, a great channel on YouTube for machine learning
@mohammedfaisal6714 4 years ago
Thanks a lot for your support
@MsGeetha123 3 years ago
Excellent video!!! Thanks for a very good explanation.
@maheshurkude4007 4 years ago
Thanks for explaining, buddy!
@abhishekchatterjee9503 4 years ago
You did a great job sir... It helped me a lot in understanding this concept. In 20 minutes I understood the basics of this concept. Thank you 💯💯
@sandipansarkar9211 4 years ago
Great explanation, Krish. I think I am understanding a little bit about L1 and L2 regularization. Thanks
@Bedivine777angelprayer 1 year ago
Thanks. Are there articles I can refer to, or any blogs you recommend? Thanks again, great content
@kanuparthisailikhith 4 years ago
The best tutorial I have seen to date on this topic. Thanks so much for the clarity
@shaurabhsinha4121 3 years ago
Krish, but an equation of a line with the best generalized fit, e.g. y = Mx + c, is still possible with a high M, as the actual data points can be close together and crowded near the Y axis. So a steep slope can't be the criterion by itself.
@yamika. 2 years ago
Thank you for this! Finally understood the topic
@Mars7822 2 years ago
Only one word: brilliance
@sameerpandey5561 3 years ago
Superbly explained
@gunjanagrawal8626 2 years ago
Very well explained! 🙌
@bhuvaraga 2 years ago
Loved your energy, sir, and your conviction to explain and make it clear to your students. I know it is hard to look at the camera and talk; you nailed it. This video really helped me understand the overall concept. My two cents: 1) keep the camera focused on the whiteboard; I think it is autofocusing between you and the whiteboard, and maybe that is why the brightness keeps changing too.
@R_SinghRajput 2 years ago
Beautifully explained sir ☺️
@thulasirao9139 4 years ago
You are doing an awesome job. Thank you so much
@veradesyatnikova2931 2 years ago
Thank you for the clear and intuitive explanation! It will surely come in handy for my exam
@fatriantobong 1 year ago
I think you need to emphasize the high variance on the test data versus the low variance on the training data. The problem with overfitting is that the low error on the training data comes at the expense of high variance on the test data. When the model is exposed to new, unseen data (the test data), it struggles to generalize because it has essentially memorized the noise and intricacies of the training data. This results in a significant difference between the model's predictions and the true values on the test data, indicating high variance on the test data.
@RinkiKumari-us4ej 4 years ago
Best explanation, sir.
@saitcanbaskol9897 2 years ago
Amazing explanations.
@nehabalani7290 3 years ago
Very good and short for people who are clear on basic modeling concepts
@bhooshan25 3 years ago
Understood the whole video very well
@tsrnihar 2 years ago
Small correction: for Lasso regression, it is the sum of the absolute values of the coefficients, multiplied by the regularization parameter. You wrote it as the absolute value of the sum of the coefficients, multiplied by the regularization parameter. It is lambda*(|m1| + |m2| + ...) and not lambda*|m1 + m2 + ...|.
@unpluggedsaurav3186 1 year ago
True
@binnypatel7061 4 years ago
Awesome job... keep up the good work!
@kanavsharma9562 3 years ago
I have watched more than 8 videos and 2-3 articles but didn't get how the lambda value affects the slope. Your video explains it best. Thanks
@dineshpramanik2571 4 years ago
Excellent explanation sir... thanks
@ahmedaj2000 3 years ago
THANK YOU SO MUCH!!!!!!! Great explanation!
@_cestd9727 3 years ago
Super clear, thanks for the video!
@kiran082 4 years ago
Thank you Krish, very detailed explanation
@Markmania510 1 year ago
What a great video, thank you so much for posting. Can anyone tell me what happens if you're overfitting with a smaller slope, i.e. you need to increase the slope to generalize the model more? Is this when you need to make the lambda parameter negative?
@taruchitgoyal3735 9 months ago
Hello Sir, as I understand it, we apply regularization [Ridge and Lasso] to reduce features or to reduce the contribution of features. As you shared, we take the sum of squared residuals and then add the regularization penalty to it. Thus we get a cost value equal to the sum of squared residuals plus the regularization penalty, and a numerical result as the output of the computation. At this point, how do we perform feature selection or control the contribution of each input variable in the feature space? Thank you
@MrLoker121 3 years ago
Good video for beginners; a couple of pointers though: 1. Lasso regression should lead to |m1| + |m2| + |m3| + ..., and not |m1 + m2 + m3 + m4 + ...|. 2. The explanation of why coefficients in L1 regularization go to exactly zero, but not in L2, is missing. It could probably be expanded upon theoretically.
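On pointer 2, one standard way to see it (a sketch in the notation used above, not something covered in the video): for a single coefficient with unregularized least-squares solution m_hat, ridge minimizes (m - m_hat)^2 + lambda*m^2, which gives m* = m_hat / (1 + lambda); the coefficient shrinks as lambda grows but never reaches exactly zero. Lasso minimizes (m - m_hat)^2 + lambda*|m|, which gives the soft-threshold solution m* = sign(m_hat) * max(|m_hat| - lambda/2, 0); this is exactly zero whenever |m_hat| <= lambda/2. The corner of the |m| penalty at zero is what lets L1 eliminate features entirely, while the smooth m^2 penalty of L2 only shrinks them.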