Tutorial 27- Ridge and Lasso Regression Indepth Intuition- Data Science

331,571 views

Krish Naik

1 day ago

Please join my channel as a member to get additional benefits like Data Science materials, live streams for members, and much more.
/ @krishnaik06
#Regularization
⭐ Kite is a free AI-powered coding assistant that will help you code faster and smarter. The Kite plugin integrates with all the top editors and IDEs to give you smart completions and documentation while you’re typing. I've been using Kite for a few months and I love it! www.kite.com/g...
Please do subscribe to my other channel too.
/ @krishnaikhindi
Connect with me here:
Twitter: / krishnaik06
Facebook: / krishnaik06
Instagram: / krishnaik06

Comments: 406
@hipraneth 4 years ago
Lucid explanation, free of cost. Your passion for making the concept crystal clear is very much evident in your eyes... Hats off!!!
@shubhamkohli2535 4 years ago
The only person who is providing this level of knowledge free of cost. Really appreciate it.
@yadikishameer9587 3 years ago
I had never watched your videos, but after this one I regret ignoring your channel. You are a worthy teacher and a data scientist.
@aelitata9662 4 years ago
I was in a crisis learning this topic and all I knew was y=mx+c. I think this is the clearest one I've watched on YouTube. Thank you so much, and I love your enthusiasm when explaining the confusing parts.
@iamfavoured9142 2 years ago
100 years of blessings for you. You just gained a subscriber!
@YouKnowMe-123y 1 year ago
You are helping many ML enthusiasts free of cost... Thank you.
@AnkJyotishAaman 4 years ago
This guy is legit!! Hats off for the explanation!! Loved it sir, thanks.
@tenvillagesahead4192 3 years ago
Brilliant. I searched all over the net but couldn't find such an easy yet detailed explanation of regularization. Thank you very much! Very much considering joining the membership.
@harshstrum 5 years ago
Thank you bhaiya. It feels like every morning when I watch your videos my career slope increases. Thank you for this explanation.
@sahilzele2142 4 years ago
So the basic idea is:
1) A steeper slope leads to overfitting @8:16 (what he basically means is that the overfitted line we have happens to have a steeper slope, which, on the contrary, does not justify his statement).
2) Adding lambda*(slope)^2 increases the value of the cost function for the overfitted line, which leads to a reduction of the slopes, or 'thetas', or m's (they are all the same thing) @10:03.
3) Now that the cost of the overfitted line is no longer the minimum, another best-fit line is selected by reducing the slopes, which again adds lambda*(slope)^2, only this time the slope added is smaller @13:45.
4) Doing this overcomes overfitting, as the new best-fit line has less variance (more successful on unseen data) @14:10; its bias may be a bit more, because the bias was 0 for the overfitted line.
5) Lambda can also be called a scaling factor or inflation rate used to control the regularization.
As for the question of what happens if we have an overfitted line with a less steep slope: I think we'll find a best-fit line with an even less steep slope (maybe close to slope ~0 but != 0) @16:30, and tadaa!!!! we have reduced overfitting successfully!! Please correct me if anything's wrong.
@faizanzahid490 4 years ago
I've got the same queries, bro.
@supervickeyy1521 4 years ago
For the 1st point: what if the test data has the same slope as the train data? In that case there won't be overfitting, correct?
@angshumansarma2836 4 years ago
Just remember the 4th point: the main goal of regularization is to generalize better on the test dataset while accepting some error on the training dataset.
@chetankumarnaik9293 4 years ago
First of all, no linear regression can be built with just two data points. He is not aware of degrees of freedom.
@Kmrabhinav569 4 years ago
The basic idea is to use lambda (also known as the regularization parameter) to control the penalty term lambda*(slope). Here 'slope' stands for the various values of m: if y = m1x1 + m2x2 and so on, we have many values of m(i). We try to adjust lambda so that the existence of those extra m(i) doesn't matter, and we can then effectively remove them, i.e. remove the extra features from the model. One of the major causes of overfitting is the addition of extra features, so by getting rid of these features we can curb the problem of overfitting. Hope this helps.
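A minimal sketch of that shrinkage, assuming scikit-learn is available; the two-point training set below is invented for illustration (sklearn calls lambda "alpha"):

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    # Two training points, like the two-point example discussed in this thread.
    X_train = np.array([[1.0], [2.0]])
    y_train = np.array([1.0, 3.0])

    # Plain least squares fits both points exactly: slope 2, zero training error.
    ols = LinearRegression().fit(X_train, y_train)
    print("OLS slope:", ols.coef_[0])

    # Adding lambda*(slope)^2 to the cost makes a steep slope expensive,
    # so the fitted slope shrinks as lambda grows.
    for alpha in [0.1, 1.0, 10.0]:
        ridge = Ridge(alpha=alpha).fit(X_train, y_train)
        print("alpha =", alpha, "-> slope =", round(ridge.coef_[0], 3))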
@Sumta555 4 years ago
18:35 How do the features get removed when |slope| is very, very small? Hats off for the fantastic clarity on this topic.
@dhirajkumarsahu999 4 years ago
The regression works in such a way that the coefficients of the less important inputs keep decreasing in every iteration. In the case of ridge regression, the unimportant coefficients decrease asymptotically: they come very close to zero but never become exactly zero (look at an exponential decay graph for reference). This is not the case with lasso regression, which can drive them exactly to zero. Hope this helps.
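One way to see that difference numerically, as a hedged sketch assuming scikit-learn and a synthetic dataset (nothing below comes from the video itself):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso, Ridge

    # Synthetic data: 10 features, only 3 of which actually drive the target.
    X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                           noise=5.0, random_state=0)

    ridge = Ridge(alpha=10.0).fit(X, y)
    lasso = Lasso(alpha=10.0).fit(X, y)

    # Ridge shrinks the unimportant coefficients toward zero but rarely hits it;
    # lasso sets several of them exactly to zero (implicit feature selection).
    print("ridge exact zeros:", int(np.sum(ridge.coef_ == 0.0)))
    print("lasso exact zeros:", int(np.sum(lasso.coef_ == 0.0)))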
@gerardogutierrez4911 4 years ago
If you pause the video and just watch his facial and body movements, he looks like he's trying his best to convince you to stay with him during a breakup. Then you turn on the audio and it's like he's yelling at you to get you to understand something. Clearly, this man is passionate about teaching ridge regression and knows a lot. I think it's easier to follow when he checks up on you by saying you need to understand this, and repeats words and uses his voice to emphasize concepts. I wish he could explain other things to me besides data science.
@TheMrIndiankid 4 years ago
He will explain you the meaning of life too.
@MrBemnet1 4 years ago
My next project is counting head shakes in a YouTube video.
@tanmay2771999 3 years ago
@@MrBemnet1 Ngl, that actually sounds interesting.
@ArunKumar-yb2jn 3 years ago
At 8:15 you say, "a steep slope will always lead to an overfitting case, why? I will just tell you now..." But I couldn't find where you explained this later on.
@ganeshrao405 3 years ago
Thank you so much Krish; your videos on linear regression + ridge + lasso cleared my concepts.
@auroshisray9140 4 years ago
Hats off... grateful for valuable content at zero cost.
@marijatosic217 4 years ago
Great video! I appreciate how hard he works to help us really understand the material!
@ChandanBehera-jp2me 3 years ago
I found your free videos better than some paid tutorials... thanks for your work.
@TheR4Z0R996 4 years ago
Keep up the good work, blessings from Italy, my friend :)
@HammadMalik 4 years ago
Thanks Krish for explaining the intuition behind ridge and lasso regression. Very helpful.
@datafuturelab_ssb4433 2 years ago
Best explanation of lasso and ridge regression ever on YouTube... Thanks Krish... You nailed it...
@mumtahinhabib4314 4 years ago
This is where I found the best explanation of ridge regression after searching through a lot of videos and documentation. Thank you sir.
@somnathpatnaik2277 3 years ago
I have tried 4 very reputed organizations for courses; all claim faculty from IIT and other high-profile names. My feedback: being from IIT doesn't mean you are a good teacher; teaching needs passion like yours. When I watch your lectures I enjoy learning. Thank you.
@indrasenareddyadulla8490 4 years ago
Sir, you mentioned in your lecture that this concept is complicated, but I never felt it was. You have explained it excellently.👌👌👌👌👌
@koderr100 2 years ago
Now I finally got the key L1 and L2 difference. Thanks a lot!
@aish_waryaaa 2 years ago
Krish sir, you are literally saving my master's; up-to-date explanations, and the effort you put in to help us understand. Thank you so much sir.😇🥰
@mithunmiranda 1 year ago
I wish I could like his videos multiple times. You are a great teacher, kind sir.
@sincerelysilvia 2 years ago
This is the clearest and best explanation of this topic on YouTube. I can't express how thankful I am for this video; I finally understand the concept.
@rasengan4480 3 months ago
5:20 you're moving your hands like a rapper, sir.
@dollysiharath4205 1 year ago
You're the best trainer!! Thank you!
@adijambhulkar1742 2 years ago
Hats off... What a way... what a way to explain, man... Cleared all doubts.
@vishalaaa1 4 years ago
This Naik is excellent. He is solving everyone's problems.
@nehasrivastava8927 4 years ago
Best tutorials for machine learning with in-depth intuition... I think there is no tutorial on YouTube like this... Thank you sir.
@ajithsdevadiga1603 11 months ago
Thank you so much for this wonderful explanation; I truly appreciate your efforts in helping the data science community.
@BoyClassicall 5 years ago
Concept well explained. I've watched a lot of videos on ridge regression, but this is the best explained one; it shows mathematically the effect of lambda on the slope.
@rishu4225 6 months ago
Thanks; the enthusiasm with which you teach carries over to us. 🥰
@143balug 4 years ago
Hi Krish, you are building our confidence in data science with these clear explanations.
@loganwalker454 3 years ago
Regularization was a very abstruse and knotty topic. However, after watching this video, it is a piece of cake. Thank you, Krish.
@abdulnafihkt4245 2 years ago
Best best best bestttttt class... hats off maaan.
@fratcetinkaya8538 3 years ago
This is where I finally understood that damn issue. I appreciate it so much; thanks, my dear friend :)
@ibrahimibrahim6735 4 years ago
Thanks, Krish. I want to correct one thing here: the motivation behind the penalty is not to change the slope; it is to reduce the model's complexity. For example, consider the following two models:
f1: x + y + z + 2x^2 + 5y^2 + z^2 = 10
f2: 2x^2 + 5y^2 + z^2 = 15
f1 is more complicated than f2. Clearly, a complicated model has a higher chance of overfitting. By increasing lambda (the complexity factor), we are more likely to end up with a simpler model. Another example:
f1: x + 2y + 10z + 5h + 30g = 100
f2: 10z + 30g = 120
f2 is simpler than f1. If both models have the same performance on the training data, we would like to use f2 as our model, because it is simpler, and a simpler model has less chance of overfitting.
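A small sketch of that complexity idea, assuming scikit-learn; the polynomial toy problem below is invented for illustration:

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(100, 2))
    # The true model uses only two terms: 2*x0^2 + 0.5*x1.
    y = 2 * X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.1, 100)

    # Expand to all polynomial terms up to degree 3 (a deliberately complex model).
    X_poly = PolynomialFeatures(degree=3, include_bias=False).fit_transform(X)

    # A larger lambda (alpha in sklearn) keeps fewer terms, i.e. a simpler model.
    for alpha in [0.001, 0.1, 1.0]:
        model = Lasso(alpha=alpha, max_iter=10000).fit(X_poly, y)
        kept = int(np.sum(model.coef_ != 0.0))
        print("alpha =", alpha, "->", kept, "of", len(model.coef_), "terms kept")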
@JoseAntonio-gu2fx 4 years ago
Thank you very much for sharing. The effort to clarify the concepts, which are the starting point for solving problems, is much appreciated. Greetings from Spain!
@sridhar7488 3 years ago
Yes, he's a great guy... I also love watching his videos!
@cyborg69420 1 year ago
Just wanted to say that I absolutely loved the video.
@adinathshelke5827 10 months ago
Perfect explanation. I was wandering around for the whole day, and at the end of the day I found this one.
@fatriantobong 1 year ago
I think you need to emphasize the high variance toward the test data and the low variance toward the training data. The problem with overfitting is that this low variance on the training data comes at the expense of high variance on the test data. When the model is exposed to new, unseen data (the test data), it struggles to generalize because it has essentially memorized the noise and intricacies of the training data. This results in a significant difference between the model's predictions and the true values on the test data, indicating high variance on the test data.
@juozapasjurksa1400 3 years ago
Your explanations are sooo clear!
@αλήθεια-σ4κ 4 years ago
@6:20 If you understand linear regression, start here.
@moe45673 1 year ago
Thank you! I thought this was a great explanation (as someone who has listened to a bunch of different ones trying to nail down my understanding of this).
@rahul281981 3 years ago
Very nicely explained; thank God I found your posts on YouTube while searching for this stuff👍
@rayennenounou7065 3 years ago
I am writing a master's 2 thesis (mémoire) on lasso regression and I need more information about lasso regression, but in French. Can you help me?
@BipinYadav-wn1pm 2 years ago
After going through tons of videos, I finally found the best one, thanks!!
@gandhalijoshi9242 3 years ago
Very nice explanation. I have started watching your videos and your teaching style is very nice. A very nice YouTube channel for understanding data science. Hats off!!
@Zizou_2014 4 years ago
Brilliantly done! Thanks Krish.
@prashanths4455 4 years ago
Krish, an excellent explanation. Thank you so much for this wonderful in-depth intuition.
@abhishekchatterjee9503 4 years ago
You did a great job sir... It helped me a lot in understanding this concept. In 20 min I understood the basics of this concept. Thank you💯💯
@TheOntheskies 3 years ago
Thank you for the crystal clear explanation. Now I will remember ridge and lasso.
@creatorsayanb 3 years ago
11:34 there is a symbolic error. It is > instead of <.
@fusionarun 2 years ago
Yes, that's right. Even I was wondering the same.
@rachitsarin6706 4 years ago
Dear, I would like to understand one thing: if my data requires the slope to be steeper, how will this concept work in that case?
@devmani100 4 years ago
Hello Rachit, you have to check whether your model is overfitting the data or not. Ridge (or lasso) regression is used when we want to tackle overfitting in linear regression; these two techniques are used to reduce the high variance of the linear model. If your model requires high steepness, then, assuming the model is a general model that is not overfitting the training data, you can go with simple linear regression; you don't need any type of regularization.
@oscarpo2979 3 years ago
@@devmani100 I have to disagree with you, because you could still have a model that is overfitting the data but where y has a higher slope than y-hat! Plus, if you flip the x and y axes of the equation, which would give a reciprocal slope, ridge regression would essentially be increasing the slope of the data and penalizing low values.
@arunnandam4636 3 years ago
@@oscarpo2979 Could you explain it clearly? Are you saying y is the slope of the simple LR model and y-hat is the slope of ridge regression?
@MuhammadAhmad-bx2rw 3 years ago
Extraordinarily talented, sir.
@bhuvaraga 2 years ago
Loved your energy, sir, and your conviction to explain and make it clear to your students. I know it is hard to look at the camera and talk; you nailed it. This video really helped me understand the overall concept. My two cents: keep the camera focused on the whiteboard. I think it is autofocusing between you and the whiteboard, and maybe that is also why the brightness keeps changing.
@vladimirkirichenko1972 2 years ago
This man has a gift.
@yitbarekmirete6098 2 years ago
You are awesome, better than our professors at explaining such complex topics.
@JEEVANKUMAR-hf4ex 3 years ago
Good explanation without touching any complex math derivations.
@ankitchoudhary5585 3 years ago
@Krish Naik These penalties make the slopes smoother, not zero. The slopes are nothing but the coefficients of the features (x1, x2, x3, ...), and we are trying to reduce the impact of features whose coefficients are high relative to the others. In the case of multicollinearity, features that are highly correlated tend to confuse gradient descent while it minimizes the error function, which results in high coefficient values for the correlated features; those high values then tell the model that these are the most important deciding features for the target column, which is a wrong conclusion (there are many reasons why some coefficients end up high relative to the others). So by adding the new term (lambda × sum of squares of the coefficients) to the error function (which gradient descent will eventually minimize), we are telling gradient descent to take care of the coefficients that are very high, even if we have to lose some training accuracy. We want a generalized model, not a model that shows the best training accuracy but is a poor predictor on new, unseen data. statisticsbyjim.com/regression/multicollinearity-in-regression-analysis/
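A hedged illustration of that multicollinearity point, assuming scikit-learn; the near-duplicate feature setup below is synthetic:

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(42)
    x1 = rng.normal(size=300)
    x2 = x1 + rng.normal(scale=0.01, size=300)    # almost a copy of x1
    X = np.column_stack([x1, x2])
    y = 3 * x1 + rng.normal(scale=0.5, size=300)  # only x1 truly drives y

    # With near-duplicate columns, plain OLS can assign unstable, offsetting
    # coefficients; ridge pulls both toward a stable shared value (about 1.5 each).
    print("OLS coef:  ", LinearRegression().fit(X, y).coef_)
    print("Ridge coef:", Ridge(alpha=1.0).fit(X, y).coef_)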
@316geek 3 years ago
You make it look so easy; kudos to you, Krish!!
@ZubairAzamRawalakot 1 year ago
Very informative lecture, dear. You explained with maximum detail. Thanks.
@anirbandey8999 5 months ago
Very good video for understanding the intuition behind L1 and L2.
@MohsinKhan-rv7jj 2 years ago
This kind of explanation is truly inspirational. I am truly overfitted with knowledge after seeing your video.❤
@abhishekkumar465 2 years ago
Reduce the learning rate; that may help you, as per ridge regression :P
@vijaypalmanit 4 years ago
At 10:28, why would the slope of the line only decrease? It could also increase, since there could be another fitted line with a higher slope giving the same error; basically, there could be two lines with different slopes that still give the same cost/loss for those two points. So while explaining, why did you assume the new line would be the one with the lower slope?
@aravindvasudev7921 1 year ago
Thank you. Now I have a clear idea of both these regression techniques.
@shaurabhsinha4121 3 years ago
Krish, but the equation of the line with the best generalized fit, e.g. y = Mx + c, can have a high M, as the actual data points can be close together and crowded near the y-axis. So a steep slope can't be the criterion.
@askpioneer 2 years ago
Well explained, Krish. Thank you for creating this. Great work.
@kanhataak1269 4 years ago
After watching this lecture, it is not complicated... good teaching, sir.
@belllamoisiere8877 2 years ago
Hello from México. Thank you for your tutorials; they are as if one of my classmates were explaining concepts to me in simple words. A suggestion: please include a short tutorial on ablation of deep learning models.
@mohammedfaisal6714 5 years ago
Thanks a lot for your support.
@kanavsharma9562 3 years ago
I have watched more than 8 videos and 2-3 articles but didn't get how the lambda value affects the slope; your video explains it best. Thanks.
@heplaysguitar1090 3 years ago
Just one word: fantastic.
@anshulmangal2755 4 years ago
Sir, a great channel on YouTube for machine learning.
@kanuparthisailikhith 4 years ago
The best tutorial I have seen to date on this topic. Thanks so much for the clarity.
@maheshurkude4007 4 years ago
Thanks for explaining, buddy!
@Amir-English 9 months ago
You made it so simple! Thank you.
@lavanyasenthilkumar4814 2 years ago
Krish, thanks for the clear explanation. I have a doubt: if the test data points are below the best-fit line we selected earlier, that is fine. But what if the test data points are above the best-fit line? In that case, applying ridge regression would still try to reduce the slope, and we may not end up with the best fit. Can I get help with this scenario? Thanks in advance!
@achilles2289 1 year ago
I have the same doubt. I hope he explains this.
@aseemjain007 6 months ago
Brilliantly explained!! Thank you!!
@tsrnihar 2 years ago
Small correction: for lasso regression, it is the sum of the absolute values of the coefficients multiplied by the regularization parameter; you wrote it as the absolute value of the sum of the coefficients multiplied by the regularization parameter. It is lambda*(|m1| + |m2| + ...) and not lambda*|m1 + m2 + ...|.
@unpluggedsaurav3186 1 year ago
True.
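The two expressions really do differ; a quick plain-Python check (the coefficient values are invented for illustration):

    # lambda * (|m1| + |m2| + |m3|) vs lambda * |m1 + m2 + m3|
    m = [2.0, -3.0, 1.0]
    lam = 0.5
    correct = lam * sum(abs(mi) for mi in m)  # 0.5 * (2 + 3 + 1) = 3.0
    wrong = lam * abs(sum(m))                 # 0.5 * |0| = 0.0
    print(correct, wrong)  # the wrong form lets opposite-signed slopes cancel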
@ENGMESkandaSVaidya 3 years ago
Lasso regression does feature selection, which is an extra thing it does compared to ridge regression.
@binnypatel7061 4 years ago
Awesome job... keep up the good work!
@Bedivine777angelprayer 1 year ago
Thanks. Are there articles I can refer to, or any blogs you recommend? Thanks again, great content.
@SahanPradeepthaThilakaratne 7 months ago
Your explanations are superbbb!
@MsGeetha123 3 years ago
Excellent video!!! Thanks for a very good explanation.
@sandipansarkar9211 4 years ago
Great explanation, Krish. I think I am understanding a little bit about L1 and L2 regression. Thanks.
@nehabalani7290 3 years ago
Too good and short for people who are clear on basic modeling concepts.
@robertasampong 1 year ago
Absolutely excellent explanation!
@MrLoker121 3 years ago
Good video for beginners; a couple of pointers though:
1. Lasso regression would lead to |m1| + |m2| + |m3| + ..., not |m1 + m2 + m3 + m4 + ...|.
2. The explanation of why coefficients in L1 regularization go to zero, but not in L2, is missing. You could probably expand on it theoretically.
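For pointer 2, a standard one-coefficient argument (a textbook result, not something shown in the video): with a single coefficient whose least-squares estimate is $\hat m$, the two penalized problems have closed-form minimizers

$$\text{ridge: } \min_m\,(m-\hat m)^2 + \lambda m^2 \;\Rightarrow\; m^* = \frac{\hat m}{1+\lambda},$$

$$\text{lasso: } \min_m\,(m-\hat m)^2 + \lambda\lvert m\rvert \;\Rightarrow\; m^* = \operatorname{sign}(\hat m)\,\max\!\left(\lvert \hat m\rvert - \frac{\lambda}{2},\, 0\right).$$

Ridge only divides by $1+\lambda$, so a nonzero $\hat m$ never reaches exactly zero, while the lasso soft-threshold snaps $m^*$ to exactly zero as soon as $\lvert \hat m\rvert \le \lambda/2$.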
@satyamchatterjee1074 4 years ago
Sir, what if the best-fit line connecting the two initial train points is already at a lower slope, say [(1,2) and (9,3)]? Then we would need to increase the slope to avoid overfitting. How does the penalizing concept help in that case?
@ankitchoudhary5585 3 years ago
These penalties make the slopes smoother, not zero. The slopes are the coefficients of the features (x1, x2, x3, ...), and we are trying to reduce the impact of features whose coefficients are high relative to the others. In the case of multicollinearity, highly correlated features tend to confuse gradient descent while it minimizes the error function, resulting in high coefficient values for the correlated features, which wrongly suggests they are the most important deciding features for the target column. By adding the new term (lambda × sum of squares of the coefficients) to the error function, we tell gradient descent to take care of the coefficients that are very high, even at the cost of some training accuracy: we want a generalized model, not one that shows the best training accuracy but predicts poorly on new, unseen data.
@ЕвгенийКузнецов-щ3д 2 years ago
Thanks for the comment. Indeed, in the case of multicollinearity, coefficients tend to be high in the absence of regularization. What are the other reasons for high coefficients? It seems that in the case of independent features there is no need for regularization.
@dianafarhat9479 10 months ago
Amazing explanation, thank you!
@rajk58 4 years ago
You, sir, are amazing!!! Hats off to you!!
@VishalPatel-cd1hq 4 years ago
Hi Krish, here we are adding the regularization term to our loss function, and this term is always positive, since lambda is greater than 1 and we square the slope or take its absolute value. So how can we say that it is penalizing? In fact, it is adding some positive value to the loss.
@veradesyatnikova2931 3 years ago
Thank you for the clear and intuitive explanation! It will surely come in handy for my exam.
@yamika. 2 years ago
Thank you for this! I finally understood the topic.
@sidduhedaginal 4 years ago
Just an awesome explanation; the concepts are very clearly explained... thanks for your true effort.
@srtvenkat215 4 years ago
Could you please share the link to the full playlist?
@sakshargupta875 3 years ago
Maybe you could use a different colour for highlighting the text before or after editing. That would be helpful and easy to grasp.