Live Day 2 - Discussing Ridge, Lasso And Logistic Regression Machine Learning Algorithms

167,036 views

Krish Naik

2 years ago

Join the community session at ineuron.ai/course/Mega-Community . All the materials will be uploaded here.
Playlist: kzbin.info/www/bejne/Z2LYn6BondKphbM
The OneNeuron lifetime subscription has been extended.
On the OneNeuron platform you will be able to get 100+ courses (at least 20 courses will be added monthly based on your demand).
Features of the course:
1. You can raise any course demand (fulfilled within 45-60 days).
2. You can access the innovation lab from ineuron.
3. You can use our incubation based on your ideas.
4. Live sessions coming soon (mostly by Feb).
Use coupon code KRISH10 for an additional 10% discount.
And many more.....
Enroll Now
OneNeuron Link: one-neuron.ineuron.ai/
Call our team directly in case of any queries:
8788503778
6260726925
9538303385
866003424

Comments: 121
@keenchkaat1543
@keenchkaat1543 2 years ago
Linear Ridge, Lasso And Logistic Regression:
Part I:
Agenda for the day: 1:47
Previous session recap: 6:03
Cost function: 6:25, 7:47
Regression example: 7:20
Training data: 8:25, 9:02
Overfitting: 9:13, 10:30
Low bias and high variance: 11:45, 19:17
Underfitting: 12:05
High bias and high variance: 13:45, 19:30
Overfitting and underfitting scenarios: 18:20
Ridge and Lasso Regression situation: 22:00, 22:30
Ridge example: 25:38, 29:50
Hyperparameters: 30:00
Lasso Regression: 32:44, 36:00 (uses)
Feature selection: 35:20
Cross validation: 37:00
Quick summary: 37:33, 38:37 (ridge), 39:40 (lasso), 40:16 (purpose of lasso)
Assumptions of Linear Regression: 46:30
Part II:
Logistic Regression: 47:35, 48:10, 50:00 (scenario)
Why not Linear Regression?: 53:15, 57:28
Squash: 59:00
Sigmoid function: 59:39, 1:01:51
Assumptions: 1:02:44
Cost function: 1:09:38, 1:15:00, 1:16:15, 1:19:20
Convex and non-convex function: 1:10:45
Logistic regression algorithm: 1:22:00
Confusion Matrix: 1:29:50
Accuracy: 1:31:39
Imbalanced dataset: 1:33:28
Precision and recall: 1:37:00, 1:37:45, 1:45:00
F score: 1:46:43, 1:47:46 (F0.5 score), 1:48:38 (F2 score)
@narendratiwari4238
@narendratiwari4238 2 years ago
Thanks man
@anuragthakur5787
@anuragthakur5787 2 years ago
Thank you
@pankajkumarbarman765
@pankajkumarbarman765 2 years ago
Thanks man !
@keenchkaat1543
@keenchkaat1543 2 years ago
@@narendratiwari4238 welcome
@ayeshavlogsfun
@ayeshavlogsfun 2 years ago
Thanks
@aesthetic_muscle
@aesthetic_muscle 2 years ago
Very comprehensive and amazing teaching, sir. I can't thank you enough.
@anweshapal8339
@anweshapal8339 2 years ago
Amazing lecture. Can you explain the GLM link function in detail? I feel talking about the range of y and mx+c after conversion would help.
@suriyaprakashkk9365
@suriyaprakashkk9365 2 years ago
The 1st ML session has 247K views, but this 2nd session has only 34K. That is very bad. People always love to start things, but after that they hate to continue them; they don't stick with it. That's why people don't get that many job offers and fail interviews.
@SachinModi9
@SachinModi9 2 years ago
Super explanation of Ridge regression. Fundamentally it's to prevent overfitting: because the cost stays non-zero, the algorithm keeps optimizing the slope value. "Ek teer do nishan" (one arrow, two targets): overfitting is prevented, and the slope is optimized for the new line.
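The shrinkage described here can be sketched with scikit-learn (a toy illustration with made-up data, assuming numpy and scikit-learn are installed): the L2 penalty pulls the fitted slope toward zero relative to plain OLS.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=30)  # true slope = 2

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # L2 penalty added to the cost

# The penalised slope is pulled toward zero relative to plain OLS,
# which is how ridge tames a steep, overfitted slope.
print(ols.coef_[0], ridge.coef_[0])
assert abs(ridge.coef_[0]) < abs(ols.coef_[0])
```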
@ashilshah3376
@ashilshah3376 10 months ago
Thank you so much; these detailed, structured videos are very helpful.
@talkswithRishabh
@talkswithRishabh 2 years ago
Awesome, sir. Really, I want to say thanks for this information presented in a crisp manner. Thanks so much.
@sckrockz
@sckrockz 2 years ago
Please make similar live or recorded videos on the basics of time series forecasting, explaining all the concepts.
@shailuhbd
@shailuhbd 2 years ago
Well explained in a simple way, sir 🙏
@NeeRaja_Sweet_Home
@NeeRaja_Sweet_Home 2 years ago
Hi Krish, are the below steps correct for a regression problem?
1. In a linear regression model, first we will do EDA, feature engineering and data pre-processing, and will split the data into train and test.
2. Create the model using linear regression and evaluate it, e.g. finding the loss and R2 score.
3. If we see high loss, then we have to optimize using gradient descent or stochastic gradient descent to minimize the loss.
4. Finally we have to check the bias-variance trade-off; if the model is overfitting then use L1 regularisation for preventing overfitting, and L2 regularisation for preventing overfitting and feature selection as well.
Thanks,
@nasheeeed
@nasheeeed 2 years ago
L1 regularisation is the Lasso regression that performs feature selection, not L2.
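The correction can be demonstrated numerically (a hedged sketch on synthetic data, assuming scikit-learn is available): the L1 penalty (Lasso) zeroes out irrelevant coefficients, while the L2 penalty (Ridge) only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
# Only the first two of the five features actually matter.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.5).fit(X, y)  # L1 penalty -> sparse coefficients
ridge = Ridge(alpha=0.5).fit(X, y)  # L2 penalty -> small but non-zero

print("lasso:", lasso.coef_)
print("ridge:", ridge.coef_)
assert np.sum(lasso.coef_ == 0) >= 1  # Lasso drops irrelevant features
assert np.all(ridge.coef_ != 0)       # Ridge only shrinks them
```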
@kishansane8107
@kishansane8107 2 years ago
Thanks man ! god bless you
@ammar46
@ammar46 2 years ago
Normal distribution of features is not an assumption of Linear Regression. We want normal distribution to avoid overfitting by outliers.
@symbolstarnongbri3411
@symbolstarnongbri3411 2 months ago
Great work! Krish
@Priyanka_KumariNov
@Priyanka_KumariNov 2 years ago
@krish naik I have gone through multiple sites, and I observe that underfitting is high bias and low variance.
@ravinderbadishagandu2647
@ravinderbadishagandu2647 1 year ago
Thank you, Krish. I am watching your ML algorithm videos again and again to get better.
@retenim28
@retenim28 2 years ago
When I read about Linear Regression, I always see Ordinary Least Squares mentioned as the most used algorithm to find the theta parameters. Why didn't Krish mention it? Is it not important? Can anyone explain?
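For reference, OLS has a closed-form solution via the normal equation; gradient descent converges to the same thetas. A minimal numpy sketch on noise-free synthetic data (all values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1]  # exact linear data, no noise

# OLS closed form (normal equation): theta = (A'A)^-1 A'y,
# where A is X with a bias column of ones prepended.
A = np.column_stack([np.ones(len(X)), X])
theta = np.linalg.solve(A.T @ A, A.T @ y)

print(theta)  # recovers [1, 2, -3]
assert np.allclose(theta, [1.0, 2.0, -3.0])
```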
@naveedarshad6209
@naveedarshad6209 4 months ago
00:27 The main topics of discussion are ridge and lasso regression, logistic regression, and the confusion matrix.
08:25 Overfitting and underfitting are two conditions that affect model accuracy.
22:28 L2 regularization adds a unique parameter or another sample value to minimize the cost function.
27:53 Ridge regularization is used to prevent overfitting by creating a generalized model.
39:15 Preventing overfitting and feature selection are the key purposes of ridge and lasso regression.
45:08 Logistic regression is a classification algorithm.
56:09 Logistic regression is used for binary classification problems with a decision boundary.
1:01:56 Logistic regression is used to create a sigmoid curve that helps in binary classification.
1:13:03 The logistic regression cost function has specific equations for y=1 and y=0.
1:18:35 Logistic regression cost function and convergence algorithm.
1:31:22 Calculation of basic accuracy and imbalanced data.
1:37:06 The main aim of recall is to identify true positives.
1:48:48 The F-score is calculated based on the value of beta.
Crafted by Merlin AI.
@minhaoling3056
@minhaoling3056 2 years ago
I think after your 7-day series on ML, DL, EDA and time series, we can participate in Kaggle competitions. This would be the most efficient way to learn data science! Hope you can do the series for DL and EDA too!
@ammar46
@ammar46 2 years ago
Normal distribution of features is not an assumption of Linear Regression. We want normal distribution to avoid overfitting by outliers.
@saurabhpatel5545
@saurabhpatel5545 11 months ago
@@ammar46 most relevant comment to what @minhaoling3056 said
@kreetibhardwaj5180
@kreetibhardwaj5180 1 year ago
awesome session.. thank you
@gh504
@gh504 2 years ago
Thank you sir.
@prathameshpashte6881
@prathameshpashte6881 2 years ago
Thanks
@shaikshamshunnisha7867
@shaikshamshunnisha7867 1 year ago
Superb explanation sir wonderful 😊
@milanmishra309
@milanmishra309 6 months ago
Low Bias, High Variance (Overfitting): When a model has low bias and high variance, it means that the model is able to fit the training data very well (low bias), but it is overly sensitive to the specific training examples and may not generalize well to new, unseen data (high variance). Overfitting is characterized by capturing noise or random fluctuations in the training data.

To find an optimal model, there is a trade-off between bias and variance. The goal is to strike a balance that minimizes both bias and variance, leading to a model that generalizes well to new data. Techniques such as regularization and cross-validation are commonly used to address overfitting and find a suitable compromise between bias and variance.
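The regularization-plus-cross-validation recipe mentioned above can be sketched like this (a toy example on synthetic data, assuming scikit-learn is installed): cross-validation selects the penalty strength that best balances bias and variance.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))
y = X @ np.array([1.5, 0.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=80)

# 5-fold cross-validation picks the penalty strength (alpha) that
# generalizes best, balancing bias against variance.
model = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0], cv=5).fit(X, y)
print("chosen alpha:", model.alpha_)
assert model.alpha_ in [0.01, 0.1, 1.0, 10.0]
```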
@nikhili9559
@nikhili9559 2 years ago
now I need a pepto bismol after looking at the eqns
@shreedharchavan7033
@shreedharchavan7033 2 years ago
Excellent video
@Coden69
@Coden69 2 years ago
Thanks man
@sumitkumar-jm7yj
@sumitkumar-jm7yj 2 years ago
sir, you are great.
@ayeshavlogsfun
@ayeshavlogsfun 2 years ago
Please cover coding along with the tutorial.
@abhijeet3514
@abhijeet3514 2 years ago
many thanks sir many thanks
@EEBADUGANIVANJARIAKANKSH
@EEBADUGANIVANJARIAKANKSH 2 years ago
There was a small mistake in the explanation for lasso (L1) regression: we are supposed to sum the mod of each slope, not take the mod of the sum of slopes; both are different. In the video you wrote |theta0 + theta1 + theta2 + ... + theta_n|, but the L1 norm should actually be |theta0| + |theta1| + |theta2| + ... + |theta_n|. Hope you get my point. Thank you.
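The difference between the two expressions is easy to check numerically (a tiny sketch with made-up coefficients):

```python
import numpy as np

theta = np.array([2.0, -3.0, 1.0])

l1_norm = np.sum(np.abs(theta))  # |2| + |-3| + |1| = 6  (correct L1 penalty)
mod_of_sum = abs(np.sum(theta))  # |2 - 3 + 1|      = 0  (what the slide showed)

print(l1_norm, mod_of_sum)  # prints: 6.0 0.0
assert l1_norm == 6.0 and mod_of_sum == 0.0
```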
@SwarnaliMollickA
@SwarnaliMollickA 2 years ago
Thanks
@blankftw7388
@blankftw7388 1 year ago
Thank you
@paneercheeseparatha
@paneercheeseparatha 1 year ago
Also, there shouldn't be a 1/2 factor in the logistic regression cost function. 1:22:35
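For reference, the binary cross-entropy cost the comment refers to can be sketched as follows (a minimal example with made-up data; note the 1/m averaging factor only, with no 1/2 as in the squared-error cost):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # J(theta) = -(1/m) * sum( y*log(h) + (1-y)*log(1-h) )
    # Only a 1/m averaging factor -- no 1/2 as in squared error.
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

X = np.array([[1.0, 0.5], [1.0, -0.5]])  # bias column + one feature
y = np.array([1.0, 0.0])
theta = np.zeros(2)

print(cost(theta, X, y))  # log(2) ~ 0.693 for an uninformative model
assert abs(cost(theta, X, y) - np.log(2)) < 1e-9
```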
@VIVEK-ld3ey
@VIVEK-ld3ey 2 years ago
If we square the less significant coefficients, then it would be much better, as squaring would reduce the value further; so according to this particular scenario ridge is better, right?
@ramdasprajapati7884
@ramdasprajapati7884 1 year ago
Lovely one..
@rohanshetty6347
@rohanshetty6347 1 year ago
thank you, sir,
@esotericwanderer6473
@esotericwanderer6473 1 year ago
Please don't confuse learners: "the model should follow a normal distribution" is wrong. It is "the residuals should have a normal distribution". In linear regression the errors are assumed to follow a normal distribution with a mean of zero.
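This can be checked in practice (a sketch on synthetic data with genuinely Gaussian errors, assuming numpy and scipy are available): fit the line, then test the residuals, not the features, for normality.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 4.0 + 1.5 * x + rng.normal(scale=0.5, size=200)  # Gaussian errors, mean 0

# Fit a straight line and look at the residuals.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# The assumption is on the residuals, not on the raw feature or target.
print("residual mean:", residuals.mean())  # ~ 0
stat, p = stats.shapiro(residuals)         # normality test on residuals
print("Shapiro-Wilk p-value:", p)
assert abs(residuals.mean()) < 1e-6
assert p > 0.001  # normality is not rejected for truly Gaussian errors
```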
@piyushbaweja5484
@piyushbaweja5484 1 year ago
@Krish Naik Sir, I am not able to find this content uploaded in the Mega Community course. Please let me know how I can get these slides.
@yashwanthsai9304
@yashwanthsai9304 10 months ago
Bro, please explain in terms of vectors, and how to get solutions of these equations in vector form.
@kalluriramakrishna5732
@kalluriramakrishna5732 1 year ago
Sir, Underfitting means High Bias and Low variance
@rafibasha1840
@rafibasha1840 2 years ago
1:10:01 - Do we get a convex function because of the cost function or because of the sigmoid?
@shrikantdeshmukh7951
@shrikantdeshmukh7951 2 years ago
There is a big myth that the normality assumption is about the dependent feature. In reality the normality assumption is about the residuals (errors), not the features: only if the residuals follow a normal distribution does their sum of squares follow a chi-square distribution, and only then does the ratio MSR/MSE follow an F distribution.
@zaheerbeg4810
@zaheerbeg4810 1 year ago
#Thanks Sir
@sandipansarkar9211
@sandipansarkar9211 2 years ago
finished watching
@shivanibala7708
@shivanibala7708 2 years ago
Can you post a video on Cook's distance and leverage?
@debashiskundu_bcrec_it_6391
@debashiskundu_bcrec_it_6391 2 years ago
In logistic regression, our dependent feature may depend on multiple independent features; how can I deal with this? Thank you.
@vagheeshmk3156
@vagheeshmk3156 7 months ago
You are the Guru........🙏🙏🙏🙏🙏 #KingKrish
@sharemarket7840
@sharemarket7840 2 years ago
Great
@rafibasha4145
@rafibasha4145 2 years ago
Hi Krish, please explain how slopes become 0 in the case of Lasso.
@Ajuppaan
@Ajuppaan 2 years ago
I have a doubt: he mentioned that lasso will do feature selection and ridge can't. The explanation given was that in ridge, squaring the slope will increase it, but not in lasso. My doubt is: if the feature is not important then its slope will be less than one, so its square will be even smaller; it's not going to increase. Then why is ridge ineffective for feature selection? It should give a better result than lasso in that case.
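One way to see why ridge still doesn't select features: in the one-dimensional case both penalties have closed-form solutions, and only lasso's soft threshold can output exactly zero. A sketch (the closed forms below assume a centred, standardised x; the numeric values are made up):

```python
import numpy as np

# One-dimensional closed forms, with rho = x'y and xx = x'x:
#   ridge: w = rho / (xx + alpha)                        -- shrinks, never exactly 0
#   lasso: w = sign(rho) * max(|rho| - alpha, 0) / xx    -- soft threshold, exact 0
def ridge_w(rho, xx, alpha):
    return rho / (xx + alpha)

def lasso_w(rho, xx, alpha):
    return np.sign(rho) * max(abs(rho) - alpha, 0.0) / xx

rho, xx, alpha = 0.3, 1.0, 0.5  # weak feature: |rho| < alpha
print(ridge_w(rho, xx, alpha))  # small but non-zero (~0.2)
print(lasso_w(rho, xx, alpha))  # exactly 0 -> feature dropped
assert ridge_w(rho, xx, alpha) != 0.0
assert lasso_w(rho, xx, alpha) == 0.0
```

So even though squaring a slope below one makes the ridge penalty tiny, the ridge solution only shrinks the weight; it never clips it to exactly zero the way the L1 soft threshold does.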
@subhadeepjash3341
@subhadeepjash3341 27 days ago
Underfitting means high bias and low variance. Please correct it.
@sot_adbu_mne2_pps_spring207
@sot_adbu_mne2_pps_spring207 1 year ago
Please give an example of Lasso Regression.
@catchursam
@catchursam 2 years ago
Great session! Someone please help, I am unable to download the material.
@yogeshsapkal2593
@yogeshsapkal2593 2 years ago
Sir very very nice sir
@kangkankalita5221
@kangkankalita5221 2 years ago
When there is high bias and high variance, predictions will be inconsistent and not accurate; low bias and low variance is always the ideal model. Low bias, high variance: overfitting. High bias, low variance: underfitting.
@sttauras
@sttauras 1 year ago
High bias, high variance: underfitting. If the model performs poorly on train data, how will it perform well on test data? Clearly the model will not be able to generalise well.
@shubhnema1189
@shubhnema1189 1 year ago
Does anybody have notes of this course? It would be very helpful if someone could share them, or say where to access them.
@rafibasha1840
@rafibasha1840 2 years ago
1:02 - What is g(z) here, Krish? Is it the predicted variable y?
@anshikakhandelwal_
@anshikakhandelwal_ 4 months ago
Does anybody have the materials for these live sessions? I tried to find them at the link that's provided, but that isn't working.
@anirbanpatra3017
@anirbanpatra3017 1 year ago
Please update the study materials.
@asurma44
@asurma44 2 years ago
Can I know when the live projects are starting?
@ajaykushwaha4233
@ajaykushwaha4233 2 years ago
Hi Krish, you have taught much better than Sudhansu.
@sunilyadav3098
@sunilyadav3098 2 years ago
Sir, the notes are not available at the given link; it seems to be invalid. Please provide them for practice.
@raghuvarun9541
@raghuvarun9541 7 months ago
Can anyone please post the notes here? I'm unable to open the link, as it has expired.
@shivanshmishra8395
@shivanshmishra8395 4 months ago
Please give the link for the notebook
@solo-ue4ii
@solo-ue4ii 1 year ago
Just have a little doubt here: 41:00 - why didn't we divide the cost function by 2m?
@dikshagupta3276
@dikshagupta3276 2 years ago
In spam classification, why do we use precision?
@bishnusharma9949
@bishnusharma9949 1 year ago
High bias and low variance, for underfitting: 14:26
@anuradhabalasubramanian9845
@anuradhabalasubramanian9845 1 year ago
Hi Krish, are the materials available even now? How do I download them?
@SachinKumar-cn4ps
@SachinKumar-cn4ps 11 months ago
Have you downloaded the material/resources?
@tanmaychakraborty7818
@tanmaychakraborty7818 2 years ago
Please arrange a coding session for ML.
@Sajjad4739
@Sajjad4739 1 year ago
Hi sir, my dataset contains 297 features and 9 prediction classes, and the results with logistic regression are low. Is it because the outcome is not in a binary format that the results are poor?
@its_udaysspecial6198
@its_udaysspecial6198 2 years ago
Hi Krish, I am not able to get into the community forum to get the PDF file which you wrote during the course. Have the documents been removed from the community forum?
@shubhnema1189
@shubhnema1189 1 year ago
Did you get the PDF? I too am unable to get it.
@ultra_legend23
@ultra_legend23 2 years ago
Hi guys, asking this for a requirement I'm working on: how do I reduce the false positives in my model? I'm getting 1700 positive predictions, of which the actual positives are only 46. It would be great if someone could help me. Thanks in advance!
@sanjeevtyagi501
@sanjeevtyagi501 2 years ago
Increase the threshold or cut-off criteria. For example, if the rule is y=1 when the probability is greater than .5, change it to .6, then .7. This will reduce your FPs, though some true positives will turn into FNs.
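In general, the decision cut-off trades false positives against false negatives: raising it makes the model predict positive less often, which lowers FPs. A toy sketch with hypothetical probabilities and labels (made up purely for illustration):

```python
import numpy as np

# Hypothetical predicted probabilities and true labels for a binary problem.
proba = np.array([0.2, 0.4, 0.55, 0.6, 0.8, 0.9])
y_true = np.array([0, 0, 0, 1, 1, 1])

def false_positives(threshold):
    # Predict positive when the probability clears the cut-off.
    y_pred = (proba >= threshold).astype(int)
    return int(np.sum((y_pred == 1) & (y_true == 0)))

# Raising the cut-off makes the model predict positive less often,
# which reduces false positives (at the cost of more false negatives).
print(false_positives(0.5), false_positives(0.7))  # prints: 1 0
assert false_positives(0.7) <= false_positives(0.5)
```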
@chiku18053
@chiku18053 2 years ago
Overfitting and underfitting use
@chiragbhattad890
@chiragbhattad890 1 year ago
Notes are not available on the community.
@ashwinmanickam
@ashwinmanickam 2 years ago
41:52 Assumptions of LR
@jitendranarkhede3819
@jitendranarkhede3819 2 years ago
Sir, where can I get this PDF?
@gunjangandhi4405
@gunjangandhi4405 2 years ago
Are these for freshers?
@d-02-kanchigupta44
@d-02-kanchigupta44 6 months ago
Can someone share the PDF of this series?
@dr.vishwadeepaksinghbaghel3500
@dr.vishwadeepaksinghbaghel3500 1 year ago
linear regression
@sidindian1982
@sidindian1982 1 year ago
14:32 Correction: underfitting occurs if the model or algorithm shows low variance but high bias (in contrast to the opposite, overfitting, which comes from high variance and low bias).
@sttauras
@sttauras 1 year ago
If the model has high bias, how will it have low variance?
@palvinderbhatia3941
@palvinderbhatia3941 2 years ago
Overfitting: good performance on the training data, poor generalization to other data (low bias but high variance). Underfitting: poor performance on the training data and poor generalization to other data (high bias and high variance).
@starab6901
@starab6901 2 years ago
Where are these notes?
@user-yi7dr8ul2h
@user-yi7dr8ul2h 10 months ago
I am unable to get the material.
@hiteshr8514
@hiteshr8514 1 month ago
The notes link is not working.
@magicharshil1730
@magicharshil1730 1 year ago
I completed my boards. Can I join? Is it relevant for me?
@SamBuchl
@SamBuchl 6 months ago
Just published by @Krish Naik, a new video describing Lasso and ElasticNet: kzbin.info/www/bejne/p5OtfKWihN2fgKM - with helpful numerical examples of how feature selection works in Lasso.
@kruan2661
@kruan2661 1 year ago
I see no reason why (hθ(x) - y)^2 for logistic regression is non-convex. 🧐
@annyd3406
@annyd3406 1 year ago
Most important part 1:29:00
@shrikantdeshmukh7951
@shrikantdeshmukh7951 2 years ago
Assumptions of linear regression:
- Linearity
- Normality of errors
- Independence of errors
- No autocorrelation
- Homoscedasticity: residual variance is constant, and the mean of the residuals equals 0
@ammar46
@ammar46 2 years ago
True, Normal distribution of features is not an assumption of Linear Regression. We want normal distribution to avoid overfitting by outliers.
@chiku18053
@chiku18053 2 years ago
Sir, can you explain this in Hindi as well?
@krishnaik06
@krishnaik06 2 years ago
Already uploaded on the Krish Hindi channel.
@data_pathavan4585
@data_pathavan4585 2 years ago
I don't understand how underfitting = high bias and high variance.
@data_pathavan4585
@data_pathavan4585 2 years ago
Please, someone give me a link to read about it.
@pritampatra6077
@pritampatra6077 2 years ago
Underfitting - high bias; overfitting - high variance.
@kunjjani1683
@kunjjani1683 2 years ago
Bias relates to training-data accuracy and variance relates to testing-data accuracy. When we get low accuracy on the training data we have high bias, meaning the data is not fitted correctly; similarly, when we get low accuracy on the testing data we have high variance, meaning the predictions are not accurate. Hope the explanation helps.
@tom-shellby
@tom-shellby 2 years ago
Sir, if logistic regression solves a classification problem, then why is it called logistic regression and not logistic classification?
@rupalacharyya4606
@rupalacharyya4606 2 years ago
Because eventually it's predicting the probability of the dependent variable belonging to a particular class, and hence the output is a continuous variable. That's why it's called logistic regression.
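This can be seen directly with scikit-learn (a toy sketch with made-up data, assuming scikit-learn is installed): predict_proba returns the continuous probability, and predict just thresholds it into a class label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

# The raw output is a continuous probability (hence "regression");
# the class label is just this probability thresholded at 0.5.
p = clf.predict_proba([[0.5]])[0, 1]
print(p)
assert 0.0 < p < 0.5               # well inside the class-0 region
assert clf.predict([[0.5]])[0] == 0
```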
@ashabhumza3394
@ashabhumza3394 2 years ago
@@rupalacharyya4606 thanks, I also had the same confusion....but now it's clear with your explanation 👍
@ayushgupta2537
@ayushgupta2537 2 years ago
Please teach on a white screen.
@TheGuts09
@TheGuts09 2 months ago
Why are you making most of the videos members-only content when they were free before? Is it greed for money now?
@hafizhhasyhari
@hafizhhasyhari 1 year ago
Thanks