"Did you like the video?" Really? It's practically a synonym for perfection. I'm from a non-IT background and you made it easier and more understandable than I could ever have imagined. Thank you so much for your efforts, guruji 🙏
@DharmendraKumar-DS · 1 year ago
Nice explanation... but where can I find the practical implementation video for this algorithm?
@arpanbiswas5899 · 11 months ago
Hi Krish, I think the cost function should be (1/2m) times the summation from i=1 to m of the squared errors, where m is the batch size, while the ridge and lasso penalty terms should be summed from j=1 to n, where n is the number of features in the dataset. Could you please confirm?
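For anyone who wants to sanity-check that form numerically, here is a minimal NumPy sketch of the elastic net cost under those assumptions: a plain linear model with no intercept, the data term scaled by 1/(2m) over the m samples, and hypothetical penalty weights l1 and l2 summed over the n coefficients (the video's exact symbols may differ).

```python
import numpy as np

def elastic_net_cost(X, y, w, l1, l2):
    """Squared-error term scaled by 1/(2m) plus L1 and L2 penalties
    summed over the n coefficients (no intercept, for simplicity)."""
    m = X.shape[0]                        # number of samples in the batch
    residuals = X @ w - y                 # shape (m,)
    data_term = np.sum(residuals ** 2) / (2 * m)
    l1_term = l1 * np.sum(np.abs(w))      # lasso-style penalty over n features
    l2_term = l2 * np.sum(w ** 2)         # ridge-style penalty over n features
    return data_term + l1_term + l2_term

# Tiny made-up example: 5 samples, 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = rng.normal(size=5)
print(elastic_net_cost(X, y, w=np.zeros(3), l1=0.1, l2=0.1))
```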
@vivekjuwar · 2 years ago
Good job 👍 bro... for people like us with limited English, this channel is really great ❤
@gupta15yash · 2 years ago
Hi Krish, in the lasso video we saw that lasso 1) reduces overfitting and 2) performs feature selection, and elastic net does the same. So why do we need elastic net?
Hi Krish, well explained. Thank you for making the concepts easy to pick up. I have 2 doubts: 1. You explained that, with correlation, the slope is reduced to 0. How? If I'm not wrong, |slope| always gives a positive value, e.g. |-1| = |1| = 1. 2. You explained that we use LASSO to reduce overfitting and to do feature selection, and we do elastic net for the same reasons. Then what is the need for elastic net? It only makes things more complex and takes more computational power. Hope you will clear my doubts. By the way, thanks for such a video.
@JackCoderr · 11 months ago
We use ElasticNet because it tends to keep groups of correlated features together (better than lasso regression), i.e. it handles multicollinearity better.
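A rough scikit-learn sketch of that grouping effect, on made-up data with two almost identical features; the alpha and l1_ratio values are arbitrary and chosen only for illustration, not taken from the video.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Toy data: x1 and x2 are almost perfectly correlated, x3 is irrelevant.
rng = np.random.default_rng(42)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
y = 3 * x1 + 3 * x2 + rng.normal(scale=0.1, size=200)

lasso = Lasso(alpha=0.5).fit(X, y)
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)

# Lasso tends to keep one of the correlated pair and zero out the other,
# while ElasticNet tends to share the weight between them (the grouping effect).
print("Lasso:     ", np.round(lasso.coef_, 2))
print("ElasticNet:", np.round(enet.coef_, 2))
```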
@ravipaliwal4041 · 2 months ago
same doubt
@khalidal-reemi3361 · 2 years ago
Dear Krish, you are considered an international source for AI and data science, and your followers are from all over the world, yet you excluded a wide slice of your followers in this video. All respect.
@krishnaik06 · 2 years ago
I have made the video in English also
@ajaykushwaha4233 · 2 years ago
Very nice, sir. Please structure this playlist so that it's theory one day and the practical the next.
@kishormagar3160 · 2 years ago
Very nice sir 👍
@M.HuzaifaManzoor · 1 year ago
Sir, in this you used the mean squared error function and also divided it by 2 at the start, but I thought dividing by 2 is done with the squared error function. I'm confused because of this, kindly explain it in an answer.
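For reference, a short derivation of why that factor of 1/2 is harmless, written in the usual gradient-descent notation (which may differ slightly from the video): the 1/2 exactly cancels the 2 produced by differentiating the square, and scaling a cost by a positive constant does not move its minimum.

```latex
J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^{2}
\quad\Longrightarrow\quad
\frac{\partial J}{\partial \theta_j}
  = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)}
```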
@pkumar0212 · 3 months ago
👌
@jagatkrishna1543 · 2 years ago
Thanks Sir 🙏💕
@pranabsarma18 · 2 years ago
Hi Krish, may I know when you will implement the algorithms discussed in this ML playlist using Python?
@krishnaikhindi · 2 years ago
Soon
@alirathore6818 · 2 years ago
@@krishnaikhindi In the ridge & lasso regression in-depth math intuition video you said lasso prevents overfitting as well as helps with feature selection. Now here you are saying lasso helps only with feature selection?
@DharmendraKumar-DS · 1 year ago
@@alirathore6818 Bro, lasso also reduces overfitting, since ultimately we are penalizing the cost function of linear regression...
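A small scikit-learn sketch of that point, on made-up data where only the first two of ten features actually matter; the alpha value is arbitrary. The L1 penalty both shrinks the useful coefficients (less overfitting) and drives the irrelevant ones exactly to zero (feature selection).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

# Toy data: 10 features, but only the first 2 influence y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 4 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=100)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.3).fit(X, y)

print("OLS:  ", np.round(ols.coef_, 3))    # dense: every feature gets some weight
print("Lasso:", np.round(lasso.coef_, 3))  # shrunk, irrelevant features exactly 0
```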