Sir, your videos deserve a lot more views than they get. Best content ever!!
@jamitkumar725113 күн бұрын
24:48 On the x-axis, I think it's not alpha; it's W, or for this dataset the slope 'm', the weight.
@siyays18682 жыл бұрын
Your videos really clear everyone's doubts. Hats off to your dedication.
@MWASI-kk8nnАй бұрын
Thank you for the help sir ❤
@ParthivShah11 ай бұрын
Thank You Sir.
@AmitDas-ll4seАй бұрын
🙏Sir, You are my favourite 🙏
@ujjalroy14425 ай бұрын
Awesome sir...
@rockykumarverma9804 ай бұрын
Thank you so much sir🙏🙏🙏
@parthshukla10253 жыл бұрын
Thanks A Lot Sir !!!
@morhadiАй бұрын
Tip: the graphs of Coeff vs Alpha or R2 vs Alpha are better visualised when Alpha is taken on a log scale.
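A minimal sketch of this tip, assuming scikit-learn and matplotlib and using a synthetic dataset (not the one from the video): sweep log-spaced alphas, fit a Lasso at each, and put the alpha axis on a log scale.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic data, stand-in for the video's dataset
X, y = make_regression(n_samples=100, n_features=10, noise=10, random_state=0)

alphas = np.logspace(-3, 2, 50)  # log-spaced values of alpha
coefs = [Lasso(alpha=a, max_iter=10000).fit(X, y).coef_ for a in alphas]

plt.plot(alphas, coefs)
plt.xscale("log")  # the tip: log scale makes the shrinkage visible
plt.xlabel("alpha (log scale)")
plt.ylabel("coefficient value")
plt.savefig("coef_vs_alpha.png")
```

On a linear axis the interesting shrinkage is squashed near the origin; the log scale spreads it out.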
@abhishekkukreja67352 жыл бұрын
Hi Nitish sir, at 10:57 you said all the less impactful coefficients will become 0, but in ridge regression you said that when lambda is increased, it mostly affects the high-impact coefficients. So how, in lasso, are we able to shrink the less impactful coefficients while increasing lambda? Will be looking for your reply, Nitish sir.
@sarveshjoshi2611Ай бұрын
He never said that. Instead he said that in ridge, if you increase lambda, the coefficients tend towards zero but never become exactly zero, and the larger a coefficient's magnitude, the more it is affected.
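That difference is easy to check empirically. A small sketch on synthetic data (assumed setup, not the video's code): even with a heavy penalty, Ridge leaves no coefficient exactly at zero, while Lasso zeroes out the uninformative ones.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# 5 informative features, 15 pure-noise features
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5, random_state=42)

ridge = Ridge(alpha=1000).fit(X, y)
lasso = Lasso(alpha=10, max_iter=10000).fit(X, y)

# Ridge shrinks every coefficient but none lands exactly on zero;
# Lasso drives the unimportant coefficients exactly to zero.
print("ridge zeros:", np.sum(ridge.coef_ == 0))
print("lasso zeros:", np.sum(lasso.coef_ == 0))
```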
@jamitkumar725113 күн бұрын
After 4 videos of ridge, we find out that lasso is preferred over ridge 🥺
@gamesden8021 Жыл бұрын
Sir, your videos are so interesting, but my question is: we get our ridge regression solution at the point where the circle touches the contour plot of the loss function, but won't that point have error, because that point is neither a local minimum nor the global minimum?
@sowmyak3326 Жыл бұрын
Hi sir, thanks a lot for your videos. I really learnt a lot. But I have a small question: should we consider the scale of the independent variables? Wouldn't scale have an impact on the coefficients?
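Yes, scale does matter, because the L1 penalty is applied to the raw coefficient values. A small sketch (assumed synthetic data) where one feature is on a 1000x larger scale: its coefficient is naturally tiny, so the penalty barely touches it, while the small-scale feature is shrunk hard; after standardising, both are treated comparably.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:, 1] *= 1000.0  # second feature on a much larger scale
# Both features contribute equally to y in effect-size terms
y = X[:, 0] + X[:, 1] / 1000.0 + rng.normal(scale=0.1, size=200)

raw = Lasso(alpha=0.5).fit(X, y).coef_
scaled = Lasso(alpha=0.5).fit(StandardScaler().fit_transform(X), y).coef_
print("raw:", raw)
print("scaled:", scaled)
```

On the raw data the first coefficient is shrunk roughly in half while the second (true value 0.001) is almost untouched; on standardised data the two are penalised symmetrically. This is why Lasso is usually fit after feature scaling.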
@ajaykushwaha-je6mw3 жыл бұрын
Awesome video. Sir, one request: please make a video on hyperparameter tuning for L1 and L2, so that we can choose the best values for both.
@campusx-official3 жыл бұрын
Okay. Noted
@SidIndian0822 жыл бұрын
@@campusx-official Sir, the code "Understanding of Lasso Regression Key points" is not downloading. Please help.
@yuktashinde36362 жыл бұрын
THANK YOU GURU
@near_.2 жыл бұрын
Are you doing any project?
@uditjec8587 Жыл бұрын
@25:31 The r2 score is negative, but I thought the r2 score cannot be negative. Then how is it negative here?
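R2 can in fact go negative: it is 1 - SS_res/SS_tot, so any model that predicts worse than simply predicting the mean of y gives SS_res > SS_tot and a negative score. A tiny made-up example:

```python
from sklearn.metrics import r2_score

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [4.0, 3.0, 2.0, 1.0]  # anti-correlated: worse than predicting the mean

# SS_tot = 5, SS_res = 20, so R2 = 1 - 20/5 = -3
score = r2_score(y_true, y_pred)
print(score)
```

This is what happens at 25:31: with a very large alpha the model is over-shrunk and performs worse than the mean baseline, so its R2 on the test set is below zero.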
@anirbanmukherjee85742 жыл бұрын
As per the SVM discussion, lambda is inversely proportional to the alpha value. So as lambda increases, bias should be low, since it will lead to overfitting? Please let me know if my understanding is right or wrong.
@gamesden8021 Жыл бұрын
Sir, my question is: in the previous video you said that if m is higher, it will decrease faster, whereas here you said the less important columns, I think the ones whose m is small, will go to zero faster. Please clear my doubt.
@rohitdahiya66972 жыл бұрын
Why is there no learning-rate hyperparameter in scikit-learn's Lasso/ElasticNet? Since they have a max_iter hyperparameter, that suggests they use gradient descent, yet there is no learning rate among the hyperparameters. If anyone knows, please help me out.
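Scikit-learn's Lasso and ElasticNet are fit by coordinate descent, not gradient descent: each coefficient gets an exact soft-threshold update, so there is no step size to tune, and max_iter just caps the number of sweeps over the coordinates. If you specifically want gradient descent with an explicit learning rate, SGDRegressor with an L1 penalty exposes one. A sketch on assumed synthetic data:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, SGDRegressor

X, y = make_regression(n_samples=100, n_features=5, noise=1, random_state=0)

# Coordinate descent: exact per-coefficient updates, no learning rate needed
cd = Lasso(alpha=0.1, max_iter=1000).fit(X, y)

# Stochastic gradient descent: explicit learning rate via eta0
gd = SGDRegressor(penalty="l1", alpha=0.1, learning_rate="constant",
                  eta0=0.01, max_iter=1000, random_state=0).fit(X, y)

print(cd.coef_)
print(gd.coef_)
```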
@KN-tx7sd2 жыл бұрын
Sir, thank you. You have described the effect of different values of lambda on feature selection. However, for a study with n features, how do we know which lambda neither overfits nor underfits? Is there a standard formula or script that could be used to identify this value of lambda for any study?
@manishnayak97592 жыл бұрын
By using the cross-validation technique you will get the best lambda.
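A minimal sketch of that idea with scikit-learn's built-in LassoCV (assumed synthetic data): it fits the model over a grid of alphas and keeps the one with the best cross-validated score.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10, random_state=1)

# 5-fold cross-validation over an automatically chosen grid of alphas
model = LassoCV(cv=5, random_state=1).fit(X, y)
print("best alpha:", model.alpha_)
```

The selected value is in `model.alpha_`, and the full grid that was searched is in `model.alphas_`.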
@coding_world_live92 жыл бұрын
Thank you, sir.
@GamerBoy-ii4jc3 жыл бұрын
Sir, please make a Telegram or WhatsApp group for student discussion. Thanks!
@SPARSHKUMAR-f4h10 ай бұрын
I have one confusion: shouldn't lambda * ||W||^2 be lambda * (W0^2 + W1^2 + ...), not lambda * (W1^2 + W2^2 + ...)?
@suvithshetty23508 ай бұрын
Consider only the slopes; W0 is the intercept, so you don't have to include it.
@abhirupmukherjee64054 ай бұрын
It's lambda times the summation from i=1 to n of Wi².
@jamitkumar725113 күн бұрын
It's up to you, I think, whether you want to include W0 there.
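For what it's worth, scikit-learn follows the convention in the replies above: the penalty is applied only to the slopes, never to the intercept. A small sketch (assumed synthetic data) where a huge alpha zeroes every coefficient while the intercept stays near the mean of y:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 50.0 + X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=100)

# alpha is enormous, so every slope is soft-thresholded to exactly zero,
# but the unpenalised intercept still tracks the mean of y (~50)
model = Lasso(alpha=100.0).fit(X, y)
print("coefficients:", model.coef_)
print("intercept:", model.intercept_)
```

If the intercept were penalised too, it would also have been shrunk towards zero here.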
@RahulRathour-v3d4 ай бұрын
Why is it called lasso regression?
@DataTalesByMuskan3 ай бұрын
Least Absolute Shrinkage and Selection Operator
@RohanOxob2 жыл бұрын
13:05
@jamitkumar725113 күн бұрын
22:30 Arre, but the variance is already close to 0 😂
@divyanshchaudhary70637 ай бұрын
Sir, please make the notes a bit more organised. I have even taken a paid subscription, and at revision time I can't tell where anything is.