For the past two days, I had been watching different videos and reading articles to understand the core of ridge regression. I was getting tired because I wasn't understanding it. And here I am, halfway through this video, and I think I've got the grasp. It would be unfair to say the previous content didn't help me at all, but your lecture is so much more insightful than those. Thank you very much for sharing your learnings with us.
@UtkarshSingh-qw6md · 2 months ago
Never ever have I seen such brilliance in teaching.
@TomatoPotato874 · 9 months ago
I touch your feet in respect, sir 🙏
@youbeyou9769 · 1 year ago
Omg this is brilliant. Exactly what I've been looking for. Thanks for making our lives easier
@quicksciencecalculation5676 · 2 years ago
Well explained, sir. Please, I request you, never stop.
@1234manasm · 2 years ago
You look like a genius and teach like a professor.
@23injeela79 · 8 months ago
Very well explained. Sir, you have put a lot of hard work into your lectures. Keep going.
@yuktashinde3636 · 2 years ago
At 10:51, why did we consider training data points during testing? But yes, there are other training points that will contribute y − ŷ values.
@ghostofuchiha8124 · 11 months ago
Krish Naik Hindi has explained this better; among the rest so far, CampusX seems good.
@darshedits1732 · 11 months ago
Has Krish Naik explained only this algorithm better, or all the algorithms?
@SubhamKumar-bt9th · 4 months ago
I watched his video just now after reading your comment; it's mostly the same, nothing better. Even he has not explained the "why": what if the incorrect fit line is already on the right side and the imaginary true fit is on the left? Then ridge will shift it further right, away from the true fit.
@mustafizurrahman5699 · 1 year ago
Mesmerising, such a lucid explanation.
@krishnakanthmacherla4431 · 2 years ago
You are a game changer sir
@rahmankhan7303 · 2 months ago
This is the first video whose concept wasn't fully clear to me, but still a great video, sir.
@sujithsaikalakonda4863 · 2 years ago
Great explanation sir.
@ParthivShah · 10 months ago
Thank You Sir.
@RAKESHADHIKARI-h4b · 6 months ago
Awesome series, brother. Great work done by you. Looking forward to the MLOps videos.
@UtkarshSinghchutiyaNo1 · 2 months ago
Sir, you are a legend.
@Ishant875 · 1 year ago
I appreciate your work, and no one can teach like you, but there is just one thing: overfitting doesn't mean a high slope in simple linear regression. Overfitting means you have used a very complex model which is not able to generalise to new data outside the training set. Simple linear regression is the simplest model, so there can't be overfitting in it; there can only be underfitting.
@kindaeasy9797 · 8 months ago
Exactly. Overfitting simply can't happen in simple linear regression, because the line will not bend to pass through each and every data point in the training data set. Yes, it can underfit, or be a best fit.
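(A minimal sketch of the capacity argument in this thread, on synthetic data; all values are assumed purely for illustration. A straight line cannot bend through every training point, while an unconstrained high-degree polynomial can.)

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 8)
y = 2 * x + rng.normal(scale=0.3, size=8)      # noisy linear data

line = np.polyfit(x, y, deg=1)    # simple linear regression: 2 parameters
wiggly = np.polyfit(x, y, deg=7)  # degree 7 through 8 points: interpolates them

print(np.sum((np.polyval(line, x) - y) ** 2))    # nonzero training error
print(np.sum((np.polyval(wiggly, x) - y) ** 2))  # ~0: memorised the noise
```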
@arpitakar3384 · 5 months ago
God of ML teaching
@Anjalisharma-wp2tg · 2 months ago
Nitish!!!! You are amazing
@shashankbangera7753 · 1 year ago
Beautiful explanation!
@krishcp7718 · 1 year ago
Hi Nitish, very nice video. Just one thing I noticed, around 04:00: for a given intercept b, when m changes, it is basically the orientation, i.e. the angle the line makes with the x-axis, that changes. So when m is either too small or too high, there is underfitting; as can be seen geometrically, the line is quite far from the data points for high and low m. So overfitting, meaning the line is very close to the data points, occurs only for certain values of m, particularly between the high and low m values. Please let me know your thoughts on this. Regards, Krish
@Ishant875 · 1 year ago
That statement is incorrect
@manasyadav3159 · 1 year ago
Hi, how can we say that a very high slope is a case of overfitting? It could be underfitting as well. I think a high slope doesn't mean the line will perfectly fit our training data. Please help me out.
@ATHARVA89 · 3 years ago
Sir, will you be including SVM, t-SNE, and more later in the 100 Days of ML playlist?
@campusx-official · 3 years ago
Yes
@harsh2014 · 2 years ago
Thanks for this session.
@osho_magic · 2 years ago
Sir, what if the incorrect fit line is already on the right side and the imaginary true fit is on the left? Then ridge will shift it further right, away from the true fit. That would be the opposite of regularisation, wouldn't it?
@arman_shekh97 · 3 years ago
Sir, this video came after 5 days; everything is fine now.
@lingasodanapalli615 · 10 months ago
But sir, why did you choose the two training points above the actual dataset? If we chose those two training points below the actual dataset, then the correct line's slope would be higher than the predicted line's slope, so the loss of the predicted line's slope would be less.
@ruchitkmeme4441 · 9 months ago
Exactly the problem I came to the comment box for!! I mean, if we give it all the data and not only those two points, normal linear regression will also choose the line that we want after ridge regression.
@asifkdas · 1 year ago
The slope value in a linear regression model does not directly indicate overfitting.
@Tusharchitrakar · 1 year ago
Yes, of course, but I think what he's trying to suggest is that suspiciously high values "might" be indicative of overfitting.
@kindaeasy9797 · 8 months ago
But in the graph at 7:28, if the 2 training points were below the test points, wouldn't the best fit line's slope have to be increased to handle the overfitting??? I think this video's logic is flawed.
@abhinavkale4632 · 3 years ago
Just one issue: why did you multiply 0.9 * 3 while calculating the loss at the second point?
@SidIndian082 · 2 years ago
Even I am confused about this :-(
@balrajprajesh6473 · 2 years ago
It is clearly mentioned by Sir in the video that it is just an assumption.
@somanshkumar1325 · 2 years ago
There are two points in our training dataset: (1, 2.3) and (3, 5.3). For calculating the loss at the second point, Yi = 5.3 and Xi = 3. Ŷ = m*Xi + b, where m = 0.9 and b = 1.5, so Ŷ = 0.9*3 + 1.5 = 4.2. I hope it helps?
@abhinavkale4632 · 2 years ago
@@somanshkumar1325 yooo...
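(A minimal sketch of the calculation in this thread, using the points and the m, b values quoted above; the regularization strength lam is an assumed, hypothetical value.)

```python
# Points and coefficients taken from the thread above; lam is assumed.
points = [(1, 2.3), (3, 5.3)]   # training points
m, b = 0.9, 1.5                 # candidate slope and intercept
lam = 1.0                       # hypothetical regularization strength

squared_error = sum((y - (m * x + b)) ** 2 for x, y in points)
ridge_loss = squared_error + lam * m ** 2   # ridge penalizes the slope, not b
print(ridge_loss)  # at the second point alone: (5.3 - 0.9*3 - 1.5)^2 = 1.21
```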
@BP-me7lj · 4 months ago
The whole idea should be to reduce the overfitting of the 1st line. But here we have a 2nd line with different parameters. There should be only the first line, and when we apply the lambda*m^2 penalty to it, it should give less error. Here we already have the 2nd line. When I calculated the loss without the lambda term, the loss was even less. I don't know. Someone please clarify this.
@d-pain4844 · 2 years ago
Sir, please use a slightly darker marker.
@mohitkushwaha8974 · 2 years ago
Awesome
@ANUbhav918 · 3 years ago
I guess bias is higher compared to variance in overfitting, and vice versa in underfitting. Please correct me.
@manikanteswarareddysurapur8769 · 2 years ago
It's the opposite.
@flakky626 · 1 year ago
Low bias in overfitting, and low variance in underfitting.
@shaikhsaniya3585 · 2 years ago
Is regularization the same as regression here??
@kindaeasy9797 · 8 months ago
Your overfitting line doesn't satisfy the low-bias, high-variance definition at all. I don't think overfitting is possible in the case of simple linear regression, because the line can't bend to pass through each and every data point of the training data set.
@rohitdahiya6697 · 2 years ago
Why is there no learning rate hyperparameter in scikit-learn's Ridge/Lasso/ElasticNet? Since there is a hyperparameter called max_iter, that suggests it uses gradient descent, but still there is no learning rate among the hyperparameters. If anyone knows, please help me out with it.
@near_. · 2 years ago
Did you get the answer??
@rohitdahiya6697 · 2 years ago
@@near_. No, still waiting for some expert to reply.
@YogaNarasimhaEpuri · 2 years ago
I hadn't thought about this... I just saw from the documentation that not all solvers use gradient descent (I think). 'sag' uses Stochastic Average Gradient descent; its step size / learning rate is set to 1 / (alpha_scaled + L + fit_intercept), where L is the max sum of squares over all samples. 'svd' uses a Singular Value Decomposition (matrix-based), 'cholesky' is matrix-based too, ... Otherwise, like SAG, the solvers automatically calculate the learning rate based on the data and the solver. What's your opinion?
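(A small sketch of the point above: scikit-learn's Ridge exposes alpha, solver, and max_iter but no learning-rate parameter, since solvers such as 'svd' and 'cholesky' are closed-form and 'sag' derives its own step size from the data. The synthetic data below is assumed, purely for illustration.)

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

closed_form = Ridge(alpha=1.0, solver="cholesky").fit(X, y)          # direct solve, no iterations
iterative = Ridge(alpha=1.0, solver="sag", max_iter=1000).fit(X, y)  # step size chosen internally
print(closed_form.coef_, iterative.coef_)  # the two should agree closely
```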
@garimadhanania1853 · 8 months ago
I really appreciate your effort and all your videos, but I think the explanation is incorrect here. A high m is not the definition of overfitting. In a typical linear regression with m + 1 weights, if we do not constrain the values of the weights and let them be anything, they can represent very complex functions, and that causes overfitting. We have to penalize large values of the weights (by adding them to the loss function) so that our function has a lower capacity to represent complexity, and hence it won't learn complex functions that merely fit the training data well.
@aaloo_ka_paratha · 12 days ago
You're absolutely right. Overfitting occurs when the model becomes too complex, which can happen if the weights are unconstrained and grow too large, allowing the model to fit the noise in the data. Regularization techniques like Ridge regression help prevent this by adding a penalty to the weights, ensuring the model remains simpler and generalizes better to unseen data. Great explanation!
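(For readers following this thread, a minimal sketch of the penalized loss being described; the function and variable names are illustrative, not from the video.)

```python
import numpy as np

def ridge_loss(w, b, X, y, lam):
    """Squared error plus an L2 penalty on the weights (intercept excluded)."""
    residuals = y - (X @ w + b)
    return np.sum(residuals ** 2) + lam * np.sum(w ** 2)
```

A larger lam shrinks the weights harder, trading a little training error for better generalization.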
@arshad1781 · 3 years ago
THANKS
@ashisharora9649 · 2 years ago
Sorry to say, but you shared bookish knowledge this time; the practical intuition is not there. Adding a term shifts the line upward in parallel, so how is it able to change the slope? You said most data scientists keep this model as the default, since it will only become active when there is a situation of overfitting; kindly explain how. And as for how the model finds the best fit line of the test set, is that something you assumed on your own? Does the algorithm do the same?
@campusx-official · 2 years ago
Regularization: kzbin.info/aero/PLKnIA16_RmvZuSEZ24Wlm13QpsfLlJBM4
Check out this playlist; maybe this will help.
@pankajwalia155 · 2 years ago
Can someone please tell me which section covers training error, generalization error, testing error, and irreducible error? My exam is on 20 December.