Ridge Regression Part 1 | Geometric Intuition and Code | Regularized Linear Models

94,623 views · CampusX · 1 day ago

Comments: 60
@mdnoorsolaimansiam8821 · 2 years ago
For the past two days, I was watching different videos and reading articles to understand the core of ridge regression. I got tired as I wasn't understanding. And here I am, halfway through this video, and I think I've got the grasp. It would be unfair to say the previous content didn't help me at all, but your lecture is much more insightful than those. Thank you very much for sharing your learning with us.
@UtkarshSingh-qw6md · 2 months ago
Never have I ever seen such brilliance in teaching.
@TomatoPotato874 · 9 months ago
Charansparsh to you 🙏
@youbeyou9769 · a year ago
OMG, this is brilliant. Exactly what I've been looking for. Thanks for making our lives easier.
@quicksciencecalculation5676 · 2 years ago
Well explained, sir. Please, I request you, never stop.
@1234manasm · 2 years ago
You look like a genius and teach like a professor.
@23injeela79 · 8 months ago
Very well explained. Sir, you have done a lot of hard work on your lectures. Keep going.
@yuktashinde3636 · 2 years ago
At 10:51, why did we consider training data points during testing? But yes, there are other training points that will create the y − ŷ value.
@ghostofuchiha8124 · 11 months ago
Krish Naik (Hindi) has explained this better; among the rest so far, CampusX seems good.
@darshedits1732 · 11 months ago
Is Krish Naik's explanation better for only this algorithm, or for all algorithms?
@SubhamKumar-bt9th · 4 months ago
I watched his video just now after reading your comment; it's mostly the same, nothing better. Even he has not explained: "What if the incorrect fit line is already on the right side and the imaginary true fit is on the left? Then ridge will shift it further right, away from the true fit."
@mustafizurrahman5699 · a year ago
Mesmerising, such a lucid explanation.
@krishnakanthmacherla4431 · 2 years ago
You are a game changer, sir.
@rahmankhan7303 · 2 months ago
This is the first video whose concept wasn't very clear to me, but still a great video, sir.
@sujithsaikalakonda4863 · 2 years ago
Great explanation, sir.
@ParthivShah · 10 months ago
Thank you, sir.
@RAKESHADHIKARI-h4b · 6 months ago
Awesome series, brother. Great work done by you. Looking forward to an MLOps video.
@UtkarshSinghchutiyaNo1 · 2 months ago
Sir, you are a legend.
@Ishant875 · a year ago
I appreciate your work and no one can teach like you, but there is just one thing: overfitting doesn't mean a high slope in simple linear regression. Overfitting means you have used a very complex model which is not able to generalise to new data not seen in training. Simple linear regression is the simplest model, so there can't be overfitting in it; there can only be underfitting.
@kindaeasy9797 · 8 months ago
Exactly, overfitting can't happen in simple linear regression, because the line will not bend to pass through each and every data point in the training data set. Yes, it can underfit or fit well.
@arpitakar3384 · 5 months ago
God of ML teaching.
@Anjalisharma-wp2tg · 2 months ago
Nitish!!!! You are amazing.
@shashankbangera7753 · a year ago
Beautiful explanation!
@krishcp7718 · a year ago
Hi Nitish, very nice video. Just one thing I noticed, around 04:00: for a given intercept b, when m changes, it is basically the orientation, i.e. the angle the line makes with the x-axis, that changes. So when m is either too low or too high, there is underfitting; as can be seen geometrically, the line is quite far from the data points for high and low m. So overfitting, meaning the line is very close to the data points, occurs only for certain values of m, particularly between the high and low m values. Please let me know your thoughts on this. Regards, Krish
@Ishant875 · a year ago
That statement is incorrect.
@manasyadav3159 · a year ago
Hi, how can we say that if the slope is very high then it is a case of overfitting? It can be underfitting too. I think a high slope doesn't mean the line will perfectly fit our training data. Please help me out.
@ATHARVA89 · 3 years ago
Sir, will you be including SVM, t-SNE, and the rest ahead in the 100 days of ML playlist?
@campusx-official · 3 years ago
Yes
@harsh2014 · 2 years ago
Thanks for this session.
@osho_magic · 2 years ago
Sir, what if the incorrect fit line is already on the right side and the imaginary true fit is on the left? Then ridge will shift it further right, away from the true fit. It becomes de-regularisation, doesn't it?
@arman_shekh97 · 3 years ago
Sir, this video came after 5 days; everything is fine now.
@lingasodanapalli615 · 10 months ago
But sir, why did you choose two training points above the actual dataset? If we chose those two training points below the actual dataset, then the correct line's slope would be higher than the predicted line's slope, so the loss on the predicted line's slope would be less.
@ruchitkmeme4441 · 9 months ago
Exactly the problem that brought me to the comment box!! I mean, if we give all the data and not only those two points, normal linear regression will also choose the line that we want after ridge regression.
@asifkdas · a year ago
The slope value in a linear regression model does not directly indicate overfitting.
@Tusharchitrakar · a year ago
Yes, of course, but I think what he's trying to suggest is that some suspiciously high values "might" be indicative of overfitting.
@kindaeasy9797 · 8 months ago
But in the graph at 7:28, if those 2 training points had been below the test points, wouldn't the slope of the best-fit line have to be increased to handle overfitting??? I think this video's logic is flawed.
@abhinavkale4632 · 3 years ago
Just one issue: why did you multiply 0.9*3 while calculating the loss at the second point?
@SidIndian082 · 2 years ago
Even I am confused about this :-(
@balrajprajesh6473 · 2 years ago
It is clearly mentioned by sir in the video that it is just an assumption.
@somanshkumar1325 · 2 years ago
There are two points in our training dataset: (1, 2.3) and (3, 5.3). For calculating the loss at the second point, Yi = 5.3 and Xi = 3. Y_hat = m*Xi + b, where m = 0.9, Xi = 3, b = 1.5, so Y_hat = 0.9*3 + 1.5. I hope that helps?
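This calculation can be sketched in a few lines of Python. The two points and the line parameters (m = 0.9, b = 1.5) are the ones from the video; the penalty strength lambda = 1 is an assumption made here for illustration:

```python
# Two training points and the candidate line's parameters from the video
points = [(1, 2.3), (3, 5.3)]
m, b = 0.9, 1.5
lam = 1.0  # ridge penalty strength (lambda), assumed here

# Prediction at the second point: this is where the 0.9*3 comes from
y_hat_2 = m * 3 + b  # 0.9*3 + 1.5 = 4.2

# Squared-error part of the loss, summed over both points
sse = sum((y - (m * x + b)) ** 2 for x, y in points)

# Ridge adds lambda * m^2 on top of the squared error
ridge_loss = sse + lam * m ** 2

print(y_hat_2)      # 4.2
print(ridge_loss)   # 1.22 + 0.81 = 2.03
```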
@abhinavkale4632 · 2 years ago
@@somanshkumar1325 yooo...
@BP-me7lj · 4 months ago
The whole idea should be to reduce the overfitting of the 1st line, but we end up with a 2nd line with different parameters. There should be only the first line, and when we add the lambda*m^2 term to it, it should give less error. Here we already have the 2nd line. When I calculated the loss without the lambda term, the loss was even less. I don't know; someone please clarify this.
@d-pain4844 · 2 years ago
Sir, please use a darker marker.
@mohitkushwaha8974 · 2 years ago
Awesome
@ANUbhav918 · 3 years ago
I guess bias is higher compared to variance in overfitting, and vice versa in underfitting. Please correct me.
@manikanteswarareddysurapur8769 · 2 years ago
It's the opposite.
@flakky626 · a year ago
Low bias (with high variance) in overfitting, and low variance (with high bias) in underfitting.
@shaikhsaniya3585 · 2 years ago
Is regularization the same as regression here?
@kindaeasy9797 · 8 months ago
Your overfitting line doesn't satisfy the low-bias, high-variance definition at all. I don't think overfitting is possible in the case of simple linear regression, because the line can't bend to pass through each and every data point of the training data set.
@rohitdahiya6697 · 2 years ago
Why is there no learning-rate hyperparameter in scikit-learn's Ridge/Lasso/ElasticNet? Since they have a hyperparameter called max_iter, that suggests they use gradient descent, but still there is no learning rate among the hyperparameters. If anyone knows, please help me out.
@near_. · 2 years ago
Did you get the answer?
@rohitdahiya6697 · 2 years ago
@near_. No, still waiting for some expert to reply.
@YogaNarasimhaEpuri · 2 years ago
I hadn't thought about this... From the documentation, I saw that not all solvers use gradient descent (which is what I had assumed). 'sag' uses Stochastic Average Gradient descent, where the step size (learning rate) is set to 1 / (alpha_scaled + L + fit_intercept), with L being the max sum of squares over all samples. 'svd' uses a singular value decomposition (matrices); 'cholesky' uses a matrix factorization; and so on. In other words, like SAG, all solvers derive any step size automatically from the data and the solver. What's your opinion?
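The point about non-gradient solvers can be made concrete: ridge regression has a closed-form solution, so a direct ('cholesky'-style) solve needs no learning rate at all. A minimal NumPy sketch, with toy data assumed here and the intercept omitted for brevity:

```python
import numpy as np

# Toy data, assumed for illustration
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 7.8])
alpha = 1.0  # ridge penalty strength

# Closed-form ridge solution: w = (X^T X + alpha*I)^(-1) X^T y
# No learning rate appears anywhere -- it's a single linear solve,
# which is why sklearn's Ridge exposes `alpha` but no `learning_rate`.
n_features = X.shape[1]
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)
print(w)
```

Only the iterative solvers (e.g. 'sag') need a step size internally, and as noted above they compute it from the data rather than exposing it as a hyperparameter.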
@garimadhanania1853 · 8 months ago
I really appreciate your effort and all your videos, but I think the explanation is incorrect here. A high m is not the definition of overfitting. In a typical linear regression with m + 1 weights, if we do not constrain the values of the weights and let them be anything, they can represent very complex functions, and that causes overfitting. We have to penalize large weight values (by adding them to the loss function) so that our function has less capacity to represent complexity, and hence it won't learn complex functions that merely fit the training data well.
@aaloo_ka_paratha · 12 days ago
You're absolutely right. Overfitting occurs when the model becomes too complex, which can happen if the weights are unconstrained and grow too large, allowing the model to fit the noise in the data. Regularization techniques like Ridge regression help prevent this by adding a penalty to the weights, ensuring the model remains simpler and generalizes better to unseen data. Great explanation!
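The shrinkage effect described above can be sketched with the closed-form ridge solution: fit a high-capacity polynomial to nearly linear noisy data and watch the weights shrink as alpha grows. The data and alpha values below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(0, 0.1, 10)  # nearly linear data with noise

# Degree-9 polynomial features: enough capacity to fit the noise
X = np.vander(x, 10, increasing=True)

def ridge_weights(X, y, alpha):
    """Closed-form ridge: w = (X^T X + alpha*I)^(-1) X^T y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# With a tiny penalty the weights grow large (fitting the noise);
# increasing alpha shrinks them toward zero, i.e. toward a simpler model.
for alpha in (1e-4, 0.1, 10.0):
    w = ridge_weights(X, y, alpha)
    print(alpha, np.linalg.norm(w))
```

The L2 norm of the ridge solution is monotonically non-increasing in alpha, which is exactly the "keep the weights small" behaviour the comment describes.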
@arshad1781 · 3 years ago
Thanks!
@ashisharora9649 · 2 years ago
Sorry to say, but you shared bookish knowledge this time; the practical intuition is not there. Adding a term shifts the line upward in parallel, so how is it able to change the slope? You said most data scientists keep this model as the default since it only becomes active when there is overfitting; kindly explain how. And how the model finds the best-fit line of the test set is something you assumed on your own. Does the algorithm do the same?
@campusx-official · 2 years ago
Regularization: kzbin.info/aero/PLKnIA16_RmvZuSEZ24Wlm13QpsfLlJBM4 Check out this playlist; maybe this will help.
@pankajwalia155 · 2 years ago
Can someone please tell me which section covers training error, generalization error, testing error, and irreducible error? My exam is on 20 Dec.
@rk_dixit · 5 months ago
Please spend a lot more time on the code.