Tutorial 28-MultiCollinearity In Linear Regression- Part 2

103,364 views

Krish Naik

A day ago

Comments: 93
@karangupta6402 3 years ago
Thanks, Krish. One way you suggested is to drop the feature. One more thing we can do is combine the features into what is called an interaction term (e.g. multiply Age and Years of Experience, then run your OLS model). This also seems to control collinearity to some extent.
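The interaction-term idea in the comment above can be sketched with plain numpy OLS (all data synthetic, variable names hypothetical): multiply the two correlated columns into one combined term and fit on that instead of both.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(22, 60, n)
years_exp = age - 22 + rng.normal(0, 1, n)          # strongly correlated with age
salary = 30_000 + 2_000 * years_exp + rng.normal(0, 5_000, n)

# Instead of keeping both collinear columns, form one combined/interaction term.
interaction = age * years_exp
X = np.column_stack([np.ones(n), interaction])      # intercept + interaction term

beta, *_ = np.linalg.lstsq(X, salary, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((salary - pred) ** 2) / np.sum((salary - salary.mean()) ** 2)
print(f"R^2 using only the interaction term: {r2:.3f}")
```

The fit keeps most of the explanatory power while avoiding two near-duplicate columns; as the comment says, whether the combined term is meaningful is a domain-knowledge call.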
@ibrahimmondal9104 2 years ago
Now the concept of Multicollinearity is totally clear....thanks bhaiya🙂👍✌️
@prijwalrajavat5811 4 years ago
I was searching for the theory behind backward elimination... everywhere just the procedure was given, but the "why" was explained by you. Thank you, sir.
@bonnyphilip8022 2 years ago
You are simply awesome... Nothing else to say.. Thank you for your contribution and service...
@bhagwatchate7511 3 years ago
Hello Krish, thanks for the amazing explanation. I have one question: is it necessary to check the summary to conclude about multicollinearity, or can we proceed with the corr() function alone? Thanks in advance :) #multicollinearity #KrishNaik #Regression
@gargisingh9279 3 years ago
@KrishNaik Sir!! What about another technique called VIF (variance inflation factor)? That technique is also widely used for multicollinearity.
@StyleTrick 4 years ago
Krish, could you explain why AGE has the high p-value and not Years of Experience? If you remove Years of Experience from the table, then Age has a p-value of 0, which makes sense because in the data, as Age increases, Salary increases accordingly.
@rankerhill2335 4 years ago
Unless you calculate the variance inflation factor, you cannot be certain that multicollinearity is not present. Ideally you should compute the partial correlation coefficient, calculate 1/(1 - partial correlation coefficient), and see if this is greater than 2.
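The VIF mentioned in these comments can be sketched without statsmodels: VIF_i = 1/(1 - R_i^2), where R_i^2 comes from regressing feature i on the rest, which for standardized data equals the diagonal of the inverse correlation matrix. Synthetic data, hypothetical column names:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.normal(40, 8, n)
years_exp = age - 22 + rng.normal(0, 2, n)   # collinear with age
tv_spend = rng.normal(100, 20, n)            # unrelated feature

X = np.column_stack([age, years_exp, tv_spend])

# For standardized features, the VIFs are exactly the diagonal entries of
# the inverse of the correlation matrix.
corr = np.corrcoef(X, rowvar=False)
vif = np.diag(np.linalg.inv(corr))
for name, v in zip(["age", "years_exp", "tv_spend"], vif):
    print(f"{name:10s} VIF = {v:6.1f}")
```

A common rule of thumb flags features with VIF above roughly 5-10; here the two collinear columns light up while the unrelated one stays near 1. statsmodels also ships a `variance_inflation_factor` helper in `statsmodels.stats.outliers_influence` that computes the same quantity per column.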
@ganeshdevare7360 3 years ago
Thank you very much, sir, for the explanation at 00:00
@DataInsights2001 3 years ago
Nice presentation! Dropping a feature is one solution, but sometimes you can't drop it and you need that feature in the analysis. What would you do then? Combine features, or use division or multiplication? Or use factor analysis? In cases like marketing mix modeling, similar issues arise often. Please try videos on marketing mix modeling, optimization, forecasting, estimating ROI, IROI, response curves...
@muhammadiqbalbazmi9275 3 years ago
I think instead of removing any feature, we can combine both correlated features (in this context): Age + YearsOfExperience => seniority level. Problem solved. (Domain knowledge is required for feature engineering.)
@likithliki1160 3 years ago
Amazing explanation!
@ashwininaidu2302 4 years ago
Hi Krish, nice explanation of the concept. If you could make a video on the assumptions of linear regression (theory and practical), that would be of great help to us.
@shashankkhare1023 4 years ago
Hey Krish, do we really need to look at the std. error? I think looking at the p-value is enough, as the p-value is calculated from the t-statistic, which is calculated from the coefficient and std. error. Correct me if I am wrong, thanks.
@ruturajjadhav8905 3 years ago
1. Linearity 2. Homoscedasticity 3. Multivariate normality 4. Independence of errors 5. Lack of multicollinearity. Please do a video on these; I am not getting these concepts.
@sahilgaikwad98 4 years ago
Thank you for such a wonderful explanation, sir.
@aifarmerokay 4 years ago
Thanks sir, your explanation is really good.
@Katarahul 4 years ago
Plotting a correlation matrix and analyzing it is a faster option, right?
@Roblox5091 3 years ago
Yes
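A quick sketch of that correlation-matrix check (synthetic data, hypothetical column names). One caveat: pairwise correlations can miss multi-way collinearity, where a feature is a combination of several others; VIF catches that case.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
age = rng.uniform(22, 60, 100)
df = pd.DataFrame({
    "age": age,
    "years_exp": age - 22 + rng.normal(0, 1, 100),
    "tv_spend": rng.normal(100, 20, 100),
})

corr = df.corr()
print(corr.round(2))

# Flag any pair whose absolute correlation exceeds a chosen threshold.
high = [(a, b) for a in corr.columns for b in corr.columns
        if a < b and abs(corr.loc[a, b]) > 0.9]
print("highly correlated pairs:", high)
```

The threshold (0.9 here) is a judgment call; values like 0.8 are also common in practice.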
@abdullahshafi8865 4 years ago
Intuition King is back.
@sachinborgave8094 4 years ago
Thanks Krish. Please upload further deep learning videos.
@peeyushyadav4991 3 years ago
Wait, if the newspaper column/feature is not correlated, then how do we conclude that it can be dropped because its p-value is high? Are we making the conclusions based on the t-test values here as well?
@0SIGMA 3 years ago
Wow! Beautiful, sir.
@unezkazi4349 3 years ago
So that means we should now have 0 expenditure on newspaper? And after removing the newspaper feature, if we do OLS again, can there be any other feature with p-value less than 0.5 in the new OLS model?
@jatin7836 4 years ago
An important question, if anyone knows, please answer: why do we give OLS the X value (with the constant) and not the x values (our independent features)? If we are not giving the independent values to OLS, then how is it showing the table in the output (with X and y)?
@shubhamkundu2228 3 years ago
So to sum up, to identify multicollinearity, we use OLS (ordinary least squares) method to check the model summary for std error (values should be less not high), Rsquare value (should tend towards 1), p value (should
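The std error / t-stat / p-value columns that the summary above refers to all come from Var(beta_hat) = sigma^2 (X'X)^-1; when two columns are collinear, X'X is nearly singular and the standard errors inflate, which is exactly the symptom the video uses. A pure-numpy sketch of what the OLS summary computes (synthetic data, hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
age = rng.uniform(22, 60, n)
years_exp = age - 22 + rng.normal(0, 1, n)         # nearly duplicates age
salary = 30_000 + 2_000 * years_exp + rng.normal(0, 5_000, n)

X = np.column_stack([np.ones(n), age, years_exp])  # constant + two collinear features
beta, *_ = np.linalg.lstsq(X, salary, rcond=None)

resid = salary - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
# Var(beta_hat) = sigma^2 * (X'X)^-1: collinear columns make X'X near-singular,
# which inflates these standard errors (and hence pushes p-values up).
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = beta / se
print("std errors:", se.round(2))
print("t-stats   :", t_stats.round(2))
```

The p-values in the statsmodels summary are just two-sided tail probabilities of these t-statistics under a t distribution with `dof` degrees of freedom.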
@kakranChitra 3 years ago
Thanks, good elaboration!!
@madhavilathamandaleeka5953 3 years ago
What is the difference between the OLS model and linear regression with MSE? Do both give the same result? Please clear this up 🙏🙏
@VishalPatel-cd1hq 4 years ago
Hi sir, can we directly find the covariance matrix of the feature data and take its determinant? If it is zero, the covariance matrix is singular, and if it is singular then there is a dependent column, so there is multicollinearity in our data.
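That determinant check can be sketched as follows; using the correlation matrix rather than the covariance matrix makes the "near zero" threshold scale-free, since covariance determinants depend on the units of the features. Synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = 2 * x1 + rng.normal(scale=0.01, size=n)   # near-duplicate of x1
x3 = rng.normal(size=n)

X = np.column_stack([x1, x2, x3])
corr = np.corrcoef(X, rowvar=False)

# det close to 0 => the correlation matrix is nearly singular, i.e. some
# feature is (almost) a linear combination of the others.
det = np.linalg.det(corr)
print(f"det(correlation matrix) = {det:.6f}")
```

In practice data is rarely exactly singular, so the question becomes how close to zero counts as "too close"; per-feature VIFs answer that more directly by pointing at the offending columns.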
@ajaykushwaha4233 2 years ago
Will this approach work on data with a huge number of features?
@raedalshehri7969 2 years ago
Thank you so much
@sumeersaifi6354 3 years ago
At the end, you said a one-unit decrease in newspaper expenditure will result in a one-unit increase in sales. Can you please explain that? According to me it would result in a change in profit, not in sales.
@shreyasb.s3819 4 years ago
Well explained. Thanks a lot.
@maskman9630 2 years ago
Can we use the same process for logistic regression, brother?
@alexandremaillot803 4 years ago
After making a model with Python (training, testing and saving it), how can I put new data through it for predictions? Thanks for the content.
@amansrivastava3081 3 years ago
For example, if this is how you made the regressor:
reg = LinearRegression()
reg.fit(x_train, y_train)
then you can predict like:
reg.predict([[100]])
and you will get the predicted value. If you trained on multiple features, then use reg.predict([[100, 200]]) according to the dataframe you trained on.
@saumyagupta2606 2 years ago
Is it good practice to check multicollinearity for ML models other than linear ones? Is it necessary for other models as well?
@akramhossain9576 4 years ago
I was looking at the multicollinearity problem in interaction terms. How do you solve it? One suggestion is centering the variable, but I'm still confused: when one of my interaction variables is dichotomous and the coding = 0 is important to me, then if I use centering the 0 coding will not be there anymore. So how do I solve this? Can you please clarify this for me? Thanks.
@adhvaithstudio6412 4 years ago
Can you please explain why the p-value is biased towards the Age variable and not Years of Experience?
@Datacrunch777 3 years ago
Sir, kindly upload a video on ridge regression: how to choose the lambda value and how to simulate it with respect to the basic estimators or formulas.
@avinashmishra6783 3 years ago
Why does multicollinearity arise? What happens that reduces the adjusted R-squared?
@K-mk6pc 2 years ago
Can anyone explain why we add a constant column of 1s to the predictor variables?
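The column of ones carries the intercept: statsmodels' `sm.add_constant` just prepends it, so the first fitted coefficient becomes b0. Without it, OLS is forced through the origin. A numpy sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 50)
y = 5.0 + 2.0 * x + rng.normal(0, 0.5, 50)   # true intercept 5, slope 2

# Without the column of ones, OLS is forced through the origin:
b_no_const, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

# With a constant column (this is all sm.add_constant does), the first
# coefficient is free to become the intercept b0:
X = np.column_stack([np.ones_like(x), x])
b_const, *_ = np.linalg.lstsq(X, y, rcond=None)

print("through origin     :", b_no_const.round(2))
print("intercept and slope:", b_const.round(2))
```

The no-constant fit distorts the slope to compensate for the missing intercept, which is why statsmodels makes you add the constant explicitly.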
@RitikSingh-ub7kc 4 years ago
Can we just use principal component analysis to see if all the variance is covered by reducing the number of features here?
@shubhamchoudhary5461 3 years ago
Is it like hypothesis testing?
@slowhanduchiha 4 years ago
In the 2nd example, wasn't scaling important?
@РайымбекТолеугазиев 4 years ago
Cool video, but what about heteroskedasticity?
@galymzhankenesbekov2924 4 years ago
Very good.
@JP-fi1bz 4 years ago
Isn't the p-value used for the null hypothesis?
@adityakumar-sp4ki 3 years ago
While importing this library -> import statsmodels.api as sm, I'm getting this error -> module 'pandas' has no attribute 'Panel'
@abhishekverma549 4 years ago
Sir, what is P>|t|?
@nagarjunakanneganti5953 3 years ago
If I remove years of experience instead of age just by the correlation matrix, it will work, right? Do I still have to check p-values, since constructing an LR model with just age could change the p-value? Any thoughts @channel
@guneet556 4 years ago
Hi Krish, kindly answer this question of mine: for under- and oversampling, the techniques you mentioned apply to numeric datatypes. For categorical datatypes, do we first have to encode them as 0/1 and then apply SMOTETomek and NearMiss? Is that the proper way to deal with it? And if we use a tree-based approach, will it handle the imbalance and encoding itself? Please Krish, reply; it will be a big help!
@krishnaik06 4 years ago
Yes, a tree approach will solve that problem. But understand that tree techniques usually require a lot of time.
@guneet556 4 years ago
@@krishnaik06 Sir, thanks a lot. I am just a beginner and follow each of your videos thoroughly. Sir, can you make a video on how to build a resume for people looking to transition into DS?
@chitranshaagarwal4676 1 year ago
Please put the topics in order; I'm getting confused.
@amitkhandelwal8030 3 years ago
Sir, why can't we choose TV as B0?
@abhi9raj776 4 years ago
Thanks a lot, sir!
@deepaktripathi892 4 years ago
As years of experience and age are related, how do we decide which one to drop?
@Mohit-im1rp 4 years ago
What is the p-value in the above explanation?
@harshstrum 4 years ago
Hi bhaiya, I didn't get why you chose to drop the age feature over years of experience.
@krishnaik06 4 years ago
because age and experience are highly correlated and we could see the p value of age was greater than 0.05
@syedhamzajamil4490 4 years ago
Sir, can I use PCA to deal with multicollinearity?
@abhishekchanda4002 4 years ago
Hi Krish, I have a doubt. If I ignore multicollinearity in the second example, how will it affect the prediction?
@SayantanSenBony 4 years ago
Hi Krish, I have one question which I face on a daily basis in real time: how do you detect heteroscedasticity, and what is the method to rectify it in Python?
@simonelgarrad 3 years ago
There are many tests available. You can research Bartlett's test; it assumes that the samples come from populations having the same variance. There is another called the Goldfeld-Quandt test. To rectify it in Python, what I believe is that when working with MLR, if the normality assumption is satisfied, you shouldn't get heteroscedasticity. So if we have variables that aren't following a normal distribution, they can be transformed (refer to Krish's video on that too), and then I don't think heteroscedasticity should be present.
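The Goldfeld-Quandt idea mentioned above can be sketched by hand: sort by a predictor, fit OLS on the low-x and high-x halves separately, and compare the residual variances with an F ratio (statsmodels also ships this as `het_goldfeldquandt` in `statsmodels.stats.diagnostic`). Synthetic data below:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = np.sort(rng.uniform(1, 10, n))
y = 3 * x + rng.normal(0, x, n)   # error spread grows with x => heteroscedastic

def rss(x_part, y_part):
    """Residual sum of squares from a simple OLS fit with intercept."""
    X = np.column_stack([np.ones_like(x_part), x_part])
    beta, *_ = np.linalg.lstsq(X, y_part, rcond=None)
    r = y_part - X @ beta
    return r @ r

# Goldfeld-Quandt idea: fit the low-x and high-x halves separately and
# compare residual variances; a ratio far above 1 signals heteroscedasticity.
half = n // 2
f_ratio = (rss(x[half:], y[half:]) / (half - 2)) / (rss(x[:half], y[:half]) / (half - 2))
print(f"F ratio (high-x variance / low-x variance) = {f_ratio:.2f}")
```

Under homoscedastic errors this ratio follows an F distribution around 1; the textbook version also drops a band of middle observations to sharpen the contrast between the two halves.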
@nithinmamidala 4 years ago
The correlation part 1 video is missing; please upload it and rearrange the sequence.
@adhvaithstudio6412 4 years ago
Why are you ignoring only age? Why can't we ignore years of experience?
@dhirajbaruah9888 4 years ago
What does the p-value mean??
@dikshagupta3276 2 years ago
Please add the link to the collinearity video.
@sejalchandra2114 4 years ago
Hi sir, I have a doubt. Could we use this if we have one-hot encoded features in the dataset?
@saidurgakameshkota1246 4 years ago
Did you get the answer?
@nidhipandey8004 3 years ago
What is the difference between OLS and gradient descent?
@simonelgarrad 3 years ago
What I have read and understood so far is that OLS is a good method for simple linear regression, while gradient descent is the better method when working with many independent variables. You can learn more about this by watching Josh Starmer's gradient descent video.
@SatnamSingh-cm2vt 4 years ago
What does the p-value signify?
@shivanshsingh5555 4 years ago
Why do you keep saying 0.5 again and again, sir? This is not clear. Please send a reference link, because I am watching this playlist step by step but I am still not getting this 0.5 criterion.
@jyothinkjayan6508 4 years ago
When will the next deep learning batch begin?
@Vignesh0206 4 years ago
May I know, is there a part one of this video?
@jatashukla6891 4 years ago
Hi Krish, I bought your 300 Rs package on YouTube just to get in contact with you. I need your time to answer some of my doubts regarding switching my career into data science. Please let me know a way to connect with you directly.
@raghavchhabra4783 4 years ago
Watched it, but I didn't understand how to do this if there are 150+ columns!!
@sahilgaikwad98 4 years ago
Use iloc to get the independent variables and store them in X, store the target in Y, and use the OLS model.
@namratarajput7092 4 years ago
Hi, I am getting an error after running this line "import statsmodels.api as sm": ImportError: cannot import name 'factorial'. I am a beginner, so please help.
@tanweerkhan3020 4 years ago
You need to downgrade your scipy or install statsmodels from master. You can check this on Stack Overflow.
@sushilchauhan2586 4 years ago
What if our features number in the 1000s? Please reply, anyone who knows the answer.
@krishnaik06 4 years ago
For that we will apply dimensionality reduction
@sushilchauhan2586 4 years ago
@@krishnaik06 I never thought BHAI himself would reply!
@sushilchauhan2586 4 years ago
@@krishnaik06 Sorry, we can't apply dimensionality reduction, as it only deals with high variance and has no relation to our class output... we would use randomized lasso regression or randomized logistic regression. Thank you, Krish bhai.
@guneet556 4 years ago
Sorry, but I have to ask again: can someone correct me on my previous comment?
@a_wise_person 4 years ago
How is a Dell Inspiron i5 8th gen with 2 TB HDD for data science and programming work?
@srujohn652 4 years ago
It's good enough, though you can also use Google Colab for your ML projects.
@adityapathania3618 3 years ago
Brother, please share the dataset too.
@pushkarasharma3746 4 years ago
Everything is fine, Krish sir, but please try not to say "particular" in every sentence... it is very annoying.
@ayushasati2110 3 years ago
Your voice is not very clear.