This is the clearest video I have ever seen. Thanks!!
@noai3484 · 3 years ago
This saved my bachelor's thesis, thanks so much!
@DataDemystified · 3 years ago
Glad I could help!
@hyunkim3672 · 3 years ago
This is so helpful, thanks! Also very impressed by all the replies you have been posting!
@DataDemystified · 3 years ago
Happy to help!
@jschulz · 3 years ago
Can't wait for the simple effects video!
@DataDemystified · 3 years ago
It’s on the list…I promise!
@pablopinheiro5133 · 2 years ago
Great video. Thanks. Greetings from Brazil.
@DataDemystified · 2 years ago
Thanks for watching!
@ModernDayDebate · a year ago
That helped a ton, thank you!
@Dalenatton · 3 years ago
Thank you so much for the informative video. I wonder if you could give us a link to your video where you explained how to determine whether the simple slope of each openness line is significant. Thank you so much in advance for your help!
@schester1543 · 2 years ago
Thank you for this video! Very useful and easy to understand. I just wondered if you could provide a link to the future video 'simple slope' analysis you mentioned. Many thanks
@ramikhatib2425 · 2 years ago
Great job at explaining, your regression videos are truly awesome! Can you produce a video on how to probe simple slopes? Thanks!
@mahmoudelmaghraby1548 · 2 months ago
Thanks a lot. Why didn't you run the analysis with interaction using the raw scores?
@mary-janewheeler3394 · 2 years ago
Really helpful video! Do you have a video that gives guidance on how to report the interaction effect within the write-up? Thanks!
@DataDemystified · 2 years ago
Thanks! Sorry, I don't.
@JANUSKWEEPEER · 3 years ago
Super useful indeed. Looking forward to the simple slope analysis .... Is this video still coming up?
@DataDemystified · 3 years ago
Glad you found it useful! Yes, soon I hope! Lots of content to produce and this is definitely on the list.
@lc807 · 11 months ago
Thanks so much for the video! I'm a little confused about the interpretation of the main effect coefficients, as I had thought they each individually represent the change in the dependent variable when the other variable is at its mean, rather than when the other variable is 0 as you explained. Or did you mean that since the variables are mean-centered, when one equals 0 this effectively means it is at its mean value? Can you please clarify?
@johnmortenmichaelis5635 · 3 years ago
Super useful! However, is there a way in SPSS to visualize the interaction effects in a more presentable graph? I mean without the scatter, ideally in APA format.
@DataDemystified · 3 years ago
Thanks! Unfortunately I don't know a way to format the graphs easily for APA styling. When I do it for papers, I recreate all my graphs in Excel and format accordingly. Sorry!
@palmerja17231 · 3 years ago
Great video, thank you! Can you direct me to the future video tutorial to test whether each of the three regression line fits are statistically significant?
@DataDemystified · 3 years ago
Thank you! Unfortunately I have not made those yet, but they are high on my list! In the meantime, I suggest looking up either "spotlight analysis" or "floodlight analysis"... both do what I think you're looking for.
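Spotlight analysis, one of the techniques suggested in this reply, needs no special software: re-center the moderator at the value of interest and refit, and the coefficient on the focal predictor becomes its simple slope at that moderator level. A minimal numpy sketch on simulated data (all variable names and numbers are invented for illustration, not taken from the video):

```python
import numpy as np

def ols(X, y):
    """Least-squares fit with an intercept column; returns [b0, b1, ...]."""
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Simulated data where the true slope of `age` is 0.5 + 0.8 * moderator
rng = np.random.default_rng(0)
n = 500
age = rng.normal(size=n)          # centered predictor
mod = rng.normal(size=n)          # centered moderator (SD = 1)
y = 2 + 0.5 * age + 0.3 * mod + 0.8 * age * mod + rng.normal(size=n)

def simple_slope(level):
    """Spotlight analysis: re-center the moderator at `level` and refit;
    the coefficient on `age` is then its simple slope at that level."""
    m = mod - level
    return ols(np.column_stack([age, m, age * m]), y)[1]

for level in (-1, 0, 1):          # -1 SD, mean, +1 SD of the moderator
    print(f"moderator at {level:+d} SD: slope of age = {simple_slope(level):.2f}")
```

In SPSS the same trick works by computing a shifted moderator variable (e.g. moderator minus one SD) and rerunning the regression; the t-test on the focal predictor's coefficient is then the significance test for that simple slope.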
@DeveshBatraonGPlus · 3 years ago
This is a great video, thank you. I was wondering how we would quantify the effect size of the independent variable on the response variable. For instance, I'm using a similar setup with a mixed-effects model using the glmer function in R, and since I have both main and interaction effects for 'age', how can I calculate its effect size? I normally use odds ratios for this purpose. Also, should I even consider the main effects?
@DataDemystified · 3 years ago
Hi Devesh. Effect sizes can mean different things in different disciplines. For instance, in psychology and related fields (e.g. consumer behavior), effect sizes are usually calculated to help standardize comparisons (see a good long list here: en.wikipedia.org/wiki/Effect_size). In economics, in contrast, effect size often means "change", as in: what % change did we see when XYZ happened. For regressions, we typically talk about R^2 (or adjusted R^2) for the overall model, and you can compute the CHANGE in R^2 to understand the influence of a single predictor. Alternatively, there is an effect size known as f^2, which I don't often use, but which can be computed at the individual-variable level. As for odds ratios, they make sense for logistic regressions, but not for linear regressions.
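The ΔR² and f² quantities mentioned in this reply are straightforward to compute by hand: fit the model with and without the predictor of interest, take the difference in R², and divide by (1 − R² of the full model) to get Cohen's f². A numpy sketch on simulated data (all coefficients and names invented for illustration):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Simulated data (coefficients invented)
rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1 + 0.6 * x1 + 0.4 * x2 + rng.normal(size=n)

r2_reduced = r_squared(x1.reshape(-1, 1), y)        # model without x2
r2_full = r_squared(np.column_stack([x1, x2]), y)   # model with x2
delta_r2 = r2_full - r2_reduced                     # x2's incremental R^2
f2 = delta_r2 / (1 - r2_full)                       # Cohen's f^2 for x2
print(f"delta R^2 = {delta_r2:.3f}, f^2 = {f2:.3f}")
```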
@theresetodd6598 · a year ago
Hi Jeff, I understand that centering the variables makes them easier to interpret. I'm wondering what the impact of not centering the variables is. I ran my analyses initially without interaction terms and then added them to the model in SPSS (I used blocks to look at the added effects of different types of variables on my model). When I originally ran my analysis, I did not center my variables, and I'm confused about how centering them will change my results. Is it possible to leave the uncentered variables in the model but use a centered interaction term?
@lc807 · 11 months ago
I'm also wondering how the interpretation of the main effect coefficients changes when centering the main effect variables, as opposed to running the regression without centering them...
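To the centering questions in this thread, a small simulation makes the effect visible: with an interaction term in the model, centering changes what the lower-order coefficients mean, but not the interaction coefficient or the model's fit. A hedged numpy sketch (all variable names, ranges, and coefficients invented):

```python
import numpy as np

def fit(x1, x2, y):
    """OLS with an interaction term; returns [b0, b1, b2, b3]."""
    A = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2])
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Raw (uncentered) predictors with invented ranges
rng = np.random.default_rng(2)
n = 200
x1 = rng.uniform(20, 60, n)   # e.g. a raw age variable
x2 = rng.uniform(1, 7, n)     # e.g. a raw openness score
y = 5 + 0.2 * x1 + 0.5 * x2 + 0.1 * x1 * x2 + rng.normal(size=n)

b_raw = fit(x1, x2, y)
b_ctr = fit(x1 - x1.mean(), x2 - x2.mean(), y)

# The interaction coefficient is identical either way...
print(f"interaction: raw {b_raw[3]:.4f} vs centered {b_ctr[3]:.4f}")
# ...but the lower-order coefficients change meaning: uncentered, b1 is the
# slope of x1 when x2 = 0 (often outside the data); centered, it is the
# slope of x1 at the mean of x2.
print(f"slope of x1: raw {b_raw[1]:.3f} vs centered {b_ctr[1]:.3f}")
```

So centering does not change what the model predicts, only which conditional slopes the printed coefficients represent.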
@contessaa1671 · a year ago
Thank you, prof., for your great explanation. I do have a question though: I have a positive interaction term and want to probe the relationship further by stratifying my moderator. A short explanation of how to proceed would be very helpful. For instance, do I add my interaction term to the analysis with my stratified moderator? Do I keep my other independent variable continuous, or do I group it as well? Please help.
@NM-ng4dq · 2 years ago
Thank you for this video. I have a question. How can one run regression analyses per xx increase for a continuous variable?
@CarolineCha118 · 2 years ago
Hi! How would you present these tables and graphs in APA formatting?
@durgadhakal9453 · 3 years ago
Really great! I would like to ask one question. How do we interpret the whole thing if the interaction effect is not significant (but other values are the same)?
@DataDemystified · 3 years ago
It just means that the effect of each predictor on the dependent variable does not depend on the other predictor, so you can interpret the other coefficients on their own.
@federicotedeschi3841 · a year ago
Thank you. Is there a way to get (let's call the two predictors X1 and X2) the range of values of X1 for which the slope of X2 is statistically significant (or the other way round)?
@alicerosew · 2 years ago
Do we also center the covariates?
@pradhyumnbansal5188 · 3 years ago
Hey Jeff, great video. I want to know why you created the third feature (age*openness) in your linear regression. Won't it cause collinearity issues with the other features?
@DataDemystified · 3 years ago
Great question. It would not cause collinearity, because we centered the two underlying variables. Beyond that, the only way to test the interaction effect of two predictor variables (IVs) on an outcome measure (DV) is to construct an interaction term as done here. The whole point of that term is to test dependencies. I hope that helps!
@pradhyumnbansal5188 · 3 years ago
@@DataDemystified So I thought about it; please let me know if I'm right. X1 = age, X2 = openness, and Y = importance. Since we centered our IVs, E[X1] = E[X2] = 0 and thus Cov(X1, X2) = E[X1*X2]. Therefore, to check this effect of correlation between X1 and X2, we add the interaction term X1*X2.
@DataDemystified · 3 years ago
@@pradhyumnbansal5188 I think you're most of the way there. What the interaction (X1*X2) checks is whether the effect of X1 on Y depends on X2 (or, the complement: whether the effect of X2 on Y depends on X1). As in, the slope for X1's influence on Y differs as a function of X2 (if there is a significant interaction). If there is no interaction, then the slope of X1 on Y is invariant with X2. Does that help?
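The collinearity point in this thread can be checked directly: without centering, the product term is substantially correlated with its components, and mean-centering removes most of that correlation (exactly zero in expectation when the predictors are independent). A quick numpy check with made-up data (ranges invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
age = rng.uniform(20, 60, n)       # raw (uncentered) predictor, invented range
openness = rng.uniform(1, 7, n)    # raw moderator, invented range

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Uncentered, the product term is correlated with its components...
raw_corr = corr(age, age * openness)
print(f"raw:      corr(age, age*openness) = {raw_corr:.2f}")

# ...after mean-centering, that correlation essentially vanishes,
# which is why centering tames the collinearity of the interaction term.
a = age - age.mean()
o = openness - openness.mean()
centered_corr = corr(a, a * o)
print(f"centered: corr(a, a*o) = {centered_corr:.2f}")
```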
@qizhang203 · 3 years ago
Thanks for your explanation. Yet I am confused about interpreting a positive interaction effect when the main effect I am concerned with is negative. Can you help me?
@DataDemystified · 3 years ago
Hi Qi. Great question. The interpretation is the same regardless of the sign of the main effect. In other words, if you have a positive interaction, that means that the influence of one of your variables on the DV INCREASES as the other variable increases in value. The fact that the main effect of one of those IVs is positive or negative doesn't change that. I hope this helps!
@qizhang203 · 3 years ago
@@DataDemystified Hi Jeff, thanks for your quick reply. Yes, I interpreted interactions the way you said before, but recently I read a paper interpreting a positive interaction as decreasing the influence of the variable on the DV when the sign of the main effect is negative. Later, my friends and I discussed it, and our conclusion was that the interaction effect plays a strengthening role (vs. a weakening role) when the signs of the interaction and the main effect are the same (vs. different). That's why I asked you the question. After watching the video, I created a set of data and drew the graph the way you did, and it seemed to support our conclusion. I am sharing the data with you; can you reply after analyzing it? (IV: 2,4,3,3,6,5,4,3,4,3,7,5,2,8,5,9,9,6; moderator: 1,2,2,2,1,3,2,3,3,6,4,3,4,1,6,1,1,3; DV: 100, 90, 89, 99, 79,80,91,92,99,100,88,91,70,69,100,61,60,82)
@DataDemystified · 3 years ago
@@qizhang203 Hi Qi, I think the confusion may be that even with the positive interaction, in your data, the net effect of the IV (or the moderator) on the DV is still negative. However, it is less negative due to the interaction. When I run your data, I get this:

Y = 115.33 - 7.198 IV - 6.575 Moderator + 1.92 * IV * Moderator

Consider what happens when IV = 1. The model reduces to:

Y = 115.33 - 7.198 - 6.575 Moderator + 1.92 * Moderator

which means the effect of the moderator on Y is (-6.575 + 1.92) = -4.655.

Now consider IV = 2. The model reduces to:

Y = 115.33 - 7.198 * 2 - 6.575 Moderator + 1.92 * 2 * Moderator

which means the effect of the moderator on Y is (-6.575 + 1.92 * 2) = -2.735.

In other words, as the IV increases, the influence of the moderator on the DV becomes less negative, until it crosses 0 and then becomes positive. So a positive interaction effect changes the way one variable influences the DV, depending on the level of the other variable. When the coefficient is positive, it increases the influence of the other variable as the first variable increases. I hope that helps. (BTW, what you call an IV and what you call a moderator is irrelevant... the model is symmetric, so both are moderating variables.)
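The arithmetic in this reply generalizes to any IV value: the moderator's conditional slope is the moderator coefficient plus the interaction coefficient times the IV. A tiny Python sketch using the coefficients quoted in the reply above (taken as given, not re-estimated here):

```python
# Coefficients as reported in the reply above:
# Y = 115.33 - 7.198*IV - 6.575*Mod + 1.92*IV*Mod
b_mod, b_interaction = -6.575, 1.92

def moderator_slope(iv):
    """dY/dMod at a fixed IV value: b_mod + b_interaction * IV."""
    return b_mod + b_interaction * iv

for iv in (1, 2, 3, 4):
    print(f"IV = {iv}: slope of moderator = {moderator_slope(iv):+.3f}")
# The slope is negative at low IV and crosses zero between IV = 3 and IV = 4.
```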
@jenniferb163 · 3 years ago
Hello, thanks so much for your videos. Is there a way to do this with 9 independent variables?
@DataDemystified · 3 years ago
Hi Jennifer. Sure, it would basically be the same process, but you'll have a LOT of potential 2-way interactions: 9 choose 2 = 36 interaction terms. You'd just make an interaction term for every possible pairwise combination (a*b, a*c, a*d, etc.). Though I would STRONGLY caution you to correct for multiple comparisons if you do that (e.g. a Bonferroni correction).
@jenniferb1633 жыл бұрын
@@DataDemystified Thank you so much for replying, and yes I just watched your Bonferroni correction video that changed my results entirely. Could I do this for only the significant variables as I have run 2 multiple regressions and would that mean I need to do 72? as my dissertation is due tomorrow and my tutor told me to do an interaction plot. Only 3 variables were significant. Thank you again for your time.
@DataDemystified · 3 years ago
@@jenniferb163 You have to apply the correction to all tests run... however many that might be. And yes, if you ran 72 tests with no a priori (in advance) predictions, you apply a ×72 penalty.
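The counting in this thread (9 choose 2 = 36 pairwise interactions) and the matching Bonferroni threshold can be sketched in a few lines of Python (predictor names are placeholders):

```python
from itertools import combinations

predictors = [f"x{i}" for i in range(1, 10)]   # 9 IVs (names illustrative)
pairs = list(combinations(predictors, 2))
print(len(pairs))                               # 36 possible 2-way interactions

# Bonferroni: to keep the family-wise error rate at alpha = .05 across all
# 36 interaction tests, each individual test must clear alpha / 36.
alpha = 0.05
per_test = alpha / len(pairs)
print(f"per-test threshold: {per_test:.5f}")    # about 0.00139
```

With 72 tests (two regressions), the same logic gives a per-test threshold of alpha / 72.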