To me, you are the first person who teaches stats by telling a story; it feels like I am chatting with you rather than studying. Thank you so much ... Dustin? I love your videos; you're one of the best educators I have watched!
@ghaiszriki7912 · 6 months ago
As Einstein said: "If you can't explain it simply, you don't understand it well enough." Very clearly and simply explained, thanks a million. 💚
@louizakloppenborg5813 · 2 months ago
Oh my god, I graduated a year ago and remember finding this concept hard to understand. Now, one year later, I randomly stumbled on this video and understood it completely because of how simply you explained it. Impressive, thank you.
@gabewinterful · 6 months ago
I wish this video had been out four years ago when I started analyzing my PhD data, but I'm glad to see it before the defense so I have some more confidence in explaining the analysis I've done in simpler words 😊 thanks a lot!
@adityaaware9844 · a month ago
Best video I've found on the topic. Cool way to summarize and explain. I liked the summary at each point.
@Jake-nl1jm · 3 months ago
One suggestion for your lessons on random effects: clarify the difference between a random effect and an interaction. A full explanation may be too much to go into, but a disclaimer/warning could at least be valuable. You explained random effects very well, but an inexperienced person might follow the theoretical explanation, see the plots, and think, "Oh! I know how to do that: you run lm(y ~ x1*x2) instead of lm(y ~ x1 + x2)!" and then be in for some pain later. I remember having that misconception for a brief period when I was learning stats. Thank you for the content you produce; it is valuable and appreciated, excellent for learning and great for back-to-the-basics review.
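To make that distinction concrete, here is a sketch in R (the data frame `d`, outcome `y`, predictor `x1`, and cluster variable `g` are hypothetical names, not from the video):

```r
library(lme4)

# Interaction: a separate, independently estimated fixed slope of x1
# for every level of g (one free parameter per group, no pooling).
m_interaction <- lm(y ~ x1 * g, data = d)

# Random slope: group slopes are treated as draws from a common
# distribution; the model estimates their mean and variance
# (partial pooling), not one free slope per group.
m_mixed <- lmer(y ~ x1 + (1 + x1 | g), data = d)
```

The interaction model asks "is each group's slope different?"; the mixed model asks "how much do slopes vary around the average slope?"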
@miqueasgamero7270 · 15 days ago
Thank you Dwight K. Schrute.
@DDT-f8s · 2 months ago
Best stat video online so far ❤❤❤😊😊
@WeirdPatagonia · 6 months ago
Hello! Thank you for your video! Greetings from Chile :) That said, I have studied mixed models a bit and I still don't understand why someone would want a fixed intercept or a fixed slope. I know that if you assume the effect is always the same (like calorie consumption and weight gain), you could use a fixed slope. OK. But anyway, if you use random slopes in this situation, these slopes should be really similar, so it wouldn't make such a big difference, right? Why don't we just use random slopes and random intercepts all the time? If they are similar for each group, it will be OK, and if they are different for each group, great, we modeled it. Is there any advantage of a fixed slope over a random one?
@QuantPsych · 6 months ago
Yes, there's an advantage. You're estimating one fewer parameter, so you save a degree of freedom, your standard errors shrink, and the model is easier to estimate. If you can fix it, always fix it.
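The trade-off can be seen by comparing the two specifications in lme4 (a sketch; `d`, `y`, `x`, and the grouping factor `g` are hypothetical):

```r
library(lme4)

# Fixed slope: one slope shared by all groups, random intercepts only.
m_fixed <- lmer(y ~ x + (1 | g), data = d)

# Random slope: adds a slope variance and a slope-intercept
# correlation -- extra parameters that must be estimated.
m_random <- lmer(y ~ x + (1 + x | g), data = d)

# Likelihood-ratio test: is the extra slope variance worth it?
anova(m_fixed, m_random)
```

If the comparison shows essentially no slope variance, the simpler fixed-slope model is preferred for exactly the reasons above.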
@WeirdPatagonia · 6 months ago
@@QuantPsych Crazy. Thanks for your answer
@anonymeironikerin2839 · 2 months ago
Dude, you are my hero❤
@QuantPsych · 2 months ago
Nay, thou art my hero
@ast3362 · 6 months ago
But we do use fixed intercepts when we have categorical data modeled by dummy variables, right? 14:45
@absta1995 · 2 months ago
Great video, man! How do you calculate sample size and do a power analysis for these types of models?
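For mixed models, a common route is simulation-based power analysis, for instance with the `simr` package (a sketch under assumptions: `m` is a fitted lmer model and `g`, `x` are hypothetical variable names):

```r
library(simr)  # simulation-based power analysis for lme4 models

# m is a fitted model, e.g. lmer(y ~ x + (1 | g), data = d).
# Set the smallest effect size you care about detecting:
fixef(m)["x"] <- 0.3

# Estimate power for the fixed effect of x by simulation.
powerSim(m, test = fixed("x"), nsim = 200)

# Extend the data to more clusters to explore the needed sample size.
m_big <- extend(m, along = "g", n = 50)
powerCurve(m_big, along = "g", nsim = 200)
```

The idea is to simulate new data from the fitted (or modified) model many times and count how often the effect of interest comes out significant.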
@k00lkane · a month ago
Question: for random vs. fixed parameters (i.e., intercepts and slopes), is this something that is decided prior to analysis, or something that the analysis reveals? For instance, if you expect fixed slopes, do you fit a fixed-slope model, or is this revealed during the analysis?
@QuantPsych · 26 days ago
Either way. If you're doing a confirmatory analysis, it should be decided beforehand. If it's exploratory, it will probably be data-driven. I'm usually somewhere in the middle, so I start with a suspicion about whether it's fixed, then evaluate that statistically.
@qwerty11111122 · 5 months ago
12:58 You can't interpret a linear effect on its own when its square is significant in the model, right? Wouldn't this relate to fixing the intercept while allowing the slope to vary?
@QuantPsych · 5 months ago
Correct on the first question. The two (the linear and quadratic components) need to be interpreted together. I'm not following your second question.
@icupsy5830 · 6 months ago
Thanks for your fantastic videos! In some psychological studies, Simpson's paradox is often "solved" by adding an interaction term (X × cluster) in a GLM and then conducting separate GLMs in each cluster. Could you please help me clarify the differences between this method and HLM or MVM? Thanks!
@Lello991 · 6 months ago
An interaction term is different from a random effect on several levels.

First, they serve two different purposes. An interaction term is needed when you're primarily interested in checking whether the effect of your predictor X is different (or remains significant) for different clusters. A significant interaction tells you that the effect of X varies significantly across the cluster's levels. Typically, when you find a significant interaction, you don't discuss the main effect of X (it's biased by definition); instead, you proceed with what is called simple effect or simple slope analysis. Namely, you measure the effect of X at each level of your cluster. So, if you have 3 clusters, you end up with 3 parameters and significance levels, e.g., the effect of X for cluster 1 is b = 0.5, p < .001; for cluster 2 it is b = 0.2, p = .07; and so on.

Mixed-effects models don't do such a thing. They're not meant to check whether the effect of X varies across clusters, or at least they don't give you a significance level for it (you can test the significance of random effects using likelihood ratio tests or other methods to compare models with and without specific random effects, but that's a different thing). The extent to which the effect of X varies across clusters (its variability) is incorporated into the model's random structure. Mixed models estimate the average effect of X across all clusters while accounting for random variation in intercepts and slopes, which is far more informative than a GLM if you're interested in the main effect. Also, clusters are usually participant IDs, so they are far more numerous than the clusters you'd use in a GLM with an interaction term.

I hope this is helpful, and @Quant Psych approves =)
@charlieivarsson2080 · 5 months ago
Could you show how a mixed model is used to evaluate a pharmacological effect over time? Let's say a psychiatric drug measured at weeks 0, 3, 9, and 12. How do you tell if the difference is significant?
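One common way to set this up (a sketch; `trial`, `symptom`, `week`, `arm`, and `patient` are hypothetical names, not from the video) is to model the drug-by-time interaction with random intercepts per patient:

```r
library(lmerTest)  # lme4 plus Satterthwaite p values

# One row per patient per visit; week in {0, 3, 9, 12}; arm = drug vs placebo.
m <- lmer(symptom ~ week * arm + (1 | patient), data = trial)

# The week:arm term tests whether the trajectory over time
# differs between the drug and placebo arms.
anova(m)
```

A significant week-by-arm interaction is the usual evidence that the drug changes the outcome's trajectory relative to placebo; random slopes for week per patient could be added if patients plausibly change at different rates.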
@sjrigatti · 6 months ago
How is a mixed effects model with random slopes and intercepts different from just fitting 3 different linear models, one for each cluster?
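The difference is pooling, and it can be seen directly (a sketch; `d`, `y`, `x`, and the three-level cluster `g` are hypothetical). Separate models estimate each cluster's coefficients from that cluster's data alone; the mixed model shrinks the cluster estimates toward the overall mean and also yields a single population-level estimate:

```r
library(lme4)

# Three completely separate regressions, one per cluster:
fits_separate <- lmList(y ~ x | g, data = d)
coef(fits_separate)   # no pooling: each cluster uses only its own data

# One mixed model with random intercepts and slopes:
m <- lmer(y ~ x + (1 + x | g), data = d)
coef(m)$g             # partial pooling: cluster estimates shrunk toward...
fixef(m)              # ...the fixed (population-average) effect
```

Shrinkage matters most for small or noisy clusters, where the separate fits would overfit; the mixed model also gives you the average effect and its variance, which three independent fits do not.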
@luisa1551 · 5 months ago
I have a question: in microbiology we work with strains, which are clones and genetically identical within a strain; the same applies in cancer research when we work with specific cell lines. If I understood you right, the results are not independent if we use the same strain or the same cell line for our biological assays?
@olenapo4895 · 4 months ago
I don't think so. The readouts of every in vitro assay would be independent: they are continuous data (release of a cytokine, or a protein level) affected by many parameters of your experimental setup, even if you use the same cell line. In the current example, a survey, the responses are collected as scores, and there are social factors that make them dependent. Asking twins their opinions about Trump doesn't mean they can run a distance at the same pace.
@DDT-f8s · 2 months ago
I don't think so either. They would only be considered repeats if multiple cell lines were involved, e.g., if you take cancer cell lines from persons A, B, and C, then the data from each cell under cell line A would be duplicates.
@Salvador_Dali · 6 months ago
If you normalize the data to observe the relative change, I guess it makes sense to fix the intercept, right?
@nosaosawe3158 · 6 months ago
I don't think so. The normalized data would still take different intercepts for each covariate.
@LucaSubitoni · 6 months ago
Is it possible to fit a linear mixed-effects model with a binary predictor (e.g., a time factor: pre vs. post) and then compute the significance of this factor? I read about the Satterthwaite method, which can be used to estimate p values for the fixed-effect coefficients; is this correct?
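For reference, in R the `lmerTest` package implements Satterthwaite degrees of freedom for lmer fits (a sketch; the data frame `d` with `score`, a binary `time` factor, and `subject` are hypothetical names):

```r
library(lmerTest)  # loads lme4 and augments summary()/anova()

m <- lmer(score ~ time + (1 | subject), data = d)

summary(m)  # t tests with Satterthwaite degrees of freedom
anova(m)    # F test for the time factor, Satterthwaite by default
```

Kenward-Roger is a common alternative approximation; both address the fact that the denominator degrees of freedom are not exact in mixed models.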