Fantastic. Thanks for adding another worked example besides the binomial. It's always really helpful to see how these broad frameworks and approaches work for different situations, i.e., here, different models.
@Stats4Everyone · 1 day ago
Thanks for the feedback. Glad you found this video to be helpful :)
@joelrosberg · 8 days ago
Thank you!! This was super helpful! One question: what would the equation look like for two proportions? Thank you!
@Computervirusworld · 10 days ago
thanks
@GbengaA-x8l · 10 days ago
Thank you ❤
@pippalin8166 · 11 days ago
you saved my life
@mikaboshi4569 · 12 days ago
Thank you, very good explanation :). Greetings from Germany!
@Odinsomniac · 13 days ago
I am a little bit confused because the intersection symbol is not used in some parts of the video.
@TIGEEstudio · 15 days ago
Thank you very much🙏🏾. This was really helpful
@Kwinnbujik · 17 days ago
Excellent proof. You are a savior
@computerpillar8285 · 19 days ago
Unique video, but only a few views and likes. Please keep going, don't stop! Also, could you make a video on Euler's number (e)? Once you do, please put the link in a reply!
@cuttingedgetechsongsmovies9662 · 21 days ago
Can't you solve this without using Excel? This isn't helpful for someone preparing for an exam.
@Stats4Everyone · 21 days ago
Here are 3 other videos on my channel that discuss the binomial distribution using a hand calculator and binomial tables:
kzbin.info/www/bejne/g6PPZoFqd7apqsUsi=YKkg3DWKarKjTcOr
kzbin.info/www/bejne/r37InWaefJ5qaNU
kzbin.info/www/bejne/b3vGkpx9oqd7nMU
Hope this is helpful. Good luck on your exam! :)
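For anyone checking the table values by hand, they can also be computed straight from the binomial pmf; a minimal R sketch, with n = 10, p = 0.3, k = 4 invented purely for illustration, not taken from the videos:

```r
# Binomial pmf by hand: P(X = k) = choose(n, k) * p^k * (1 - p)^(n - k)
n <- 10; p <- 0.3; k <- 4                   # invented example values
by_hand  <- choose(n, k) * p^k * (1 - p)^(n - k)
built_in <- dbinom(k, size = n, prob = p)   # R's built-in binomial pmf
table_p  <- pbinom(k, size = n, prob = p)   # P(X <= k), what binomial tables list
c(by_hand = by_hand, built_in = built_in, cumulative = table_p)
```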
@Nabekihassen · 23 days ago
Anyone 2025?
@level_io · 24 days ago
thanks!
@zubairhasany4092 · 24 days ago
How did you convert the integral into 3 parts?
@itexso · 25 days ago
great video nice it's so simple lol
@Stats4Everyone · 25 days ago
Thanks! Happy to hear that you found this video to be helpful :-)
@eliacampigotto2632 · 27 days ago
Why aren't you summing over a in the discrete case and integrating over a as a support in the continuous case?
@Stats4Everyone · 27 days ago
Good question. When finding an expected value, we always sum or integrate over the random variable; a is not a random variable, it is a constant. Here is another video that might be helpful: kzbin.info/www/bejne/e4DGfJuLqNZjZtksi=boUsfDkfrskbOq7P
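Spelled out, the constant a simply factors out of the sum or integral over the support of X (a restatement of the reply above, not an excerpt from the video):

```latex
E[aX] = \sum_{x} a\,x\,p_X(x) = a\sum_{x} x\,p_X(x) = a\,E[X],
\qquad
E[aX] = \int_{-\infty}^{\infty} a\,x\,f_X(x)\,dx = a\,E[X].
```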
@ДенисЛогвинов-з6е · 27 days ago
I know that the expectation is but a Lebesgue integral, yet I can't help but ask what happens in these simplified definitions of the expected value for discrete and absolutely continuous r.v.s when X is discrete and Y is continuous. What is E(X+Y), and what would be a logical explanation, in simple terms, of why the formula still holds?
@Stats4Everyone · 27 days ago
Good question. Yes, the formula will still hold if X is discrete and Y is continuous. For the logical explanation... let us think about an example: suppose X is the Head or Tail (0 or 1 - a discrete random variable) on a coin flip, and Y is the result of a random number generator between 0 and 1 (a continuous random variable). If we randomly sampled X's and Y's and then found the average value of their sum, it would not matter whether we first took the average of the Xs and then added it to the average of the Ys, or took the sum of each X and Y pair and then found the average of the sums - both would yield the same answer.
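As a quick check, here is a simulation sketch of exactly this coin-flip-plus-uniform example (the seed and sample size are arbitrary choices, not from the video):

```r
set.seed(1)                           # arbitrary seed, for reproducibility
n <- 1e6                              # arbitrary, large so averages stabilize
x <- rbinom(n, size = 1, prob = 0.5)  # discrete: fair coin, 0 or 1
y <- runif(n)                         # continuous: uniform on (0, 1)
mean(x + y)                           # average of the sums
mean(x) + mean(y)                     # sum of the averages -- same value
# Both estimate E(X + Y) = E(X) + E(Y) = 0.5 + 0.5 = 1
```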
@podverse01 · 27 days ago
Thank you very much!
@Stats4Everyone · 27 days ago
Glad you found this video to be helpful! :-)
@aysan7513 · 27 days ago
Thanks a lot for this content.
@JemalMohammed-z1h · 29 days ago
Thanks! I am from Ethiopia.
@MenbereMena-pr3ek · 1 month ago
Thanks a lot, it helps a lot ❤❤
@willsonperdigao2300 · 1 month ago
Excellent!
@lambdacalculus3385 · 1 month ago
One of the best stats channels on YouTube. Thank you very much for your effort! ❤
@Stats4Everyone · 1 month ago
Glad you are finding my content helpful :) thank you for the support!
@abebayehubirhanu · 1 month ago
Thank you ❤❤❤
@rounakrajak-25-viii-a52 · 1 month ago
❤ India, Jai Hind, nice concept
@dillonreichman4696 · 1 month ago
Hi, I think it is important to revise the common identities of matrix-vector differentiation. When I studied for my degree in statistics and we did linear models, this was something I did not really understand until now.
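For reference, these two standard matrix-vector differentiation identities do most of the work in the least-squares derivation (stated as general results, not as a quote from the video):

```latex
\frac{\partial}{\partial \mathbf{x}}\left(\mathbf{a}^{\top}\mathbf{x}\right) = \mathbf{a},
\qquad
\frac{\partial}{\partial \mathbf{x}}\left(\mathbf{x}^{\top}A\,\mathbf{x}\right)
= \left(A + A^{\top}\right)\mathbf{x}
= 2A\mathbf{x} \ \text{ if } A = A^{\top}.
```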
@LyndaLiu · 1 month ago
If X~N(10, 4), X bar = (X1+X2+X3)/3; what’s var (X bar - X3)? If I compute var (X bar) + Var (X3)=4/3+4=16/3. But if I simplify X bar - X3 to (X1+X2-2X3)/3, then the variance becomes (4+4+4*4)/9=8/3. How come they are different? Thanks for your help!
@Stats4Everyone · 1 month ago
Xbar and X3 are not independent. Therefore, the variance of Xbar - X3 is not equal to the variance of Xbar plus the variance of X3. Rather:

Var(Xbar - X3) = Var(Xbar) + Var(X3) - 2 Cov(Xbar, X3)

Now... to find Cov(Xbar, X3):

Cov(Xbar, X3) = Cov((1/n) sum(Xi), X3) = (1/n) sum(Cov(Xi, X3))

Since Xi is independent of X3 for all cases except when i = 3, Cov(Xi, X3) = 0 except for when i = 3. When i is 3, Cov(X3, X3) = Var(X3) = 4. Therefore:

Cov(Xbar, X3) = (1/n) * 4 = 4/3

Now we have:

Var(Xbar - X3) = Var(Xbar) + Var(X3) - 2 Cov(Xbar, X3) = 4/3 + 4 - 2*(4/3) = 4/3 + 12/3 - 8/3 = 8/3

Hopefully this helps. Thanks for the interesting problem!
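As a quick check, a simulation sketch confirms the 8/3 answer empirically (the seed and number of replications are arbitrary):

```r
set.seed(42)                          # arbitrary seed
reps <- 1e6                           # arbitrary number of replications
x1 <- rnorm(reps, mean = 10, sd = 2)  # X ~ N(10, 4): variance 4, so sd = 2
x2 <- rnorm(reps, mean = 10, sd = 2)
x3 <- rnorm(reps, mean = 10, sd = 2)
xbar <- (x1 + x2 + x3) / 3
var(xbar - x3)                        # approx 8/3 = 2.67, not 16/3
```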
@Stats4Everyone · 28 days ago
You may find this video to be helpful: kzbin.info/www/bejne/rmbVk39rmtmJbpo
@LyndaLiu · 26 days ago
@@Stats4Everyone thank you very much!
@mayday7675 · 1 month ago
Very helpful video. I only recently discovered that APY is not the same as monthly percentage, lol. Math is fun especially when it's about interest you earn. Thanks for the interesting video.
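The gap being described is compounding; a one-line R sketch, with the 12% nominal annual rate invented purely as an example:

```r
r <- 0.12                   # invented nominal annual rate, compounded monthly
apy <- (1 + r / 12)^12 - 1  # APY accounts for monthly compounding
apy                         # ~0.1268, i.e. 12.68% per year, not 12%
```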
@beula1985 · 1 month ago
Nice
@Wonderofu-hh8oi · 1 month ago
Thanks, today is my test!
@tunjiadewoye448 · 1 month ago
Wonderful proof. Really easy to follow along. You have really mastered the art of explanation
@Stats4Everyone · 1 month ago
Thanks so much! So happy to hear that you found this to be helpful
@papercl1pmaximizer · 1 month ago
When I calculate Cov(Y, Y-hat) = E[Y·Y-hat] - E[Y]E[Y-hat], I get (B_0 + B_1X)^2 - (B_0 + B_1X)^2 = 0. I don't know what I'm doing wrong.
@Stats4Everyone · 1 month ago
The mistake is in the calculation of E(Y_i * Yhat_i)... E(X)^2 is not the same thing as E(X^2). This is a very good question though, and something I can add a new video on soon. Thanks for the post. I will respond with more details regarding the calculation of the covariance between Y_i and Yhat_i.
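A sketch of the matrix-form result (a standard OLS fact, not necessarily the derivation the promised video will use): writing the fitted values as Yhat = HY, where H = X(X'X)^(-1)X' is the hat matrix,

```latex
\operatorname{Cov}(\mathbf{Y}, \hat{\mathbf{Y}})
= \operatorname{Cov}(\mathbf{Y}, H\mathbf{Y})
= \operatorname{Cov}(\mathbf{Y}, \mathbf{Y})\,H^{\top}
= \sigma^{2} H,
\qquad\text{so}\qquad
\operatorname{Cov}(Y_i, \hat{Y}_i) = \sigma^{2} h_{ii} \neq 0 \ \text{in general}.
```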
@ErrorNotFound-nl1sh · 1 month ago
Amazing video! This completely cleared up ANOVA for me, and the $ trick is SO useful. I needed it, thank you!!
@Stats4Everyone · 1 month ago
Awesome! Thanks for the feedback. I am happy to hear that you found this video to be helpful :-)
@mayankmishra7607 · 2 months ago
best channel for stats... thanks for posting
@Stats4Everyone · 1 month ago
Happy to hear that you are enjoying this content :-) Thanks!
@HamiltonHammy-g6w · 2 months ago
Brilliant! Thank you so much for the playlist so far. This video really helps link the prior, more general theory to a more concrete example. However, I was wondering whether you would also be showing how the variances of the betas are estimated for this model in this framework, and whether a similar video would be possible that illustrates this framework for a GLM where the link isn't the identity and the weights aren't 1, so it's clearer how those more flexible parts work when they're actually needed?
@Stats4Everyone · 1 month ago
Thanks so much for this comment!! The holidays are slowing me down a bit with adding more to this playlist, though my next few videos will be a Bernoulli distribution (yes/no, or 0/1, outcomes) example, where the weights will not be 1. I will also use R to provide an example of how to calculate the estimates of beta. Regarding estimating the variance of the beta estimates - this is a very good point - I have not read ahead yet in the Nelder paper to know if they cover this topic (I would hope that they do...). However, if they don't, I will definitely add a video at some point on this - estimating beta is not very useful unless we also have an idea about its variance.
@Stats4Everyone · 1 month ago
Here are the links to the Bernoulli example:
Step 1 (show the Bernoulli is from the exponential family): kzbin.info/www/bejne/bnyte5WboJh8bMk
Step 2 (find the link function and the weights and y for weighted least squares): kzbin.info/www/bejne/l5zCZJd5ZdV9m9U
Step 3 (wrap up and program everything in R to show that you get the same thing as R's glm function): kzbin.info/www/bejne/onmsfKKCiq2td80
@HamiltonHammy-g6w · 23 days ago
@@Stats4Everyone I hope you had a lovely holiday and a happy New Year! The new videos are fantastic. You're a stats star, especially for showing how to implement it in R. If it's possible to eventually add a video or videos on estimating the variances, that would be amazing. You could obviously use bootstrapping to estimate confidence intervals (or permutation methods to estimate p-values, if you really care about them), but it would be great to see where the analytical SEs come from.
@Stats4Everyone · 20 days ago
@@HamiltonHammy-g6w Yup. That is definitely on my radar as another topic to cover in this GLM playlist :)
@keltoumahmide3916 · 2 months ago
How do we calculate Significance F?
@ThowathDakBeliewYar-ds8kg · 2 months ago
This is an impactful video.
@EzraJeremiah-cl7ub · 2 months ago
I am impressed by you.
@yxsubhamgaming4501 · 2 months ago
Miss na au kn
@GetahunMulu-k3z · 2 months ago
The best teaching system; please continue.
@HamiltonHammy-g6w · 2 months ago
Fantastic videos and series as always. One quick question: why is it okay in this proof (and subsequent videos) to just use the log of the density function and not the "full" likelihood function? I understand how the density function can be used as a likelihood function for the case when you have just one observation/z-value, but the "full" likelihood function is based on the product of n PDFs and would be different? What am I not getting here? Thanks.
@Stats4Everyone · 2 months ago
Excellent question!!! I am happy someone asked about this! I made a video to respond: kzbin.info/www/bejne/fIPYd2l9dq51mrM. Please let me know if you have any follow-up questions.
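The one-line version of the resolution (a standard identity; the linked video may frame it differently) is that the full log-likelihood is a sum of per-observation log-densities, so anything derived from a single log f extends term by term:

```latex
\ell(\theta) = \log \prod_{i=1}^{n} f(x_i;\theta) = \sum_{i=1}^{n} \log f(x_i;\theta).
```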
@HamiltonHammy-g6w · 2 months ago
@@Stats4Everyone Thank you, that's very clear (pretty simple really, as you say). Hopefully anyone else like me who is not smart enough or is too lazy to work this out can now understand!
@Stats4Everyone · 2 months ago
@@HamiltonHammy-g6w Glad it makes sense now :-) It really is something that bothers me a little too... it's kinda lazy notation, and that's okay... most people, including myself, try to use lazy notation when possible.
@charleslevine9482 · 2 months ago
This was such a great video! Thank you!
@Brendan-jh1lu · 2 months ago
And very sexy voice too! 😊
@Brendan-jh1lu · 2 months ago
Very helpful video!
@lambdacalculus3385 · 2 months ago
Thank you, your tutorials are always helping me 😊
@burger_kinghorn · 2 months ago
What program are you using for the virtual blackboard?
@burger_kinghorn · 2 months ago
You might think of y = β0 + β1·x1 + β2·x2 + ... + e, but it's better to see it as x1·β1 + x2·β2 + ... + e. The *X* matrix is like our spreadsheet, so that order is necessary for the dimensions to line up in the matrix multiplication. It's a bunch of known constants acting as the coefficients in a system of equations. Similar to the matrix equation *Ax* = *b*, it's *Xβ* = *y*: *β* is the variable vector transformed by *X*. Regression is about a linear combination of the β's.

Given *Y* = *Xβ* + *e*, E(*Y* | *X*) = *Xβ* + 0. The error term averages out to 0, i.e. we regress back to the (conditional) mean of Y.

The product of a vector with its transpose collapses into the sum of its squared elements: *x*'*x* = Σ_i x_i². Variance is the average of squared deviations: (*x* - μ)'(*x* - μ) = Σ_i (x_i - μ)². Divide that by n for σ², by n - 1 for s². Similarly, Cov(x1, x2) = Σ_i (x_i1 - μ1)(x_i2 - μ2) / n, i.e. σ12 = (*x1* - μ1)'(*x2* - μ2) / n. Generalize it to (*X* - *M*)'(*X* - *M*) / n. If the variables were mean-centered first, their means are 0; therefore *M* = *0* and the covariance matrix is *X*'*X* divided by n or n - 1.
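A small R sketch of that last point, with the data matrix and its dimensions invented purely for illustration; after mean-centering, X'X divided by n - 1 reproduces the sample covariance matrix:

```r
set.seed(7)                                          # arbitrary seed
n <- 500                                             # invented sample size
X <- cbind(x1 = rnorm(n), x2 = rnorm(n, sd = 2))     # invented data matrix
Xc <- scale(X, center = TRUE, scale = FALSE)         # mean-center columns, so M = 0
manual <- t(Xc) %*% Xc / (n - 1)                     # X'X / (n - 1) on centered data
all.equal(manual, cov(X), check.attributes = FALSE)  # TRUE: matches R's cov()
```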