That video is gold for every stats student! Thanks a lot for this amazing content!
@ligandro · 3 months ago
Thanks for uploading all this content. I am about to begin my masters in data science soon and I was trying to grasp some math theory which is hard for me coming from a CS Background. Your videos make it so simple to digest all these topics.
@santiagodm3483 · 3 months ago
Nice videos. I'm preparing for my masters now and this will be quite useful; the connection between the CRLB and the standard error of the MLE estimates makes this very nice.
@RoyalYoutube_PRO · 2 months ago
Fantastic video... preparing for IIT JAM MS
@jayanthiSaibalaji · a month ago
Many thanks 🙏
@ridwanwase7444 · 3 months ago
Fisher information is the negative of the expected value of the second derivative of log L, so why do we multiply by 'n' to get it?
@statswithbrian · 3 months ago
I was assuming the L here is the likelihood of a single data point. In that case, you just multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer will already contain the n and you don't have to multiply at the end. The two methods are equivalent when the data is independent and identically distributed.
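The additivity point in the reply above can be checked numerically. A minimal sketch, assuming a Poisson(λ) model with λ = 2 and n = 5 as illustrative values (none of these numbers come from the video): the second derivative of the single-observation log-likelihood is -x/λ², so I₁(λ) = E[X]/λ² = 1/λ, and the information in n iid observations is just n·I₁.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0   # true Poisson rate (illustrative choice)
n = 5       # number of iid observations

# Single-observation log-likelihood: log f(x; lam) = x*log(lam) - lam - log(x!)
# Second derivative wrt lam is -x / lam**2, so I_1 = E[X / lam**2] = 1 / lam.
x = rng.poisson(lam, size=200_000)
I1_mc = np.mean(x / lam**2)   # Monte Carlo estimate of I_1
print(I1_mc, 1 / lam)         # both approximately 0.5

# Log-likelihoods of independent observations add, so I_n = n * I_1.
print(n * I1_mc, n / lam)     # both approximately 2.5
```

Because the joint log-likelihood of iid data is a sum of n identical terms, taking the expectation of its second derivative produces n copies of the single-observation information, which is exactly the "multiply by n at the end" step.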
@ridwanwase7444 · 3 months ago
@@statswithbrian Thanks for replying so quickly! Another question: does the MLE of the population mean always achieve the CRLB variance?
@statswithbrian · 3 months ago
Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB for unbiased estimators, because the MLE is sometimes biased. For example, in a uniform distribution on [0, theta], the MLE is biased, and the Fisher information is not even defined. My guess is that this holds for some "location families", which the normal, binomial, and Poisson would all be. For a "scale family" like the exponential distribution, in the parameterization where the mean is 1/lambda, I do not believe the MLE meets the CRLB.
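The Uniform[0, theta] counterexample mentioned above is easy to see by simulation. A quick sketch, with theta = 3 and n = 10 chosen purely for illustration: the MLE is the sample maximum, whose expectation is n/(n+1) · theta, so it systematically underestimates theta.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 3.0   # true upper bound (illustrative choice)
n = 10        # sample size per replication

# 100,000 replications of n draws from Uniform(0, theta).
samples = rng.uniform(0, theta, size=(100_000, n))

# The MLE of theta is the sample maximum (the likelihood is
# maximized at the smallest theta consistent with the data).
mle = samples.max(axis=1)

# E[max] = n/(n+1) * theta = 2.727..., so the MLE is biased low.
print(mle.mean())
```

Since the support of the distribution depends on theta, the usual regularity conditions behind the Cramér-Rao bound fail here, which is why the Fisher information is not defined for this family.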