I'm so grateful for all the videos you make that inspire our curiosity!
@statswithbrian · 24 days ago
Thank you! :)
@JuhiMaurya-ym3ud · 4 days ago
Perfect and easiest explanation on YouTube... thank you so much, sir, it is really helpful.
@chrisolande1061 · a month ago
Exactly the kind of video I was looking for. Perfect explanation.
@QuantNovice · 2 months ago
This video is gold for every stats student! Thanks a lot for this amazing content!
@santiagodm3483 · 5 months ago
Nice videos. I'm preparing for my master's now, and this will be quite useful; the connection between the CRLB and the standard error of the MLE estimates makes this very nice.
@RoyalYoutube_PRO · 4 months ago
Fantastic video... preparing for IIT JAM MS
@jayjain1033 · 23 days ago
So good! I didn't get it in class at all.
@ligandro · 4 months ago
Thanks for uploading all this content. I am about to begin my master's in data science soon, and I was trying to grasp some math theory, which is hard for me coming from a CS background. Your videos make all these topics so simple to digest.
@LAQ24 · a month ago
Brilliant!
@swatigoel5387 · a month ago
Super helpful video! Thank you :)
@jayanthiSaibalaji · 3 months ago
Many thanks 🙏
@ridwanwase7444 · 4 months ago
Fisher information is the negative expected value of the second derivative of log L, so why do we multiply by n to get it?
@statswithbrian · 4 months ago
I was assuming the L here is the likelihood of a single data point. In that case, you just multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer already contains the n and you don't have to multiply at the end. The two methods are equivalent when the data is independent and identically distributed.
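A minimal Monte Carlo sketch of this point (my own illustration, not from the video), using i.i.d. Bernoulli(p) data: the single-observation information is 1/(p(1-p)), and the estimated E[-d²/dp² log L] over all n observations comes out to roughly n times that.

```python
# Illustration only (not from the video): for i.i.d. Bernoulli(p) data,
# the information in n observations is n times the single-observation
# information 1/(p(1-p)). Estimate E[-d^2/dp^2 log L] by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 50, 100_000

x = rng.binomial(1, p, size=(reps, n))
# For one observation, -d^2/dp^2 log f(x; p) = x/p^2 + (1-x)/(1-p)^2.
neg_hessian = (x / p**2 + (1 - x) / (1 - p) ** 2).sum(axis=1)

print("Monte Carlo total information:", neg_hessian.mean())  # ~238.1
print("n * single-observation info: ", n / (p * (1 - p)))    # 238.095...
```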
@ridwanwase7444 · 4 months ago
@statswithbrian Thanks for replying so quickly! I have another question: does the MLE of the population mean always achieve the CRLB variance?
@statswithbrian · 4 months ago
Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB for unbiased estimators, because the MLE is sometimes biased. For example, for a uniform distribution on [0, theta], the MLE is biased, and the Fisher information is not even defined. My guess is that this holds for some "location families", which the normal, binomial, and Poisson would all be. For a "scale family" like the exponential distribution, in the parameterization where the mean is 1/lambda, I do not believe the MLE meets the CRLB.
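To make the Uniform[0, theta] point concrete, here is a quick simulation (my own sketch, not from the video): the MLE max(X_i) has expectation n·theta/(n+1), so it is biased downward, and the unbiased-estimator version of the CRLB doesn't apply to it.

```python
# Illustration only (not from the video): for Uniform[0, theta], the MLE
# max(X_i) is biased downward, E[max] = n*theta/(n+1), and Fisher
# information is undefined because the support depends on theta.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 200_000

samples = rng.uniform(0, theta, size=(reps, n))
mle = samples.max(axis=1)

print("simulated E[MLE]:", mle.mean())               # ~1.818, below theta = 2
print("theory n*theta/(n+1):", n * theta / (n + 1))  # 1.8181...
```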