The Cramer-Rao Lower Bound ... MADE EASY!!!

5,543 views

Learn Statistics with Brian

1 day ago

Comments: 16
@huanranchen 2 months ago
I'm so grateful for all the videos you make that inspire our curiosity!
@statswithbrian 2 months ago
Thank you! :)
@JuhiMaurya-ym3ud 1 month ago
Perfect and easiest explanation on YouTube... thank you so much, sir, it is really helpful.
@chrisolande1061 2 months ago
Exactly the kind of video I was looking for. Perfect explanation.
@QuantNovice 3 months ago
This video is gold for every stats student! Thanks a lot for this amazing content!
@santiagodm3483 6 months ago
Nice videos. I'm preparing for my master's now, and they will be quite useful; the connection between the CRLB and the standard error of the MLE estimates makes this very nice.
@ligandro 6 months ago
Thanks for uploading all this content. I'm about to begin my master's in data science soon, and I was trying to grasp some of the math theory, which is hard for me coming from a CS background. Your videos make all these topics so simple to digest.
@swatigoel5387 2 months ago
Super helpful video! Thank you :)
@RoyalYoutube_PRO 5 months ago
Fantastic video... preparing for IIT JAM MS
@jayanthiSaibalaji 4 months ago
Many thanks 🙏
@LAQ24 2 months ago
Brilliant!
@ridwanwase7444 6 months ago
The Fisher information is the negative of the expected value of the second derivative of log L, so why do we multiply by n to get it?
@statswithbrian 6 months ago
I was assuming the L here is the likelihood of a single data point. In that case, you multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer will already contain the n, and you don't have to multiply at the end. The two methods are equivalent when the data are independent and identically distributed.
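To make that reply concrete, here is a minimal Python sketch of the additivity it describes. This is not from the video; the Bernoulli(p) model, the parameter values, and the seed are all illustrative choices. It checks by Monte Carlo that the information in n i.i.d. observations is n times the single-observation information:

# Numerical check (illustrative example, not from the video): for i.i.d.
# data the Fisher information is additive, so I_n(p) = n * I_1(p).
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 10, 200_000

def neg_second_deriv_loglik(x, p):
    # -(d^2/dp^2) log f(x; p) for one Bernoulli observation, where
    # log f(x; p) = x*log(p) + (1 - x)*log(1 - p)
    return x / p**2 + (1 - x) / (1 - p)**2

x = rng.binomial(1, p, size=(reps, n))

# Information in one observation vs. the sum over all n observations
I1_hat = neg_second_deriv_loglik(x[:, 0], p).mean()
In_hat = neg_second_deriv_loglik(x, p).sum(axis=1).mean()

print(f"I_1(p) estimate: {I1_hat:.3f}   theory: {1 / (p * (1 - p)):.3f}")
print(f"I_n(p) estimate: {In_hat:.3f}   theory: {n / (p * (1 - p)):.3f}")

With these settings the two estimates should land near 1/(p(1-p)) ≈ 4.76 and n/(p(1-p)) ≈ 47.6, i.e. exactly a factor of n apart.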
@ridwanwase7444 6 months ago
@statswithbrian Thanks for replying so quickly! I have another question: does the MLE of the population mean always guarantee that it will have the CRLB variance?
@statswithbrian 6 months ago
Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB for unbiased estimators, because the MLE is sometimes biased. For example, for a uniform distribution on [0, theta], the MLE is biased, and the Fisher information is not even defined. My guess is that this holds for some "location families", which the normal, binomial, and Poisson would all be. For a "scale family" like the exponential distribution, in the parameterization where the mean is 1/lambda, I do not believe the MLE meets the CRLB.
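The uniform example in that reply is easy to check by simulation. The following sketch (theta, n, and the seed are illustrative choices, not from the thread) shows that the MLE max(X_1, ..., X_n) is biased low, which is why the unbiased-estimator form of the CRLB does not even apply to it:

# Simulation of the Uniform(0, theta) example above (parameters are
# arbitrary illustrative choices): the MLE max(X_i) is biased low,
# with E[max] = n * theta / (n + 1).
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 20, 100_000

x = rng.uniform(0, theta, size=(reps, n))
mle = x.max(axis=1)  # the MLE of theta for Uniform(0, theta)

print(f"mean of MLE: {mle.mean():.4f}   theory: {n * theta / (n + 1):.4f}")
print(f"true theta:  {theta:.4f}  (the MLE underestimates on average)")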
@jayjain1033 2 months ago
So good! I didn't get it in class at all.
The Rao-Blackwell Theorem Explained
15:12
Learn Statistics with Brian
2.4K views
Probability vs. Likelihood ... MADE EASY!!!
7:31
Learn Statistics with Brian
41K views
Maximum Likelihood - Cramer Rao Lower Bound Intuition
8:00
Ben Lambert
132K views
Cramer Rao Inequality
22:02
Hopefully Helpful Mathematics Videos
9K views
The Cramer-Rao Inequality
10:15
Mike, the Mathematician
1.1K views
Maximum Likelihood Estimation ... MADE EASY!!!
9:12
Learn Statistics with Brian
30K views
Sufficient Statistics and the Factorization Theorem
15:19
Learn Statistics with Brian
11K views
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
6:12
Learn Statistics with Brian
18K views