The Cramer-Rao Lower Bound ... MADE EASY!!!

2,293 views

Learn Statistics with Brian

1 day ago

Comments: 9
@QuantNovice 8 days ago
This video is gold for every stats student! Thanks a lot for this amazing content!
@ligandro 3 months ago
Thanks for uploading all this content. I'm about to begin my master's in data science and have been trying to grasp some of the math theory, which is hard for me coming from a CS background. Your videos make all these topics so simple to digest.
@santiagodm3483 3 months ago
Nice videos. I'm preparing for my master's now, so these will be quite useful; the connection between the CRLB and the standard error of MLE estimates makes this very nice.
@RoyalYoutube_PRO 2 months ago
Fantastic video... preparing for IIT JAM MS
@jayanthiSaibalaji 1 month ago
Many thanks 🙏
@ridwanwase7444 3 months ago
The Fisher information is the negative expected value of the second derivative of log L, so why do we multiply by n to get it?
@statswithbrian 3 months ago
I was assuming the L here is the likelihood of a single data point. In that case, you multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer already contains the n and you don't need to multiply at the end. The two methods are equivalent when the data are independent and identically distributed.
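For reference, here is a short derivation (not from the video) of why the per-observation Fisher information simply scales by n under iid sampling:

```latex
% Fisher information is additive over independent observations:
% with iid data, \log L_n(\theta) = \sum_{i=1}^n \log f(X_i;\theta), so
\begin{aligned}
I_n(\theta)
  &= -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\,\log L_n(\theta)\right]
   = -\,\mathbb{E}\!\left[\sum_{i=1}^n \frac{\partial^2}{\partial\theta^2}\,\log f(X_i;\theta)\right] \\
  &= \sum_{i=1}^n \left(-\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\,\log f(X_i;\theta)\right]\right)
   = n\,I_1(\theta).
\end{aligned}
```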
@ridwanwase7444 3 months ago
@statswithbrian Thanks for replying so quickly! I have another question: does the MLE of the population mean always attain the CRLB variance?
@statswithbrian 3 months ago
Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB for unbiased estimators, because the MLE is sometimes biased. For example, for a uniform distribution on [0, theta], the MLE of theta is biased, and the Fisher information is not even defined. My guess is that it does hold for families like the normal, binomial, and Poisson, where the sample mean is the MLE of the population mean. For a scale family like the exponential distribution, in the parameterization by the rate lambda (where the mean is 1/lambda), the MLE of lambda is biased, so it cannot meet the CRLB for unbiased estimators.
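To make the bias point concrete, here is a minimal simulation sketch (my illustration, not from the video) for the exponential rate parameter: the MLE lambda_hat = 1/xbar is biased upward, and its variance sits above the CRLB lambda^2/n that applies to unbiased estimators. The distribution, sample size, and seed are arbitrary choices for the example.

```python
import numpy as np

# Illustration (assumed setup, not from the video): X_1..X_n iid Exponential(rate=lam).
# The MLE of the rate is lam_hat = 1/xbar, which is biased: E[lam_hat] = n*lam/(n-1).
# For unbiased estimators the CRLB is lam**2/n; the biased MLE's variance exceeds it.

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 20, 200_000

# reps independent samples of size n; numpy parameterizes by scale = 1/rate
samples = rng.exponential(scale=1.0 / lam, size=(reps, n))
lam_hat = 1.0 / samples.mean(axis=1)  # MLE of the rate in each replication

crlb = lam**2 / n
print(f"mean of lam_hat ~ {lam_hat.mean():.4f} (true lam = {lam})")   # ~2.105, biased up
print(f"var of lam_hat  ~ {lam_hat.var():.4f} vs CRLB = {crlb:.4f}")  # ~0.246 vs 0.200
```

With these settings the simulation lands near the theoretical values: E[lam_hat] = n·lam/(n-1) ≈ 2.105 and Var(lam_hat) = n²·lam²/((n-1)²(n-2)) ≈ 0.246, above the CRLB of 0.200.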
The Rao-Blackwell Theorem Explained
15:12
Learn Statistics with Brian
376 views
Maximum Likelihood - Cramer Rao Lower Bound Intuition
8:00
Ben Lambert
130K views
The Method of Moments ... Made Easy!
9:02
Learn Statistics with Brian
13K views
Cramer Rao Inequality
22:02
Hopefully Helpful Mathematics Videos
9K views
method of moments in statistics
11:02
CONTENT-ACADEMY
371 views
Sufficient Statistics and the Factorization Theorem
15:19
Learn Statistics with Brian
6K views
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
6:12
Learn Statistics with Brian
11K views
The most important skill in statistics
13:35
Very Normal
318K views
Maximum Likelihood Estimation ... MADE EASY!!!
9:12
Learn Statistics with Brian
15K views
What are confidence intervals? Actually.
24:03
zedstatistics
144K views
Independent vs Mutually Exclusive Events ... MADE EASY!!!
8:32
Learn Statistics with Brian
1K views
How to choose an appropriate statistical test
18:36
TileStats
136K views