I've never taken a regular basic statistics course, and it literally takes me a day to fully understand one lecture video. But, as the instructor said, I feel much smarter after taking this lecture.
@kevinliu5299 · 3 years ago
I'd like to say this is the best video I've ever seen. The instructor's thinking is so clear that he can tie all the critical notions together and paint vivid pictures for us in just a few words.
@seulebrg · 3 years ago
Method of Moments starts at 32:36
@syifaamustafa954 · 3 years ago
Thank you!
@salmamorsi3003 · 3 years ago
Thanks
@jaspreetsingh-nr6gr · a year ago
At 19:20, the dotted curve represents our ESTIMATOR of KL(theta, theta*), whereas the solid line is the actual KL(theta, theta*); the minimum points of the estimator and of the actual KL divergence are theta and theta*, respectively. Can you guys help me verify that I understood correctly? Is the dotted line something else, or did I interpret the solid line incorrectly? Please help me out here.
@x.tzhang7629 · a year ago
Yes, that is what I understood as well. The point of him drawing these two lines was basically to illustrate that if the curve has a very flat base, then even if you somehow manage to find the minimum of the estimator, there is still a chance that you end up pretty far away from the actual parameter theta*.
@Marteenez_ · 2 years ago
At 28:50, why would there be a square root of 2 pi there? I don't get the significance of what he is saying about there being no fudge factors and this being the true asymptotic variance. Why would there be any of that?
@adamzielinski2848 · 4 years ago
At 41:23 he says that it's actually enough to look only at terms of the form X^k. Why is that enough?
@owenmireles9615 · 4 years ago
Hi, Adam. I hope this answer suits you well. The reason terms of the form X^k suffice is "linearity": taking an average is a linear operation, so constants can be pulled out, for the same reason constants can "escape" an integral. If E is the expectation and you have a polynomial a_0 + a_1 X + a_2 X^2 + ... + a_n X^n, then E(a_0 + a_1 X + a_2 X^2 + ... + a_n X^n) = a_0 + a_1 E(X) + a_2 E(X^2) + ... + a_n E(X^n), so knowing the moments E(X^k) is enough.
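If it helps, here's a minimal numerical sketch of that linearity (my own illustration, not from the lecture; the distribution and the polynomial coefficients are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)  # any distribution works here

# Arbitrary polynomial p(X) = 1 + 3 X + 2 X^2.
a0, a1, a2 = 1.0, 3.0, 2.0

# Expectation of the polynomial, estimated directly from the sample...
lhs = np.mean(a0 + a1 * x + a2 * x**2)

# ...equals the same polynomial evaluated at the estimated raw moments
# E(X) and E(X^2), because taking an average is a linear operation.
rhs = a0 + a1 * np.mean(x) + a2 * np.mean(x**2)

print(lhs, rhs)  # identical up to floating-point rounding
```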
@adamzielinski2848 · 4 years ago
@@owenmireles9615 Ah that's right indeed, thank you!
@jaspreetsingh-nr6gr · a year ago
@@owenmireles9615 Could you also take a look at my question above about 19:20, on the dotted (estimator) versus solid (actual) KL curves? Please help me out here.
@owenmireles9615 · a year ago
@@jaspreetsingh-nr6gr Hi, Jaspreet. Your interpretation seems correct. I'll just emphasize some parts which I think weren't covered in as much detail in the lecture. That's right, the dotted line represents the estimator of the KL divergence. However, the relationship between theta and theta* is more subtle; there's a bit more going on. Throughout the video, they mention that theta* is the true parameter you're trying to find. To do this, you'd like to minimize a function, namely f(theta) = KL(P_theta*, P_theta): in words, you want to find the parameter theta that is "closest" (under KL divergence) to theta*. The graph of this f is the solid line in the video. If you had perfect information, then obviously theta* is that minimizer. However, under real-world conditions you never have perfect data, so you have to resort to an approximation, Hat(KL). What you're actually minimizing is g(theta) = Hat(KL)(P_theta*, P_theta), and the graph of this g is the dotted line in the video.
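Here's a minimal sketch of those two curves for one concrete model; the Bernoulli choice and theta* = 0.4 are my own, not the lecture's example:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_star = 0.4                        # true parameter
x = rng.binomial(1, theta_star, 200)    # imperfect, finite data
thetas = np.linspace(0.01, 0.99, 981)   # candidate parameters

# Solid line: the true KL(P_theta*, P_theta) for a Bernoulli model,
# minimized exactly at theta*.
kl = (theta_star * np.log(theta_star / thetas)
      + (1 - theta_star) * np.log((1 - theta_star) / (1 - thetas)))

# Dotted line: the estimator Hat(KL). Up to an additive constant that
# does not depend on theta, it is minus the average log-likelihood, so
# minimizing it is exactly maximum likelihood estimation.
xbar = x.mean()
kl_hat = -(xbar * np.log(thetas) + (1 - xbar) * np.log(1 - thetas))

print("argmin of true KL:", thetas[np.argmin(kl)])       # theta* = 0.4
print("argmin of Hat(KL):", thetas[np.argmin(kl_hat)])   # ~ xbar, the MLE
```

With only 200 samples the two minimizers are close but generally not equal, which is the gap the two curves in the video illustrate.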
@jaspreetsingh-nr6gr · a year ago
@@owenmireles9615 Understood: using the data to form sample averages, the guarantees given by the LLN (and its extension to continuous functions of averages) ensure that Hat(KL) reasonably approximates the true KL divergence. Thanks Owen, I'll ping you again if I get stuck on subsequent lectures.
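For anyone else following the thread, a tiny sketch of that LLN guarantee (the distribution and sample sizes are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=100_000)  # true expectation is 2.0

# The running sample mean settles near the true expectation (LLN);
# this is what lets Hat(KL), built from sample averages, track KL.
for n in (10, 100, 1_000, 100_000):
    print(n, x[:n].mean())
```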
@ojichimezie8627 · 2 months ago
I need the full lectures on this subject
@brandomiranda6703 · 3 years ago
How does his theorem at 30:55 mean that the MLE is just going to be an average?
@brandomiranda6703 · 3 years ago
How is the Fisher information used in modern machine learning, especially in practice?
@visualAnalyticsVA · 4 years ago
46:29 The next-to-last row of the matrix on the left side should be x_1^(r-1), x_2^(r-1), etc., instead of r-1.
@danielyin3043 · 6 years ago
17:10 The word is his name, Rigollet, in French.
@ogusqiu6926 · a year ago
22:50
@joyprokash4013 · 2 years ago
Thank you very much.
@brandomiranda6703 · 3 years ago
It would have been nice to note in the description or title that this lecture focuses on the Fisher information (matrix), to make it easier to search. I honestly don't know how or why I found this, especially since it was at the bottom of my search results. Relevant MIT videos should be at the top.
@Noah-jz3gt · a year ago
41:04 - a moment is the expectation of a power of X.
@MrCraber · 7 years ago
The Fisher information proof is awesome!
@chtibareda · 3 years ago
What does the support of P_theta mean, please?
@not_amanullah · 26 days ago
thanks ♥️🤍
@edulgl · 6 years ago
This is way too advanced for me. I can understand the calculus, but when he starts talking about convergence in probability and in distribution, I get really lost. Can anyone point me to a book where I can get a better understanding of these topics of inference and convergence?
@Harihar_Patel · 6 years ago
asymptotic theory?
@conorigoe1213 · 6 years ago
www.stat.cmu.edu/~siva/705/lec4.pdf
www.stat.cmu.edu/~siva/705/lec5.pdf
www.stat.cmu.edu/~siva/705/lec6.pdf
I found these helpful!
@SrEstroncio · 6 years ago
Try Wasserman's "All of Statistics"; it's pretty concise and straightforward, and designed for people coming in from other fields.
@edmonda.9748 · 5 years ago
@@SrEstroncio So true, I was gonna say the same thing. It explains these topics very well and in detail.
@gouravbhattacharya2694 · 3 years ago
I now have a clear idea of Fisher information.
@pranishramteke7642 · 4 years ago
That was a Harry Potter on a broom entry!
@yd_ · 2 years ago
What a doozy. Great lecture.
@rainywusc3051 · 4 years ago
Damn, it's hard.
@caunesandrew1476 · 4 years ago
He's so bad at cleaning the board omg
@imtryinghere1 · 4 years ago
He has a broken leg and MIT has staff that come in and clean after each lecture.
@HojuneKim914 · 3 years ago
@@imtryinghere1 I honestly think it's more the eraser than his lack of skill
@saubaral · 4 years ago
I hate it when teachers are like, "Who doesn't know this? Go and read about it." LOL