Maximum Likelihood estimation - an introduction part 2

261,882 views

Ben Lambert


Comments: 65
@Yustiks · 6 years ago
There is a mistake at 6:38: the left part should be multiplied by N.
@albertogutierrez346 · 4 years ago
I can't find the words to thank you, Ben; you have no idea how many times your videos have saved me throughout graduate school. The simplicity and detail with which you explain things are priceless! I owe you big time!! Best wishes from Mexico.
@SatyaRanjanSahoowmu · 9 years ago
I think the last equation should have log(p)*N*Xbar instead of just log(p)*Xbar.
@Partho2525 · 10 years ago
You are really, really good... I never understood this concept this clearly before.
@SpartacanUsuals · 10 years ago
Hi Prektesh, glad to hear it was useful! Best, Ben
@sami-samim · 8 years ago
I love the way you recap previous lectures.
@internetscheisse · 8 years ago
Thanks a lot for this helpful video! But I still have a question: in the last line, shouldn't it be N*mean(x) instead of just mean(x)?
@SpartacanUsuals · 8 years ago
Hi, thanks for your message - yep, you are right - there should be an 'N' before the last mean(x). My apologies for the confusion! Best, Ben
@YawarSohail · 7 years ago
Oh, I was wondering what happened to the N. Glad someone asked.
@abokbeer · 8 years ago
You are my statistics hero. I shared this with all my colleagues.
@horsepowerfactorytx · 11 years ago
I want to thank you for such a great explanation. My professor does not teach half as well as you. Cheers.
@jacquelineoverton8223 · 8 years ago
Beautiful explanation of log likelihood! Thank you. This is tremendously helpful!
@zeeshan008x52 · 5 years ago
Rockstar of statistics. Your teaching style is very good, sir.
@notyouraveragecat97 · a year ago
This explanation is beyond amazing. Thank you so much.
@elenae1366 · 5 years ago
Great explanation of ML! This whole playlist is very helpful for understanding the intuition behind these concepts, especially since you show the mathematical derivations clearly. Thank you for posting!
@tsasa192 · 10 years ago
Watched many videos and was so confused. Got it now. Yours is the best!
@N.E.Olsson · 8 years ago
Thanks Ben! Lecturers tend to make it more complicated than it is when they skip this kind of thorough explanation. Big thanks, it is very clear now! // Niklas
@Simbwhaha · 6 years ago
In my eyes, you have obtained “hero status”!
@charlesledesma305 · 4 years ago
Excellent explanation!
@ArtisticContingent · 8 years ago
The audio needs boosting, but this was a very helpful video :) Thanks for making it!
@kikikikikiki7859 · 6 years ago
This is really helpful and it really makes sense to me!
@akjagsdfiausd · 10 years ago
In the end shouldn't it be log(p)Nx(bar) + log(1-p)(N - Nx(bar)) rather than log(p)x(bar) + log(1-p)(N - Nx(bar))? Great video, nonetheless!
@zangasenor · 10 years ago
The uploader corrected the mistake through annotations at 6:35
@kamikaze9271 · 7 years ago
Yeah... since the background is black, it's kind of hard to see the annotation.
@indragesink · 9 years ago
You can also rewrite L as p^(sum from i=1 to n of xi) * (1-p)^(n - sum from i=1 to n of xi), which seems (a lot) easier, but I guess you'll still need the product rule because of the p and (1-p) in the bases.
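The compact form in the comment above is a quick one to check numerically. A minimal sketch (the 0/1 sample below is made up, not from the video): both the product over observations and the collapsed p^(Σxᵢ)·(1−p)^(n−Σxᵢ) form give the same likelihood, and both peak at the sample mean.

```python
# Hypothetical 0/1 sample (e.g. 1 = male); illustrative only, not from the video.
x = [1, 0, 1, 1, 0, 1, 0, 1]
n = len(x)
s = sum(x)  # sum of the x_i, i.e. the number of ones

def likelihood_product(p):
    """L(p) as the product over individual Bernoulli observations."""
    out = 1.0
    for xi in x:
        out *= p**xi * (1 - p)**(1 - xi)
    return out

def likelihood_compact(p):
    """The same L(p), rewritten as p^(sum x_i) * (1-p)^(n - sum x_i)."""
    return p**s * (1 - p)**(n - s)

# The two forms agree, and a grid search puts the maximum at p = sample mean.
p_grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(p_grid, key=likelihood_compact)
print(abs(likelihood_product(0.3) - likelihood_compact(0.3)) < 1e-12)  # True
print(p_hat, s / n)
```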
@SC-hu2rt · 7 years ago
Thanks a lot for your explanation! Really helpful and encouraging
@chriskiguongo138 · 8 years ago
Saved me on the night before my statistics exam!
@zarakkhan72 · 10 years ago
A very nice explanation. Understood it completely except for the final line. The same point raised by "dlambriex" in the comments is my question as well.
@SpartacanUsuals · 10 years ago
Thanks for your message. Yes, you were correct. That was a mistake. I have now added a note to the video. Best, Ben
@sardaou3944 · 3 years ago
Is it possible, sir, to explain in more detail why taking the log and then differentiating leads to the same conclusion? Thanks in advance.
@alandubackupchannel5201 · 8 years ago
Have you considered the case where X1=M, X2=F, but your sample is X1=F, X2=M? The samples are the same, but the likelihood function L doesn't take that into account. Sorry if I'm wrong; I have just begun learning this. So, the likelihood is just the chance that you get a particular sample (where the ordering *does* matter)?
@ScottEdwards2000 · 3 years ago
I was thinking the same thing! Yeah, it seems like this is calculating the probability of getting this sample in this exact order of M's and F's. Is that right?
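The thread above is right that the likelihood as written is for one specific ordering. But including the binomial coefficient only multiplies L(p) by a constant that does not depend on p, so the maximiser is unchanged. A quick numeric sketch (N and k are made-up numbers):

```python
from math import comb

# Hypothetical data: N = 10 draws with k = 6 "males" (ones).
N, k = 10, 6

def ordered_likelihood(p):
    """Probability of one particular ordered sequence with k ones out of N."""
    return p**k * (1 - p)**(N - k)

def binomial_pmf(p):
    """Probability of observing k ones in any order: comb(N, k) * ordered."""
    return comb(N, k) * ordered_likelihood(p)

# comb(N, k) does not involve p, so both functions peak at the same p-hat = k/N.
grid = [i / 1000 for i in range(1, 1000)]
print(max(grid, key=ordered_likelihood) == max(grid, key=binomial_pmf))  # True
```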
@rosamaytapalacios · 9 years ago
Thanks for the explanation. I have a question: in which cases is the negative log used?
@dheerajsharma5279 · 7 years ago
Hi Ben, amazing video. Can I please ask if that's N*X(bar) at the end?
@carolinaxlopez7495 · 10 years ago
Thank you so much for your great videos! I really loved them all! A very clear and simple way to grasp complicated stuff :)
@abhilamsal7185 · 7 years ago
Setting the first derivative to zero might give a maximum or a minimum. So, in my opinion, further steps are required, like checking that the second derivative is less than zero.
@comvnche · 3 years ago
Or it could even be a local max or min.
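The commenters above are right that the first-order condition alone doesn't distinguish a max from a min. For the Bernoulli log-likelihood, though, the second derivative is negative everywhere on (0, 1), so the stationary point is the global maximum. A minimal check (N and x̄ are made-up numbers):

```python
# For l(p) = N*xbar*log(p) + N*(1-xbar)*log(1-p), the second derivative is
#   l''(p) = -N*xbar/p**2 - N*(1-xbar)/(1-p)**2,
# which is negative for every p in (0, 1) whenever 0 < xbar < 1,
# so the stationary point at p = xbar is a maximum (the function is concave).
N, xbar = 20, 0.35  # hypothetical sample size and sample mean

def d2_loglik(p):
    return -N * xbar / p**2 - N * (1 - xbar) / (1 - p)**2

print(all(d2_loglik(i / 100) < 0 for i in range(1, 100)))  # True
```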
@danielbhc · 6 years ago
Thanks a lot! I'm just a little confused. p-hat would be an estimator of the proportion of males in the population. But for a more general MLE understanding, is p-hat just the parameter that makes the sample data fit the PDF f(x|p), where p is the unknown parameter? So, for example, could you do MLE for a Poisson distribution where the unknown parameter p would be the mean lambda? Could a Poisson MLE still be described by f(x|p)? Thanks!
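To the Poisson question above: yes, the same recipe applies to any parametric family — write down the likelihood from the pmf/pdf, take logs, and differentiate. For i.i.d. Poisson counts the MLE of lambda works out to the sample mean. A rough sketch with made-up count data (not from the video):

```python
import math

# Hypothetical Poisson count data; illustrative only.
x = [2, 0, 3, 1, 4, 2, 1]
n = len(x)

def poisson_loglik(lam):
    """log L(lambda | x) for i.i.d. Poisson observations."""
    return sum(xi * math.log(lam) - lam - math.log(math.factorial(xi)) for xi in x)

# Setting d/d(lambda) = sum(x)/lambda - n = 0 gives lambda-hat = sample mean;
# a simple grid search agrees.
grid = [i / 1000 for i in range(1, 10000)]
lam_hat = max(grid, key=poisson_loglik)
print(lam_hat, sum(x) / n)
```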
@lmgpc · 6 years ago
You are great! Really helpful.
@ashankavindu2409 · 3 years ago
Thank you very much, sir ♥
@dlambriex · 11 years ago
First of all, thank you for your videos! They have helped a lot with studying for my econometrics courses! But I have a question regarding the last line in your last simplification. You state that the summation of Xi is equal to N * Xaverage (in yellow on the right). Yet when you erase the sum of Xi in the formula, you only write Xaverage and not N * Xaverage, while for the second summation you do simplify the sum of (1-Xi) to N(1-Xaverage). Could you please confirm that this should in fact be N * Xaverage? Thanks ever so much in advance, and keep up the good work; it is really a lifesaver for me and my fellow students in the Netherlands!
@zarakkhan72 · 10 years ago
Yup, I also didn't get that part.
@SpartacanUsuals · 10 years ago
Hi, thanks for your messages and kind words. Yes, it should be N times Xaverage; my apologies for the mistake. I shall add a note to the video now. Best, Ben
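For later readers of this thread, the corrected final step can be sketched as follows (same notation as the video, with the missing N restored):

```latex
\log L(p) = \sum_{i=1}^{N} x_i \log p + \sum_{i=1}^{N} (1 - x_i)\log(1 - p)
          = N\bar{x}\,\log p + N(1 - \bar{x})\,\log(1 - p),
\quad \text{since } \sum_{i=1}^{N} x_i = N\bar{x}.

\frac{d \log L}{dp} = \frac{N\bar{x}}{p} - \frac{N(1 - \bar{x})}{1 - p} = 0
\quad \Longrightarrow \quad \hat{p} = \bar{x}.
```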
@guilhermearaujo5868 · 4 years ago
But how can I be sure that I am maximizing the function? The turning point could be a maximum or a minimum, no? Does the likelihood function not have a minimum (I imagine it does, and it's 0), or is its minimum not a turning point?
@regularviewer1682 · 6 years ago
How do you get N(1-xbar) at the very end? I can't figure it out for the life of me, even though I know it's relatively simple!
@igorrizhyi5572 · 6 years ago
I didn't get why we can so easily wrap all of this in a log without worrying that it becomes a completely different expression. I see how it makes our calculations much easier, but do you have some sources where I can read about this log transformation? Thanks.
@igorrizhyi5572 · 6 years ago
OK, I think I got it. If somebody gets stuck on the same problem understanding this material, I can recommend this URL: en.wikipedia.org/wiki/Monotonic_function
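The Wikipedia pointer above is the key idea: the log is strictly increasing, so L(p) and log L(p) attain their maxima at the same p, even though they are different functions. A quick numeric check with made-up counts (s ones out of n Bernoulli draws):

```python
import math

# Hypothetical Bernoulli counts; illustrative only.
n, s = 12, 9

def L(p):
    """Likelihood: p^s * (1-p)^(n-s)."""
    return p**s * (1 - p)**(n - s)

def logL(p):
    """Log-likelihood of the same data."""
    return s * math.log(p) + (n - s) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
# Because log is strictly increasing, both functions peak at the same point.
print(max(grid, key=L) == max(grid, key=logL))  # True
```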
@jeongsungmin2023 · a year ago
At 6:50, shouldn't the sum of the Xi equal N*(X bar)?
@adelinesou8899 · 5 years ago
This is so helpful thank you!
@leojboby · 7 years ago
If log is always increasing, when is the derivative 0?
@chandnibhudia624 · 10 years ago
You are amazing!!
@SpartacanUsuals · 10 years ago
Hi, thanks for your message and kind words. Glad to hear it was helpful! Best, Ben
@khalidmzd · 10 years ago
Excellent
@SpartacanUsuals · 10 years ago
Hi, thanks for your message, and kind words. Glad to hear that it was useful. Best, Ben
@anthonymatovu6169 · 7 years ago
Thank you very much.
@endeshawhabte6667 · 9 years ago
I MUST THANK YOU!!
@LovelyWorldFressia · 4 years ago
Thank you so much! I was able to get it fast with this video. (Tried other methods for a day and didn't get it.)
@bend.4506 · 6 years ago
thank you soooo much!!!!
@jansiranibdumtcs · 5 years ago
The (unobserved) parameters are Θ = (µ1, …, µn, σ²) and the parameter space is Θ = Rⁿ × R₊. (a) Derive the maximum likelihood estimator (MLE) for Θ. Please answer this for me.
@user-ck1hh9lh5s · 5 years ago
Why did the xi become an x-bar?
@abbe23456789 · 5 years ago
It is an error; it is supposed to be N*x_bar.
@lehmann_clips · 10 years ago
awesome
@SpartacanUsuals · 10 years ago
Hi, many thanks for your message and kind words. All the best with your studies, Ben
@hashimallawati2828 · 7 years ago
great
@MegaxAT · 8 years ago
Very useful.