30: Maximum likelihood estimation

123,179 views

Matthew E. Clapham


Comments: 39
@CharlotteC1990 8 years ago
Best MLE video on KZbin! Thank you :)
@hojkoff 4 years ago
Great explanation. Clear, structured, and explained in simple, understandable terms. Thanks for taking the time to put this together.
@argentina2152 7 years ago
Thank you so much. Now it all makes sense. I had difficulty grasping the idea of MLE, but with your explanation I feel confident going back to the lectures and being able to follow them.
@FloppyDobbys 7 years ago
The clearest explanation I have seen on KZbin thus far.
@CC-op3ez 2 days ago
Thank you so much for your video, especially for the warnings about model selection and AIC. Could you please explain more (or point me to some documents or references) about "do not combine model selection with hypothesis testing; the p-value significance will be inflated because you are implicitly testing multiple hypotheses with model selection"?
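A minimal Python simulation sketch of the point quoted in that comment, under a made-up screening setup (ten pure-noise predictors; keep the one most correlated with the response, then test it): selecting first and testing afterwards pushes the false-positive rate well above the nominal 5%.

```python
# Illustrative sketch (not from the video): why "select, then test" inflates Type I error.
# All settings (n_sims, n_obs, n_candidates) are arbitrary assumptions for the demo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_obs, n_candidates = 2000, 50, 10
false_positives = 0

for _ in range(n_sims):
    y = rng.normal(size=n_obs)                  # response with NO real signal
    X = rng.normal(size=(n_obs, n_candidates))  # candidate predictors, also pure noise
    # "Model selection": keep the predictor most correlated with y
    r = np.array([stats.pearsonr(X[:, j], y)[0] for j in range(n_candidates)])
    best = int(np.argmax(np.abs(r)))
    # "Hypothesis test" reported only for the selected predictor
    _, p = stats.pearsonr(X[:, best], y)
    if p < 0.05:
        false_positives += 1

print(f"Nominal alpha: 0.05, observed false-positive rate: {false_positives / n_sims:.3f}")
# The observed rate is far above 0.05 because ten hypotheses were implicitly screened.
```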
@shane1146 8 years ago
Matthew, you are awesome. I wish you did a video on Bayesian methods too. A Bayesian/MCMC one, please?
@stevegyro1 6 years ago
Matthew, you are outstanding as a teacher. Thank you for the many insights and teaching. -Steve G.
@Maha_s1999 6 years ago
Fantastic! Thank you so much for this super clear exposition.
@franciscomendoza3778 2 years ago
Really nice presentation
@مصطفىعبدالجبارجداح 3 years ago
May God bless you and reward you with the best of rewards.
@sukursukur3617 2 years ago
Yeah, that's a clear explanation.
@saurabh75prakash 6 years ago
Nicely explained, thanks!
@elliott8175 2 years ago
Thank you so much!! Such clear explanations!!
@RPDBY 6 years ago
AIC: the lower the better; LL: the higher the better. But both measure the same concept, so using both is redundant and one will suffice (judging by the formula, one will always go down when the other goes up). Did I get that right?
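A small Python sketch of the relationship asked about in that comment, using the standard formula AIC = 2k - 2 log L and two illustrative normal models (the data and models here are invented for the demo): for a fixed model, AIC and the log-likelihood do move in opposite directions, but the 2k penalty means they are not interchangeable when comparing models with different numbers of parameters.

```python
# Illustrative sketch (not from the video) of AIC = 2k - 2*logL with made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)   # illustrative data

def normal_loglik(data, mu, sigma):
    """Log-likelihood of the data under a Normal(mu, sigma) model."""
    return np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

# Model A: estimate both mean and sd (k = 2 parameters)
ll_a = normal_loglik(x, x.mean(), x.std(ddof=0))
aic_a = 2 * 2 - 2 * ll_a

# Model B: fix the mean at 0, estimate only the sd (k = 1 parameter)
ll_b = normal_loglik(x, 0.0, np.sqrt(np.mean(x**2)))
aic_b = 2 * 1 - 2 * ll_b

print(f"Model A: logL = {ll_a:.1f}, AIC = {aic_a:.1f}")
print(f"Model B: logL = {ll_b:.1f}, AIC = {aic_b:.1f}")
# Higher logL is better, lower AIC is better; the ranking by AIC can differ from
# the ranking by logL when extra parameters buy only a small likelihood gain.
```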
@profmo 8 years ago
Thank you for taking the time to make this video.
@joseluisredondogarcia5244 8 years ago
The last slide is gold
@joelmhaske8185 4 years ago
Seriously good!
@PedroRibeiro-zs5go 7 years ago
Great video man! Helped me a lot, all the best :D
@tag_of_frank 6 years ago
Isn't P(x | theta) = P(theta | x) P(x) / P(theta)? If that is the case, is the "likelihood function" as you defined it really equal to the probability of x given theta? If so, why, since it is missing those two extra terms?
@cube2fox 6 years ago
Fahraynk, I have the same question; please let me know if you find an answer.
@MM-du7je 6 years ago
That is true in a Bayesian setting, where the parameters of a given model are treated as random. In maximum likelihood estimation and frequentist inference, on the other hand, the parameters are treated as fixed but unknown quantities that we estimate. I wish the author had used the more common density notation for MLE, f(x; theta) instead of f(x | theta); it's confusing when that distinction isn't made explicit. By the way, you are correct that Bayes' theorem works for densities too: p(theta) is the prior density of the parameter, p(theta | x) is the density of the parameter conditioned on the data (the thing we want!), and p(x) is the normalizing constant. Bayesian inference is basically the science of picking a prior by objective or subjective mathematical means.
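A minimal Python sketch of the distinction described in that reply, using an invented exponential-data example: the likelihood L(theta) = f(x; theta) treats the observed data as fixed and theta as the variable to optimize, and MLE simply maximizes it, with no prior p(theta) or normalizing constant p(x) involved.

```python
# Illustrative sketch (not the author's code): maximizing a likelihood numerically.
# The exponential model, sample size, and true rate are assumptions for the demo.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=200)   # illustrative data, true rate = 1/3

def neg_log_likelihood(log_rate):
    """Negative log-likelihood of an exponential model, parameterized on the log scale."""
    rate = np.exp(log_rate)                # keeps the rate positive
    return -np.sum(stats.expon.logpdf(x, scale=1.0 / rate))

res = optimize.minimize_scalar(neg_log_likelihood)
print(f"MLE of the rate: {np.exp(res.x):.3f} (analytical MLE 1/mean: {1.0 / x.mean():.3f})")
```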
@roma9026 7 years ago
Thank you very much for your introduction!
@golamwahid8630 5 years ago
Thank you very much!
@taded7169 8 years ago
Really, it is very interesting! Thank you!
@ndiegow1 8 years ago
Amazing, I finally understood MLE
@StephenRoseDuo 7 years ago
How does restricted maximum likelihood estimation change the description here?
@hounamao7140 8 years ago
You're amazing, sir.
@RafaelLima-ox9ul 8 years ago
Cool! Good job!
@ivarockazi 7 years ago
Nice explanation... but I ended up cleaning my laptop screen after 4:49.
@clairekunesh4637 8 years ago
Very helpful
@mnkmnkification 7 years ago
AWESOME!
@sandrahuhu7429 7 years ago
This is great!
@rawiyahalraddadi7064 6 years ago
Thank you!
@azadalmasov5849 7 years ago
It's interesting to me why they don't just divide the AIC equation by 2.
@aravindhan9368 7 years ago
Thank you, sir. It's very helpful. Could you please show the mathematical working of this with numbers?
@usmansaeed678 7 years ago
Thanks!
@looploop6612 7 years ago
Is likelihood the same as probability?