Maximum Likelihood estimation - an introduction part 3

195,525 views

Ben Lambert


Comments: 94
@johncase9047 · 8 years ago
I love twist endings.
@Alumiss682 · 7 years ago
Nearly 20 mins' worth of derivation only to find out that the estimator = the sample mean. #econometrics
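[Editor's note] For anyone skimming the thread, here is a compact version of the derivation being joked about, sketched in LaTeX from the setup in parts 1 and 2 (Bernoulli data $x_1, \dots, x_N$, each equal to 1 with probability $p$):

\begin{align}
\log L(p) &= N\bar{x}\,\log p + (N - N\bar{x})\,\log(1 - p) \\
\frac{d \log L}{dp} &= \frac{N\bar{x}}{p} - \frac{N - N\bar{x}}{1 - p} = 0
\quad\Longrightarrow\quad \hat{p} = \bar{x}
\end{align}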
@daniz0rz · 6 years ago
Right... I have an exam on Thursday for a categorical data class and this was never made clear in that class. And he wants us to be able to do this... I was hoping for more practical applications (I'm an epidemiology major) than theory and derivatives. It's also quite likely I saw a ton of math and blacked out... XD
@TheBjjninja · 4 years ago
I had a chuckle at the very end also.
@mihaililiev5932 · 4 years ago
LOL, I thought the same. Beautiful derivation, but what's the point? But then... there must be something more to MLE, right? Can Ben perhaps expand a bit more on why MLE is useful?
@hamza_ME_ · 4 years ago
@@mihaililiev5932 I'm learning artificial intelligence, and I can tell you MLE is very useful in formulating and working with several machine learning algorithms...
@lastua8562 · 4 years ago
@@hamza_ME_ Good to know! I am starting a research master's in economics and am considering moving into AI later in case I dislike writing research papers. Aside from logistic regression, are there any econometrics topics you would particularly recommend learning for AI?
@danthemangoman5931 · 6 years ago
I went from one hour of college class plus two hours of tutorials, barely understanding what the simple terms meant, to a full understanding with this video. You, my friend, are a saint.
@cliqclaq508 · 6 years ago
I can't be the only one who loves it when a complicated function gets canceled out till it's really short. It's like those oddly satisfying videos.
@pkoomson1 · a year ago
I am happy I followed this 3-part series on ML estimation. Thank you Ben for making this clearer to me.
@axelboberg9374 · a year ago
In 10 minutes you explained MLE better than my textbook! Thank you.
@ovauandjahera8664 · a year ago
True
@mpgrewal00 · 8 years ago
Many tutors on YouTube fail to explain with a simple example. But you didn't do that; you are an ace.
@TheTorridestCheese · 7 years ago
Thank you very much for these videos. My statistics professor just breezes by these concepts and I really needed a much slower explanation; you helped a ton.
@HankWank · 9 years ago
You are a gentleman and a scholar.
@emielabrahams8475 · 5 years ago
Ben Lambert - these videos were awesome. Well explained - thank you kindly.
@nature_through_my_lens · 4 years ago
Amazing. Got it in 15 mins... Watched all three videos at 1.25x. Thank you so much.
@fmp000 · 8 years ago
What a kick-ass video, Ben! You have no idea how much it helped me.
@MrHugosky1 · 11 years ago
Great class! Thank you very much, Mr. Lambert. Greetings from Mexico.
@singhay_mle · 8 years ago
Beautiful explanation. I saw like 5 videos and none were able to explain it like you did... thank you!
@ravideeplehri · 7 years ago
Thank you for taking the time to post these wonderful, intuitive video lectures.
@deansfa · 7 years ago
So well explained! Thank you very much for these three videos; they were very helpful!
@0blivisi0n · 10 years ago
You are a rockstar, Ben! Greatly explained!
@SpartacanUsuals · 10 years ago
Hi, thanks for your message, and kind words. Glad to hear that it was useful. Best, Ben
@alandubackupchannel5201 · 8 years ago
Thanks for the video, great explanation. I wish lectures were this clear.
@Becca01223 · 4 years ago
Ben, you're the best. Thank you!
@daattali · 8 years ago
Great 3-part video, so easy to understand (if you really pay attention and put some thinking into it). Thank you!
@robpatty1811 · 2 years ago
Great video, thanks Ben.
@cnaroztosun9402 · 8 years ago
The "i" you wrote at 3:20 is perfect, omg.
@askfskpsk · 7 years ago
Great explanation. Thanks Ben.
@bend.4506 · 6 years ago
Thank you! Enjoyed parts 1-3! VERY helpful!
@pavybez · 8 years ago
Thank you for the clear and concise explanation.
@pandeyprince25 · 6 years ago
An amazing explanation! You are a savior.
@charlesledesma305 · 4 years ago
Excellent explanation!
@seineyumnam4374 · 7 years ago
So why aren't college professors this good?
@barovierkevinallybose1040 · 5 years ago
I think some profs like for you to figure it out on your own.
@lastua8562 · 4 years ago
They are more advanced than Ben was at the time (before his PhD). It is easier to teach when you know less and are closer to the students.
@adauche4875 · 5 years ago
Awesome video; very easy to understand. I like your explanations so much. Please, when you get a chance, can you make a video on Maximum a Posteriori (MAP) estimation? Thank you.
@burninmind · 5 years ago
Awesome explanation! It was really helpful for me.
@frogger832 · 8 years ago
Sorry if this is a dumb question, but I am still confused as to why we need this. Why go through all the math just to get x bar? Are there any examples where we actually need to perform this and the estimator is not intuitive or listed somewhere?
@SpartacanUsuals · 8 years ago
+Froggy Hi, thanks for your message. There are quite a few examples where it is difficult to intuitively work out what might be a good estimator. For example, look at section 4 here: times.cs.uiuc.edu/course/410/note/mle.pdf. Understanding how likelihoods work is also very important for Bayesian statistics; see the playlist here: kzbin.info/aero/PLFDbGp5YzjqXQ4oE4w9GVWdiokWB9gEpm. Best, Ben
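[Editor's note] To make the point above concrete, here is a minimal sketch (assuming numpy and scipy are available; the gamma example and all names are illustrative, not taken from the linked notes) of a model whose MLE has no tidy closed form: the shape parameter of a gamma distribution has to be found numerically.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.5, scale=1.0, size=500)  # simulated sample, true shape = 2.5

# Negative log-likelihood in the shape parameter k (scale held at 1).
# The first-order condition involves the digamma function, so there is
# no closed-form estimator and we optimize numerically instead.
def nll(k):
    return -np.sum(gamma.logpdf(data, a=k, scale=1.0))

res = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded")
print(res.x)  # numerical MLE of the shape, close to 2.5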
@sunnyhours84 · 8 years ago
Hi Ben! I have a follow-up question! When I learn math I always try to fit the topic into a historical perspective. Seeing that the "method of moments" estimator also gives the exact same answer, namely x bar, one starts to wonder which came first. Did people deal with these kinds of problems (ages ago) intuitively, or did they first work out all these different estimators only to find out that they produced intuitive answers? Who came up with e.g. the "method of moments" and "maximum likelihood", and what did people use before? All I know is that the field of statistics is relatively young compared to, say, algebra and analysis. (I haven't had time to read the pdf you linked to; maybe there is an answer there...)
@Muuip · 8 years ago
Notabot I am also trying to grasp the benefits of complex calculations that estimate probabilities based on sample values. If a sample is too small, any inference/estimate from complex calculations won't be accurate. Only a bigger sample will raise the accuracy.
@XxAbZzXx100 · 9 years ago
You are the man! Thanks Ben.
@unclecode · 2 years ago
Beautiful. Would you please share what hardware and software you are using to write your lecture notes in these videos and to record your screen? Thanks!
@tianshili7649 · 10 years ago
Hello, Ben! You made an excellent video about MLE and it really helps me understand the subject. But I have a question: what should I do if I don't know the distribution of the samples? For example, it's clear that in your case here it's a binary (Bernoulli) distribution. But what would you do with a population whose distribution you don't know in advance?
@62294838 · 2 years ago
You invoke an asymptotic argument (i.e. the central limit theorem) for that.
@62294838 · 2 years ago
Sometimes you will encounter something called the pseudo-true parameter. See Wooldridge's cross-sectional and panel data textbook.
@muhammadzhafranbahaman6401 · 9 years ago
Hi Ben, I would like to clarify: is p-hat equivalent to theta-hat, the point estimator for the parameter p, which is simply pi (the population proportion)?
@josemanuelromerogonzalez6084 · 2 years ago
Not all heroes wear capes. Thanks a lot!
@allall02 · 6 years ago
Great explanation, thank you!
@muhammadzhafranbahaman6401 · 9 years ago
Hi Ben, for this example, does the random variable follow a binomial distribution?
@Muuip · 8 years ago
Thanks for your presentation! At 2:40 you said "x bar times p hat times x bar" but wrote "x bar MINUS p hat times x bar". I assume "minus" is correct? Thank you.
@kingsleymichael7533 · 6 years ago
This really helped me a lot. Thank you very much.
@CharlieFreakin · 10 years ago
Thank you! Very clear and understandable.
@leojboby · 7 years ago
Does this work if x_i is not equal to 1 for male? This whole thing was to say that if we use MLE to determine the population parameter that defines the probability of Y in the population, we should use the mean of whatever we obtain from the sample... However, it says nothing about N...? Are there other ways to determine such a population parameter?
@Robertianocockrell · 6 years ago
Do you have videos on the method of moments for the gamma and binomial distributions?
@lastua8562 · 4 years ago
So we are using log as ln here; why are they equivalent in this case?
@ZishanAnsari · 10 years ago
Thanks for the great video, Ben. So this means that the maximization will always find the P_hat value to be the same as the value of X_bar. For a logistic regression model, the probability cut-off that I get using sensitivity and specificity crossovers is always close to the value of X_bar. Is this the reason?
@SpartacanUsuals · 10 years ago
Hi, thanks for your message. Yes, the Maximum Likelihood estimator for the proportion of data which are of one type is always just the sample mean of the data in the binary case. It makes sense heuristically that the proportion of cases identified correctly as positive should mirror the actual value of X_bar. However, I'm not sure of a theorem etc. that would necessarily demonstrate this. Sorry I can't be of more help! Best, Ben
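[Editor's note] A quick numerical check of that claim (a sketch assuming numpy and scipy, not from the video): maximize the binary log-likelihood from the video directly and compare the result with the sample mean.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.3, size=1000)  # binary sample, true p = 0.3

# Negative of the log-likelihood derived in the video:
# log L(p) = N*x_bar*log(p) + (N - N*x_bar)*log(1 - p)
def nll(p):
    return -(x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1.0 - p))

res = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, x.mean())  # the numerical maximizer agrees with the sample mean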
@ankurgangwar3622 · 5 years ago
In the second part there was a log on the left-hand side; where is the log here?
@TheTehnigga · 4 years ago
You could have explained with a graphical example what small l is. That part was quite confusing to understand.
@kennzinga · 8 years ago
Omg, thank you so much. It helped a lot.
@GiuseppeRomagnuolo · 9 years ago
Thanks Ben for these videos; they are incredibly useful. I assume that by log(p_hat) we are referring to the natural logarithm ln(p_hat) and not log_base10(p_hat)?
@SpartacanUsuals · 9 years ago
+Giuseppe Romagnuolo Yes, that's correct. Glad to hear the videos have been useful. Best, Ben
@Connor0isdabest · 9 years ago
Does anyone know in which video he proves why dL/dp = 0 is the same as dl/dp = 0?
@SpartacanUsuals · 9 years ago
Connor0isdabest It is this video: kzbin.info/www/bejne/mpXUn6xplr-Bhrs
@Connor0isdabest · 9 years ago
So simple! Thanks so much.
@goldfishyzaza5770 · 10 years ago
I do not understand the differentiation part of the log likelihood function. Where did p-hat suddenly come in? Have you just substituted p for p-hat?
@SpartacanUsuals · 10 years ago
Hi, thanks for your message. After differentiating the log likelihood with respect to p, this leaves a function of p. However, to find the maximum likelihood estimator for p - which I call p hat - it is necessary to set this derivative to zero. This defines p-hat. I hope that helps. Best, Ben
@Jessica5587 · 9 years ago
Hi! Great videos, thanks a lot! There's just one thing I couldn't quite get: the minus at the end. I know you explained it, but I'm lacking some basics in maths and I can't understand where it's coming from, sorry: N X_bar / P_hat MINUS ... Would it be possible for you to give me the name of the rule applied here? (You say chain rule; I'm used to it in a probabilistic context (mostly Markov models), and I just don't get it here...) Sorry about that; otherwise awesome video, thanks for uploading it here! :)
@fmp000 · 8 years ago
When you differentiate log(1-p), through the chain rule you get 1/(1-p)*(-1). The "(-1)" comes from differentiating (1-p).
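[Editor's note] Written out in LaTeX, the chain-rule step described above:

\[
\frac{d}{dp}\log(1-p) \;=\; \frac{1}{1-p}\cdot\frac{d(1-p)}{dp} \;=\; \frac{1}{1-p}\cdot(-1) \;=\; -\frac{1}{1-p},
\]

which is exactly where the minus sign in the first-order condition $\frac{N\bar{x}}{\hat{p}} - \frac{N - N\bar{x}}{1-\hat{p}} = 0$ comes from.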
@benernest7896 · 9 years ago
Basic question from a non-math person: how did N*X_bar*log(P_hat) become N*X_bar/P_hat? Thanks.
@SpartacanUsuals · 9 years ago
Hi, thanks for your message. It became that after differentiating, since log(x) goes to 1/x. Hope that helps! Cheers, Ben
@manx306 · 6 years ago
Isn't the derivative of log x equal to 1/(x ln 10)? I'm still confused by this step in the video.
@133TME · 6 years ago
Hey, I know you posted this a couple of months ago, but just in case you never figured it out: the reason is that Ben used e as the "default" base for log. This is actually pretty common once you get into more advanced math; my CS lectures basically never use log_10, for instance. Any time log is used, the base is e unless otherwise specified. I find it annoying, as it is not what I was taught in high school, but it is what it is. Pretty much, whenever he writes log, just substitute it with ln or log_e.
@manx306 · 6 years ago
That is really helpful. Thanks!
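[Editor's note] In symbols, the point of this thread: since $\log_b x = \ln x / \ln b$,

\[
\frac{d}{dx}\log_b x = \frac{1}{x \ln b}, \qquad \frac{d}{dx}\ln x = \frac{1}{x},
\]

and because $1/\ln b$ is just a positive constant multiplying the whole log-likelihood, the maximizing $\hat{p}$ is the same in any base; the video takes $\log = \ln$, so the constant is 1.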
@Pumbelchook · 8 years ago
This was so helpful! Thanks :)
@HKNAGPAL7 · 6 years ago
Which editor/software do you use to make these videos?
@alirezagyt · 7 years ago
So remind me again how finding P_hat_ML helps us with P?
@flyingpinkyfish · 8 years ago
This is great, thank you.
@roottwo5459 · 5 years ago
Take my upvote.
@hugoirwanto9905 · 4 years ago
Yeayy! Thank you!
@ulisesperez1241 · a year ago
Thanks bro.
@itbeginx · 8 years ago
Thanks!
@bunkerputt · 7 years ago
Seven thumbs up.
@chrisie1997 · 5 years ago
ty
@jameshurliman8951 · 3 years ago
3 videos of dense calculus later... "Yeah, it's just the average."
@EmperorDraconianIV · 6 years ago
+Ben Lambert Thank you, my brother. If I ever become a billionaire I will gift your family a huge fortune.
@gonulakn387 · 5 years ago
What the fuck did you do at the end?
@trzmaier · 6 years ago
So much faffing for something this trivial.
@Myndir · 6 years ago
It's nice to know that there is solid algebra behind the intuition.
@ligapis · a year ago
Ty
Maximum Likelihood estimation - an introduction part 1
8:25
Ben Lambert
634K views
Maximum Likelihood For the Normal Distribution, step-by-step!!!
19:50
StatQuest with Josh Starmer
555K views
Maximum Likelihood estimation of Logit and Probit
9:18
Ben Lambert
156K views
Maximum Likelihood estimation - an introduction part 2
7:08
Ben Lambert
261K views
Maximum Likelihood - Cramer Rao Lower Bound Intuition
8:00
Ben Lambert
131K views
Panel data econometrics - an introduction
11:02
Ben Lambert
215K views
Maximum Likelihood Estimation for the Bernoulli Distribution
18:53
Samuel Cirrito-Prince
48K views
Probability vs. Likelihood ... MADE EASY!!!
7:31
Learn Statistics with Brian
37K views
Probability is not Likelihood. Find out why!!!
5:01
StatQuest with Josh Starmer
1.1M views