Nearly 20 minutes' worth of derivation only to find out that the estimator = sample mean. #econometrics
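The punchline can be checked numerically. Below is a minimal Python sketch (the 0/1 data are made up for illustration) confirming that the Bernoulli log-likelihood is maximized exactly at the sample mean:

```python
import math

data = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # hypothetical 0/1 sample
n, s = len(data), sum(data)

def log_likelihood(p):
    # log L(p) = s*log(p) + (n - s)*log(1 - p)
    return s * math.log(p) + (n - s) * math.log(1 - p)

# Grid search over (0, 1), excluding the endpoints where log blows up.
grid = [i / 10000 for i in range(1, 10000)]
p_hat = max(grid, key=log_likelihood)

print(p_hat, s / n)  # both 0.7: the maximizer sits at the sample mean
```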
@daniz0rz · 6 years ago
Right... I have an exam on Thursday for a categorical data class and this was never made clear in that class. And he wants us to be able to do this.... I was hoping for more practical applications (I'm an epidemiology major) than theory and derivatives. It's also quite likely I saw a ton of math and blacked out... XD
@TheBjjninja · 4 years ago
I had a chuckle at the very end also.
@mihaililiev5932 · 4 years ago
LOL, I thought the same. Beautiful derivation, but what's the point? But then... there must be something more to the MLE, right? Can Ben perhaps expand a bit more on why MLE is useful?
@hamza_ME_ · 4 years ago
@mihaililiev5932 I'm learning artificial intelligence and I can tell you MLE is very useful in forming and working with some machine learning algorithms...
@lastua8562 · 4 years ago
@hamza_ME_ Good to know! I am starting a research master's in economics and am considering moving into AI later in case I dislike writing research papers. Aside from logistic regression, are there any econometrics topics you would particularly recommend learning for AI?
@danthemangoman5931 · 6 years ago
From one hour of college class plus two hours of tutorials, barely understanding what the simple terms mean, to a full understanding with this. You, my friend, are a saint.
@cliqclaq508 · 6 years ago
I can't be the only one who loves it when a complicated function gets canceled out till it's really short. It's like those oddly satisfying videos.
@pkoomson1 · 1 year ago
I am happy I followed this 3-part series on ML estimation. Thank you, Ben, for making this clearer to me.
@axelboberg9374 · 1 year ago
In 10 minutes you explained MLE better than my textbook! Thank you.
@ovauandjahera8664 · 1 year ago
True
@mpgrewal00 · 8 years ago
Many tutors on YouTube fail to explain with a simple example, but you didn't. You are an ace.
@TheTorridestCheese · 7 years ago
Thank you very much for these videos. My statistics professor just breezes by these concepts and I really needed a much slower explanation and you helped a ton.
@HankWank · 9 years ago
You are a gentleman and a scholar.
@emielabrahams8475 · 5 years ago
Ben Lambert - these videos were awesome. Well explained - thank you kindly
@nature_through_my_lens · 4 years ago
Amazing. Got it in 15 mins... Watched all three videos at 1.25x. Thank you so much.
@fmp000 · 8 years ago
What a kick-ass video, Ben! You have no idea how much it helped me.
@MrHugosky1 · 11 years ago
Great class! Thank you very much Mr. Lambert. Greetings from Mexico
@singhay_mle · 8 years ago
Beautiful explanation. I saw like 5 videos and none were able to explain it like you did... thank you!
@ravideeplehri · 7 years ago
Thank you for taking the time to post these wonderful, intuitive video lectures.
@deansfa · 7 years ago
So well explained! Thank you very much for these three videos, they were very helpful!
@0blivisi0n · 10 years ago
You are a rockstar Ben! Greatly explained!
@SpartacanUsuals · 10 years ago
Hi, thanks for your message, and kind words. Glad to hear that it was useful. Best, Ben
@alandubackupchannel5201 · 8 years ago
Thanks for the video, great explanation. I wish lectures were this clear.
@Becca01223 · 4 years ago
Ben, you're the best. Thank you!
@daattali · 8 years ago
Great 3-part video, so easy to understand (if you really pay attention and put some thinking into it), thank you!
@robpatty1811 · 2 years ago
Great video, thanks Ben.
@cnaroztosun9402 · 8 years ago
the "i" you wrote at 3:20 is perfect omg
@askfskpsk · 7 years ago
Great explanation. Thanks Ben.
@bend.4506 · 6 years ago
Thank you! Enjoyed parts 1-3! VERY helpful!
@pavybez · 8 years ago
Thank you for the clear and concise explanation.
@pandeyprince25 · 6 years ago
An amazing explanation! You are a savior.
@charlesledesma305 · 4 years ago
Excellent explanation!
@seineyumnam4374 · 7 years ago
So why aren't college professors this good?
@barovierkevinallybose1040 · 5 years ago
I think some profs like for you to figure it out on your own.
@lastua8562 · 4 years ago
They are more advanced than Ben was at the time (before PhD). It is easier to teach when you know less and are closer to the students.
@adauche4875 · 5 years ago
Awesome video; very easy to understand. I like your explanations so much. Please, when you get a chance, can you make a video on Maximum a Posteriori estimation? Thank you.
@burninmind · 5 years ago
Awesome explanation! It was really helpful for me
@frogger832 · 8 years ago
Sorry if this is a dumb question, but I am still confused as to why we need this. Why go through all the math just to get x bar? Are there any examples where we actually need to perform this and the estimator is not intuitive or listed somewhere?
@SpartacanUsuals · 8 years ago
+Froggy Hi, thanks for your message. There are quite a few examples where it is difficult to intuitively work out what might be a good estimator. For example, look at section 4 here: times.cs.uiuc.edu/course/410/note/mle.pdf Understanding how likelihoods work is also very important for Bayesian statistics, see the playlist here: kzbin.info/aero/PLFDbGp5YzjqXQ4oE4w9GVWdiokWB9gEpm Best, Ben
@sunnyhours84 · 8 years ago
Hi Ben! I have a follow-up question! When I learn math I always try to fit the topic into a historical perspective. Seeing that the method-of-moments estimator also gives the exact same answer, namely x bar, one starts to wonder what came first. Did people deal with these kinds of problems (ages ago) intuitively, or did they first work out all these different estimators only to find out that they produced intuitive answers? Who came up with e.g. the method of moments and maximum likelihood, and what did people use before? All I know is that the field of statistics is relatively young compared to, say, algebra and analysis. (I haven't had time to read the pdf you linked to; maybe there is an answer there.)
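It's worth noting the two methods do not always agree, which is part of why each is studied in its own right. A classic example is Uniform(0, theta): E[X] = theta/2, so the method of moments gives 2 * x_bar, while the likelihood theta^(-N) is decreasing in theta, so the MLE is the sample maximum. A minimal Python sketch with made-up data:

```python
# Made-up data, imagined as draws from Uniform(0, theta).
data = [0.9, 2.1, 3.4, 1.2, 4.8, 0.5]

x_bar = sum(data) / len(data)
theta_mom = 2 * x_bar    # method of moments: matches E[X] = theta/2
theta_mle = max(data)    # MLE: smallest theta consistent with the data

print(theta_mom, theta_mle)  # roughly 4.3 vs 4.8: genuinely different answers
```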
@Muuip · 8 years ago
Notabot I am also trying to grasp the benefits of complex calculations that estimate probabilities from sample values. If a sample is too small, any inference/estimate from complex calculations won't be accurate; only a bigger sample will raise the accuracy.
@XxAbZzXx100 · 9 years ago
You are the man! Thanks Ben.
@unclecode · 2 years ago
Beautiful. Would you please share what hardware and software you are using for writing your lecture notes in these videos and recording your screen? Thanks
@tianshili7649 · 10 years ago
Hello, Ben! You made an excellent video about MLE and it really helps me understand the subject. But I have a question: what should I do if I don't know the distribution of the samples? For example, it's clear that in your case here it's a binary distribution. But what would you do with a population whose distribution you don't know in advance?
@62294838 · 2 years ago
You invoke the asymptotic argument (i.e., the central limit theorem) for that.
@62294838 · 2 years ago
Sometimes you will encounter something called the pseudo-true parameter. See Wooldridge's cross-sectional and panel data textbook.
@muhammadzhafranbahaman6401 · 9 years ago
Hi Ben, I would like to clarify: is P-hat equivalent to theta-hat, the point estimator for the parameter P, which is simply pi (the population proportion)?
@josemanuelromerogonzalez6084 · 2 years ago
Not all heroes wear capes. Thank you a lot!
@allall02 · 6 years ago
Great explanation, thank you!
@muhammadzhafranbahaman6401 · 9 years ago
Hi Ben, for this example, does the random variable follow a binomial distribution?
@Muuip · 8 years ago
Thanks for your presentation! At 2:40 you said "x bar times p hat times x bar" but wrote "x bar MINUS p hat times x bar". I assume "minus" is correct? Thank you.
@kingsleymichael7533 · 6 years ago
This really helped me a lot. Thank you very much.
@CharlieFreakin · 10 years ago
Thank you! Very clear and understandable.
@leojboby · 7 years ago
Does this work if x_i is not equal to 1 for male? This whole thing was to say that if we use MLE to determine the population parameter that defines the probability of Y in the population, we should use the mean of whatever we obtain from the sample... However, it says nothing about N? Are there other ways to determine such a population parameter?
@Robertianocockrell · 6 years ago
Do you have videos on the method of moments for the gamma distribution and the binomial?
@lastua8562 · 4 years ago
So we are using log as ln here; why are they equivalent in this case?
@ZishanAnsari · 10 years ago
Thanks for the great video, Ben. So this means that the maximization will always find the P_hat value to be the same as the value of X_bar. For a logistic regression model, the probability cut-off that I get using sensitivity and specificity crossovers is always close to the value of X_bar. Is this the reason?
@SpartacanUsuals · 10 years ago
Hi, thanks for your message. Yes, the Maximum Likelihood estimator for the proportion of data which are of one type is always just the sample mean of the data in the binary case. It makes sense that the proportion of cases identified correctly as positive should mirror the actual value of X_bar heuristically. However, I'm not sure of a theorem etc. that would necessarily demonstrate this. Sorry I can't be of more help! Best, Ben
@ankurgangwar3622 · 5 years ago
In the second part there was a log on the left-hand side; where is the log here?
@TheTehnigga · 4 years ago
You could have explained with a graphical example what small l is. That part was quite confusing.
@kennzinga · 8 years ago
omg thank you so much. It helped a lot.
@GiuseppeRomagnuolo · 9 years ago
Thanks Ben for these videos, they are incredibly useful. I assume that by log(p_hat) we are referring to the natural logarithm ln(p_hat) and not log_base10(p_hat)?
@SpartacanUsuals · 9 years ago
+Giuseppe Romagnuolo Yes, that's correct. Glad to hear the videos have been useful. Best, Ben
@Connor0isdabest · 9 years ago
Does anyone know in which video he proves why dL/dp = 0 is the same as dl/dp = 0?
@SpartacanUsuals · 9 years ago
Connor0isdabest It is this video: kzbin.info/www/bejne/mpXUn6xplr-Bhrs
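For anyone who doesn't want to click through, a one-line sketch of the standard argument (with L the likelihood and l = log L, as in the videos):

```latex
\frac{dl}{dp} = \frac{d}{dp}\log L(p) = \frac{1}{L(p)}\,\frac{dL}{dp},
\qquad L(p) > 0
\;\Longrightarrow\;
\frac{dl}{dp} = 0 \iff \frac{dL}{dp} = 0
```

Since log is strictly increasing, L and log L peak at the same p.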
@Connor0isdabest · 9 years ago
So simple! Thanks so much.
@goldfishyzaza5770 · 10 years ago
I do not understand the differentiation part of the log likelihood function. Where did p-hat suddenly come in? Have you just substituted p for p-hat?
@SpartacanUsuals · 10 years ago
Hi, thanks for your message. After differentiating the log likelihood with respect to p, this leaves a function of p. However, to find the maximum likelihood estimator for p - which I call p hat - it is necessary to set this derivative to zero. This defines p-hat. I hope that helps. Best, Ben
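Spelled out with the expressions quoted elsewhere in this thread, a sketch of the step being described:

```latex
l(p) = N\bar{x}\log p + N(1-\bar{x})\log(1-p)
\\[4pt]
\frac{dl}{dp} = \frac{N\bar{x}}{p} - \frac{N(1-\bar{x})}{1-p}
\\[4pt]
\frac{N\bar{x}}{\hat{p}} - \frac{N(1-\bar{x})}{1-\hat{p}} = 0
\;\Longrightarrow\;
\bar{x}(1-\hat{p}) = (1-\bar{x})\,\hat{p}
\;\Longrightarrow\;
\hat{p} = \bar{x}
```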
@Jessica5587 · 9 years ago
Hi! Great videos, thanks a lot! There's just one thing I couldn't quite get: the minus at the end. I know you explained it, but I'm lacking some basics in maths and I can't understand where it's coming from: N X_bar / P_hat MINUS ... Would it be possible for you to give me the name of the rule applied here? (You say chain rule; I'm used to it in a probabilistic context, mostly Markov models, and I just don't get it here.) Sorry about that. Otherwise awesome video, thanks for uploading it here! :)
@fmp000 · 8 years ago
When you differentiate log(1-p), through the chain rule you get 1/(1-p) * (-1). This "(-1)" comes from the differentiation of (1-p).
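In symbols, that chain-rule step is:

```latex
\frac{d}{dp}\log(1-p)
= \frac{1}{1-p}\cdot\frac{d}{dp}(1-p)
= \frac{-1}{1-p}
```

which is exactly where the minus sign in the derivative comes from.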
@benernest7896 · 9 years ago
Basic question from a non-math person. How did N*X_bar*log(P_hat) become N*X_bar/P_hat? Thanks.
@SpartacanUsuals · 9 years ago
Hi, thanks for your message. It became that after differentiating, since log(x) goes to 1/x. Hope that helps! Cheers, Ben
@manx306 · 6 years ago
Isn't the derivative of log x equal to 1/(x * ln 10)? I'm still confused by this step in the video.
@133TME · 6 years ago
Hey, I know you posted this a couple of months ago, but just in case you never figured it out: the reason is that Ben used e as the 'default' base for log. This is actually pretty common once you get into more advanced math; my CS lectures basically never use log_10, for instance. Any time log is used, the base is e unless otherwise specified. I find it annoying, as it is not what I was taught in high school, but it is what it is. Pretty much, whenever he writes log, just substitute ln or log_e.
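The base turns out not to matter for the MLE anyway: log_b(x) = ln(x) / ln(b), so changing base rescales the log-likelihood by a positive constant, which cannot move the maximizer. A quick check with made-up Bernoulli counts:

```python
import math

n, s = 10, 7  # hypothetical sample size and number of successes

def ll(p, log):
    # log-likelihood of a Bernoulli sample in an arbitrary log base
    return s * log(p) + (n - s) * log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
argmax_ln = max(grid, key=lambda p: ll(p, math.log))
argmax_log10 = max(grid, key=lambda p: ll(p, math.log10))

print(argmax_ln, argmax_log10)  # same maximizer either way
```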
@manx306 · 6 years ago
That is really helpful. Thanks!
@Pumbelchook · 8 years ago
This was so helpful! Thanks :)
@HKNAGPAL7 · 6 years ago
Which editor/software do you use to make these videos?
@alirezagyt · 7 years ago
So remind me again how finding P^_ML helps us with P?
@flyingpinkyfish · 8 years ago
this is great, thank you
@roottwo5459 · 5 years ago
take my upvote
@hugoirwanto9905 · 4 years ago
Yeayy! Thank you
@ulisesperez1241 · 1 year ago
thanks bro
@itbeginx · 8 years ago
Thanks!
@bunkerputt · 7 years ago
Seven thumbs up.
@chrisie1997 · 5 years ago
ty
@jameshurliman8951 · 3 years ago
3 videos of dense calculus later... "Yeah it's just the average"
@EmperorDraconianIV · 6 years ago
+Ben Lambert Thank you, my brother. If I ever become a billionaire I will gift your family a huge fortune.
@gonulakn387 · 5 years ago
What the fuck did you do at the end?
@trzmaier · 6 years ago
So much faffing for something this trivial.
@Myndir · 6 years ago
It's nice to know that there is solid algebra behind the intuition.