Wow, best explanation and example I've ever seen ^^ Fantastic.
@marciasola462 5 years ago
Excellent
@romanemul1 9 months ago
Exactly. These patient-disease examples were driving me nuts.
@welcome33333 7 years ago
This is by far the most accessible explanation of Bayes' theorem. Well done Brandon!
@claudiorio 3 years ago
I know the video is old but I have to agree with the pinned comment. I already knew Bayes' theorem, but as I don't use it often I have to constantly refresh the details in my mind. The YouTube algorithm recommended this video and it's hands down the best I have ever watched.
@BrandonRohrer 3 years ago
Thank you. I really appreciate that.
@Twopheek 1 year ago
This is the most accessible explanation of Bayesian inference. Thank you Brandon for the time taken to prepare this video. You rock!
@danrattray8884 8 years ago
Best explanation of Bayes' theorem I have seen. Fantastic teaching.
@shirshanyaroy287 8 years ago
Brandon I just want to tell you that you are a fantastic teacher.
@BrandonRohrer 8 years ago
Thank you very much Shirshanya. That is a huge compliment. I'm honored.
@shirshanyaroy287 8 years ago
Please make more statistics videos! I have suggested your channel to my biostat teacher.
@richardgordon 6 months ago
Wow! One of the clearest explanations of Bayes Theorem I’ve come across!
@BrandonRohrer 6 months ago
Thanks!
@maxinelyu7875 7 years ago
When your teacher doesn't make sense, you have to go through teaching videos online, and I came across this one... Lucky lucky lucky! Thank you Mr!
@jithunniks 6 years ago
The way you connect things with appropriate, easy-to-follow examples... Amazing...
@razvanastrenie1455 3 years ago
Excellent explanation! This is the manner in which mathematics must be explained: with cases of practical applicability. Good job, Mr. Brandon!
@BrandonRohrer 3 years ago
Thanks Razvan. I'm so happy you enjoyed it.
@theinsanify7802 6 years ago
I don't know what to say. I'm a computer science student and I have never seen an explanation better than this... thank you veryyyyy much
@nginfrared 7 years ago
EXCELLENT EXPLANATION!!! I am learning graphical modeling and a lot of these concepts were a bit unclear to me. Examples given here are absolutely to the point and demystify a lot of concepts. Thank you and looking forward to more videos.
@Skachmo1 7 years ago
There are never lines at the men's room.
@selamysfendegu8239 6 years ago
haha.
@dinikosama 6 years ago
lol
@Beebo 5 years ago
Unless it's cocaine.
@masterpogi1818 4 years ago
So the probability of this sample being true is 0%, hahaha
@dhanushram6137 4 years ago
You have never been to developer conferences :)
@fahad3802 8 years ago
I have been struggling with Bayesian inference and your tutorial makes it so easy to understand! Thank you! Keep up the good work.
@BrandonRohrer 8 years ago
I'm very happy to hear it Fahad. Thanks.
@nutrinogirl456 7 years ago
This is the first time I've felt like I've actually understood this... and it's such a simple concept! Thank you!
@rjvaal 7 years ago
Thank you for this excellent explanation. You are a patient and well-spoken teacher.
@regozs 5 years ago
This is the best explanation yet; it helped me get a greater intuitive sense of Bayesian inference.
@brendawilliams8062 2 years ago
Yes, it was great. It seems like running into Feigenbaum maths or similar.
@SupremeSkeptic 7 years ago
Definitely the best explanation of the theorem, told in an easily understandable way, that I can find on the internet...
@syedmurtazaarshad3434 8 months ago
Loved the analogies with real life philosophies, brilliant!
@yurysambale 7 years ago
Stop looking for a decent tutorial... this one is the best!
@rameshmaddali6208 4 years ago
For 5 years I kept Bayes aside; you are the guru of teaching this stuff. God bless you Brandon
@AakashBhardwaj-dk3mi 1 year ago
Hey @BrandonRohrer, at 18:12 the y-axis is likelihood, not probability. Probability is the area under the curve for this graph.
@ukktor 7 years ago
The knowledge that Bayes was a theologian and that his theory requires at least some belief or faith in improbable things earns a "Well played, Mr. Bayes" slow clap. I've been enjoying your videos Brandon, thanks for keeping things approachable!
@themennoniteatheist1255 3 years ago
I'm not sure that Bayesian epistemology requires a belief in improbable things. I love this video, but I think that's an overstatement. I do think that it requires us to be open to the possibility that improbable things may be true. It does not require me to have faith in anything improbable, but rather to proportion our beliefs to the evidence (probabilities), which is the antithesis of faith, while accepting the possibility of being wrong. To accept as true something that is improbable is intellectually irresponsible and lacks due caution and humility. But to withhold belief (proportionally to the evidence) from improbable things is intellectually responsible and does not exclude being open to surprise, to the possibility that something improbable is true. I don't think Bayesian epistemology intrinsically expects you to hold some improbable thing as true (faith). Abstinence from faith is acceptable in all cases as long as the possibility of error is operative. This suggestion that it's necessary in Bayesian epistemology to believe something that is improbable was the only sloppy part of the video, no? I'm open to correction...
@ahmedemadsamy4244 2 years ago
I think you guys got it the opposite way: the video was trying to say, be open to believing the improbable things that come from the data (evidence), rather than only holding on to a prior belief.
@himanshu8006 6 years ago
I must say, this is the best explanation of Bayes' theorem I have ever seen..... PERFECT!!!!
@liucloud6317 5 years ago
I have viewed many explanations of Bayes' rule but this is no doubt the best! Thanks Brandon
@bharathwajan6079 1 year ago
Hey man, this is one of the best explanations of conditional probability I have ever heard.
@BrandonRohrer 1 year ago
Thanks!
@kimnguyen1227 7 years ago
This is fantastic. Thank you so much! I have been exposed to Bayes' theorem before, but have never understood it. As sad as it sounds, I didn't realize it was composed of joint probability, which is composed of conditional probability and marginal probability. Conditional probability, joint probability, and Bayes' theorem all just looked the same. This really helped clarify things for me.
@SamuelShaw1986 6 years ago
Great explanation and video lesson production. Best Bayesian lesson I've found on YouTube
@taghreedalghamdi6812 5 years ago
I was reading about Bayes' theorem for months! And this is the first time I understand the concept!! Wow!! Such an amazing way of teaching!!
@BrandonRohrer 5 years ago
I'm so happy to hear it Taghreed. That was exactly my hope.
@ssundaraju 5 years ago
Great explanation and simplification of a difficult concept. The three quotations at the end are poetic and purposeful. Thanks
@BrandonRohrer 5 years ago
I found them surprisingly relevant too. Thanks Sridhar.
@karthiksalian5715 4 years ago
Best video I have found, with all the information I needed in one place. Thanks.
@redserpent 6 years ago
Thank you. Your video has been of great help. I have tried different resources to wrap my head around Bayes' theorem and always got knocked out at the front door. Excellent explanation
@raghurrai 6 years ago
I wish everyone taught like this. Your presentation was awesome. Thank you
@attrapehareng 7 years ago
You're very good at explaining, and you also go into some detail, which is nice. Too often YouTube tutorials are too simple. Keep going.
@keokawasaki7833 6 years ago
Now that you have said that (a year ago), I kinda feel like finding the likelihood of a YouTuber making too-simple tutorials!
@WahranRai 3 years ago
0:40 Bayes wrote two books, one about theology and one about probability. He planned to write a third book, inferring the existence/non-existence of God with probability (likelihood distribution = humanity, prior distribution = miracles!)
@johneagle4384 2 years ago
Great example! Very easy to follow and understand. On a side note: I showed your video to my students and some of them objected rather "emphatically". They said it was too sexist. Crazy times we live in.... Instead of math and statistics, they wanted to discuss gender roles and stereotypes in a Stats class. Gosh!
@BrandonRohrer 2 years ago
Thanks John! I agree with your students. When I watch this now, I cringe. I definitely need to re-do it with a better example, one that doesn't reinforce outdated gender norms.
@johneagle4384 2 years ago
@@BrandonRohrer No.... Please, do not follow the mad crowds... this is an innocent, simple math example. People are getting crazy and finding excuses to feel offended and start meaningless fights!
@Terszel 1 year ago
So sad. Long hair and standing in the women's restroom line, and we can't even use Bayes' theorem to assume it's a woman 😂
@fredvin27 6 years ago
I'm confused: how do you calculate the probabilities at 17:56, P(m=13.9|w=17) and so on?
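A minimal sketch of how those values can be computed, assuming (as the video appears to do) a normal likelihood centered on the candidate weight w, with the ~1.2 lb standard error as its spread; the exact spread used on screen is an assumption here:

```python
# Hypothetical sketch: evaluating P(m | w) for each measurement,
# assuming a normal error model with the standard error as its spread.
from scipy.stats import norm

w = 17.0          # candidate true weight (lb)
spread = 1.2      # assumed spread (standard error of the 3 measurements)

for m in [13.9, 14.1, 17.5]:
    density = norm.pdf(m, loc=w, scale=spread)
    print(f"P(m={m} | w={w}) ~ {density:.4f}")
```

Strictly, these are density values read off the curve rather than probabilities, a point raised elsewhere in the thread.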
@KunwarPratapSingh41951 6 years ago
I was looking for intuitive content to introduce me to the essence of Bayes' theorem in statistics; thanks for this. Luckily I found your blog about machine learning and robotics. That's everything I wanted under one roof: robotics, data science, and machine learning.
@SuperJg007 6 years ago
Amazing. I already knew what Bayes theorem was, but you have an awesome intro to Bayes. Thanks for the video.
@ingobethke2413 5 years ago
Great video. I'm slightly confused about the dog example at around 20:00: why can we use the standard error of the prior distribution in the likelihood computation for the measurements? Don't we have to model the distribution mean and spread separately, i.e. explore many different standard error values? Generally, is w supposed to represent only one number (e.g. the true dog weight in the example) or an entire distribution that can be characterised by its moments (with the true dog weight simply being its first moment)?
@Blooddarkstar 8 years ago
This video deserves more thumbs up. I understood a lot on a lazy Sunday evening :) Great explanation.
@dmhowe2001 7 years ago
I thought this was not only a great example of Bayes but also a nice intro to Cox's theorem. Nice job!
@BrandonRohrer 7 years ago
* quickly looks up Cox's Theorem * Why, yes it does Donna. Thank you! :)
@Explorer4239 6 years ago
Thanks for the excellent presentation! One question though: at 17:49, P(m=[13.9, 14.1, 17.5] | w=17) is factorized as follows:
P(w=17 | m=[13.9, 14.1, 17.5]) = P(m=[13.9, 14.1, 17.5] | w=17) = P(m=13.9 | w=17) * P(m=14.1 | w=17) * P(m=17.5 | w=17)
Then at 20:47, P(m=[13.9, 14.1, 17.5] | w=17) * P(w=17) is expanded into:
P(w=17 | m=[13.9, 14.1, 17.5]) = P(m=[13.9, 14.1, 17.5] | w=17) * P(w=17) = P(m=13.9 | w=17) * P(w=17) * P(m=14.1 | w=17) * P(w=17) * P(m=17.5 | w=17) * P(w=17)
How do you get that P(m=[13.9, 14.1, 17.5] | w=17) * P(w=17) is equal to P(m=13.9 | w=17) * P(m=14.1 | w=17) * P(m=17.5 | w=17) * P(w=17)^3? Thanks in advance!
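For reference, a sketch of the standard factorization, in which the independent measurement likelihoods multiply together but the prior enters only once; whether the on-screen triple product of P(w=17) is a slip is debated further down the thread. The spread and prior parameters below are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm

measurements = [13.9, 14.1, 17.5]
w = 17.0
spread = 1.2                              # assumed likelihood spread
prior = norm.pdf(w, loc=14.2, scale=1.0)  # assumed prior N(14.2, 1.0)

# Independent measurements: the likelihood factorizes into a product...
likelihood = np.prod([norm.pdf(m, loc=w, scale=spread) for m in measurements])

# ...but Bayes' rule multiplies in the prior once, not once per measurement.
unnormalized_posterior = likelihood * prior
print(unnormalized_posterior)
```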
@keokawasaki7833 6 years ago
I hate stats because of those things... My teacher was teaching us utility analysis and said "satisfaction is measured in utils". To that I asked, "tell me how satisfied you are with your job, and answer in the form: n utils...". I still haven't got an answer!
@UpayanRoy-n6u 4 months ago
17:57 One query here. How is the P(m | w=17) distribution calculated? What is the spread (S.D.)? How do we, for example, arrive at any particular value for the probability of getting m = 15.6 lb given that the true weight is 17 lb? Thanks! Nice explanation.
@xxlolxx447 6 years ago
This was the best explanation of Bayes I've ever heard; I had such a hard time wrapping my head around it from other sources.
@urielvaknin6904 2 months ago
Great video, many thanks! Can someone explain the part between 19:30 and 21:40? All the calculations, and finding the final posterior distribution.
@robertoarce-tx8yt 1 year ago
This was a very intuitive explanation, man. Do more!
@joshuafancher3111 6 years ago
Excellent explanation. 15:20 and beyond is when everything really started to come together. Also thanks for deriving the formula at the 7:10 mark.
@EANTYcrown 7 years ago
Hands down the best explanation I've seen, thank you
@billgiles9662 7 years ago
Brandon, GREAT explanations!! I am taking a "Math for Data Sciences" class and had been flying through it until the final week and "Bayes' Theorem". Achk...... It was poorly explained and very confusing. I was going to drop the class as I just couldn't get it. After watching your YouTube explanation I am excited about the possibilities and understand the way it works - cool stuff! Thank you for all you do!!!
@Zachor-v-Aseh 5 years ago
Excellent. For those for whom this is the first lesson on Bayes, you've left out a few steps here and there. But still excellent. It's difficult to make things understandable. You're excellent at it.
@BrandonRohrer 5 years ago
Thanks Dov! And good callout - this focuses on the concepts and doesn't tell you quite enough to code it up. That will be the subject of a future course on e2eml.school
@Erin-uk2jj 3 years ago
Brandon, this was great, thank you. Very easy to follow and really interesting and concise!
@BrandonRohrer 3 years ago
Thanks Erin!
@wysiwyg2489 6 years ago
I believe you don't know much about statistics (the impossible thing), but I do believe you really know how to explain Bayesian inference. Great video.
@sreekanthk2911 5 years ago
I have been searching for an explanation like this for some time, and a big WOW to this guy. Wonderful explanation!!
@MonsieurSchue 1 year ago
Wow, can't believe I only came across this video now. This is by far the best explanation of Bayes, with great examples! Thanks @BrandonRohrer!! Love the example with the weight of the puppy! May I ask if you have code to deal with multiple priors / multiple events? Say, as an extension of the puppy example: if the weight change is more than one pound, plus she may be showing some other symptoms (say losing appetite), the likelihood of her being sick from something is x. Or losing appetite could just be due to the weather being too hot. So the loss of one pound since the last vet visit plus the lost appetite may not be significant at all, and doesn't warrant the multiple expensive tests suggested by the vet.
@GeoffryGifari 1 year ago
How do the shape and mean of our prior distribution tell us the best end belief? Our prior knowledge is that the puppy weighed 14.2 lbs last time and that weight changes aren't noticeable. What if the mean of our prior distribution is chosen to be larger/smaller than 14.2 lbs? And what if we take that distribution to be narrower/broader (more than 1 lb std)? An extreme case: what if we first guess that the puppy's weight is *exactly* 14.2 lbs?
@williamliamsmith4923 1 year ago
Amazing explanation and graphics!
@BrandonRohrer 1 year ago
Thanks!
@mathiasmews1122 4 years ago
You're so much better than my Statistics teacher, thank you so much for this explanation!
@BrandonRohrer 4 years ago
Thanks Mathias!
@canmetan670 4 years ago
Can someone explain what 20:22 means? 1) I get that P(w) is the probability of the weight w under the normal distribution that we have chosen - this is our prior. 2) P(m | w) is the probability of measurement m given a true weight w - where m falls on that normal distribution. 3) What is P(m)?? Thanks for the video and responses.
@dalelu9422 7 years ago
Simply the best! Thank you Brandon
@Atoyanable 3 years ago
Such a well-thought-through video, with very good explanations at every step, and the ending was a bonus. Loved it, thank you
@BrandonRohrer 3 years ago
Thank you Lilit! I appreciate that.
@joserobertopacheco298 2 years ago
I am from Brazil. What a fantastic explanation!
@BrandonRohrer 2 years ago
Thanks Jose! Welcome to the channel
@jinshuli4092 7 years ago
Excellent video! But one question: at 20:43, shouldn't we multiply by P(w=17) just once? Where do the other two factors of P(w=17) come from?
@bobcrunch 6 years ago
The dashed curve is P(m | w=17). There are three measurements (m): 13.9, 14.1, and 17.5. All three must be computed, and each is multiplied by P(w=17) (the dotted curve) separately. The three results are then multiplied together. P(w=17) is very small, and thus P(w=17 | m=[13.9, 14.1, 17.5]) is very small.
@dbtmpl1437 6 years ago
@@bobcrunch I would disagree here. I think @jinshuli4092 has a valid point. The three measurements are independent given w, and we can therefore write them as p(m_1|w)*p(m_2|w)*p(m_3|w). However, this does not affect the prior p(w) at this point. So the prior should be multiplied in only once, in my opinion.
@هشامأبوسارة-ن7و 5 years ago
@db: I can't agree with you more, as p(w) is our prior belief about w before seeing any data (measurements).
@hieuhienhoa29 2 years ago
The explanation is very good, but I have been thinking a lot and can't understand why at 20:42 you multiply by P(w=17) three times. Please help me. 😢
@karannchew2534 1 year ago
16:45 P(w) is constant as we assume w to be uniformly distributed. But why is P(m) also a constant?
@maayan1987 7 years ago
I loved the Mark Twain citation at the end! Great video, thanks! However, I have one question: do you assume normal distributions for the likelihood? Why so?
@thomasprobst8601 1 year ago
Excellent introduction, thanks. Is there also a continuation concerning graphical prior selection and Jeffreys priors?
@s45510325 6 years ago
The best explanation I have ever seen! Super clear.
@Nifty-Stuff 2 years ago
Absolutely brilliant! Your presentation, examples, etc. were perfect and applicable! Thanks!
@BrandonRohrer 2 years ago
Thank you very much :)
@lenkapenka6976 3 years ago
Superb lecture - esp. the MLE explanation!
@ramkomusique 2 years ago
Very clear examples and explanations, thank you! Do you happen to have some R workshops on how to apply Bayesian inference as well?
@BrandonRohrer 2 years ago
Thanks! I don't have anything like workshops, but if I were looking for them I'd start with Richard McElreath xcelab.net/rm/courses/
@lonandon 6 years ago
At 20:42, why is that normal distribution turned into the flat one? Is it because we have a prior normal distribution and we need to fit 17 to its curve? It confuses me, as it seems like there are two priors: prior(14) is used to make prior(17) flat, and then prior(17) is used to calculate the probability.
@mariorodriguesperes1501 5 years ago
Great video!! A very nice and easy-to-digest explanation of Bayes' theorem! Thank you very much for sharing this excellent material. I have got a better understanding of how to apply it to my problems. Keep up the great work!
@sammorningstar6818 6 years ago
Great video!! But I am confused: at 14:00 we got an SD of 2.0 lb, and in the next slide you took a normal distribution with an SD of 1.2 lb. What is the reason for not using the sample SD and instead using the SE for the normal distribution? If this requires a lot of explanation, then a link to any website explaining it would also be very helpful.
@sbz_ 6 years ago
Yeah, I also had the same doubt.
@bga9388 1 year ago
Thank you for this excellent presentation!
@AI_ML_DL_LLM 1 year ago
A great video; I think Bayes is finally going to sink in for me. Two questions, if I may. Q1: at 16:44, shouldn't C1 be equal to C2? Q2: at 16:28 you ignored P(m), as you said it is constant. Can you please explain why? In machine learning this term is the most problematic one, and people go extra miles to address it. Thanks
@amanjain3341 7 years ago
Thank you sooo much Brandon for explaining the concepts so clearly.
@Blazzerek 7 years ago
14:05 In the computation of the standard deviation, why are you using 15.1 instead of the 15.2 calculated for the mean? And also, why division by 2 (instead of 3, the number of data points)?
@wexwexexort 7 years ago
@Július Marko In statistics you divide by n-1 to find the sample's std dev. The 15.1 must be nothing but a typo.
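A quick sketch of that computation with the video's three measurements, which also shows where the ~1.2 lb figure used later comes from (sample standard deviation with the n-1 divisor, then standard error):

```python
import numpy as np

m = np.array([13.9, 14.1, 17.5])

mean = m.mean()                       # ~15.17 lb (the 15.1 vs 15.2 looks like a rounding slip)
std_dev = m.std(ddof=1)               # divide by n-1 = 2  ->  ~2.02 lb
std_err = std_dev / np.sqrt(len(m))   # ~1.17 lb, the ~1.2 used for the likelihood curve
print(mean, std_dev, std_err)
```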
@АлександрФедоров-п5э 7 years ago
20:47 Why do we multiply by P(w=17) 3 times instead of once? Update: oh, I found it in the comments below. In that case, shouldn't the equals sign between the first and second lines be dropped? Because they are really not equal, and it does not align with Bayes' theorem. P.S. Sorry for my bad English.
@jacktretton7815 4 years ago
Best explanation I have found on the topic so far. Great work!!
@jeppejwo 8 years ago
Great explanation, but I'm having a hard time understanding why you should use the standard error, rather than the standard deviation, as the width when you calculate the likelihood. Doesn't this mean that you are calculating the probability that your measurement is the true mean, and not the probability of getting that measurement given your mean? Or perhaps that's what you're supposed to do?
@BrandonRohrer 8 years ago
You hit it spot on, jeppe. The assumption that I glossed over is that Reign's actual weight will be the mean of all the measurements (if we kept taking measurements forever). However, since we only have a few measurements, we have to be content with an estimate of that true mean. The standard error helps us to determine the distribution of that estimate. We don't care about how far an individual measurement tends to be off, we only care about how far our estimate of the weight is off.
@ssrrapper 7 years ago
At 14:25 you say: "one standard deviation on that curve is our standard error of 1.2 pounds". How can this be? How can the standard error be the same as the standard deviation, when the standard error equals the standard deviation divided by the square root of the number of observations? The standard deviation is 2 pounds, not 1.2 pounds, correct? The previous slide shows:
std dev = 2.0 lbs
std err = 1.16 lbs
@randykintzley5923 6 years ago
Great video. I have a basic question though. At 17:53 you point to y-axis values on the normal curve as probabilities. I thought probability was the area under the curve between two points, i.e. the probability of an individual point on a continuous curve is zero. I'm missing something fundamental here. Maybe you can provide additional explanation? Thanks!
@УэстернСпай 3 years ago
I was wondering the same....
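The distinction the thread is circling: the height of the curve is a probability *density* (it can exceed 1 and has units of 1/lb), while probabilities are areas under the curve. In practice the density at each measurement is used as the likelihood, because the infinitesimal interval width cancels in Bayes' rule. A small sketch, with the curve parameters assumed for illustration:

```python
from scipy.stats import norm

dist = norm(loc=17.0, scale=1.2)   # assumed curve parameters

# Height of the curve at a point: a density, not a probability
print(dist.pdf(17.0))              # ~0.33 (per lb)

# A probability is the area under the curve between two points
print(dist.cdf(17.5) - dist.cdf(16.5))   # P(16.5 < m < 17.5) ~ 0.32
```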
@rasraster 2 years ago
I don't understand your method of getting the posterior distribution. There is nothing in Bayes' formula that implies a summation over all possible parameters in the likelihood function. It seems to me that the way to get the posterior is to compute the likelihood distribution from the measured samples, and then, for each possible weight, multiply the value of the (static) likelihood distribution by the value of the prior distribution. Can you explain why your method works? Thank you.
@houssemguidara4467 4 years ago
Thank you for the video; it helped me understand the concept of Bayesian inference. The concept is simple: in a nutshell, you have an idea of what the quantity is, and then you use the measurements to sharpen your assumption.
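A compact sketch of that whole loop on a grid of candidate weights: start from a prior, multiply in the likelihood of each measurement, normalize, and read off the most probable weight. The prior (last known weight 14.2 lb, ~1 lb spread) and the measurements follow the video; the likelihood spread is an assumption:

```python
import numpy as np
from scipy.stats import norm

weights = np.linspace(10, 22, 1201)             # candidate true weights (lb)
prior = norm.pdf(weights, loc=14.2, scale=1.0)  # prior belief about the weight

measurements = [13.9, 14.1, 17.5]
spread = 1.2                                    # assumed measurement spread

# Likelihood of all three measurements at every candidate weight
likelihood = np.ones_like(weights)
for m in measurements:
    likelihood *= norm.pdf(m, loc=weights, scale=spread)

# Posterior = likelihood * prior, normalized (the division by P(m))
posterior = likelihood * prior
posterior /= posterior.sum() * (weights[1] - weights[0])

map_weight = weights[np.argmax(posterior)]
print(f"most probable weight: {map_weight:.2f} lb")  # lands between prior mean and sample mean
```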
@frbaucop 10 months ago
Hello. Q1: At 5:42, where does the 0.96 come from? The square "says" P(woman) = 0.02, P(man) = 0.98. Should we read P(man AND long) = P(man) * P(long | man) = 0.98 * 0.04 ≈ 0.04? Q2: At 17:00, I understand the mean of the normal distribution in the back is 17. OK, but what is the standard deviation? Is it equal to the one calculated from the 3 values (13.9, 17.5, 14.1)? Do we use the standard error or something else? This is not yet clear to me. Thanks
@StephenHsiang 7 years ago
Best explanation on YouTube so far
@kirankk9565 7 years ago
The best explanation I have seen of Bayes' theorem... awesome... thanks a ton....
@specialkender 5 years ago
THANK GOD YOU EXIST. Finally somebody that actually breaks down the most important part of it (namely, how you calculate the likelihood in practice). I hope life rewards you beautifully.
@tracyshen430 7 years ago
The math at the 5:39 mark has an error: it should be 0.98 * 4/98 = 0.04. I checked your blog and it is also wrong there. There's no way the product of the two numbers shown becomes 0.04. Please correct this video and your blog.
@mariamedrano4348 7 years ago
Excellent examples and explanation! Now everything is so much clearer. :)
@eyesonthetube 6 years ago
Thanks for the excellent video. A good refresher! Keep up the good work!
@lemyul 5 years ago
I like the quotes you put at the end and how you reworded them
@lonandon 6 years ago
At 18:23, he said "by the time we are done" - what does that mean? Does it mean the sum of the probabilities of those three measurements is the maximum?
@pythonicly2646 3 years ago
At 6:51, can't we simply calculate P(man | long hair) = (number of men with long hair) / (total number of people with long hair) = 2/27 ≈ 0.07? (for the cinema example)
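Yes; with the cinema head counts, the direct count and Bayes' rule give exactly the same number, which is a nice sanity check. A tiny sketch (the 50/50 split, 2 long-haired men, and 25 long-haired women are taken from the video's example as I recall it; treat them as assumed here):

```python
men, women = 50, 50            # assumed cinema head counts
men_long, women_long = 2, 25   # long-haired men and women

# Direct count
direct = men_long / (men_long + women_long)   # 2/27 ~ 0.074

# Bayes' rule: P(man | long) = P(long | man) P(man) / P(long)
p_man = men / (men + women)
p_long_given_man = men_long / men
p_long_given_woman = women_long / women
p_long = p_long_given_man * p_man + p_long_given_woman * (1 - p_man)
bayes = p_long_given_man * p_man / p_long

print(direct, bayes)   # both ~0.074
```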
@balasubramanianilangovan888 4 years ago
Hi Brandon, your video was simple, superb, and stupendous!
@SeverSava 2 years ago
At 07:20 I found it a bit difficult to understand the difference between P(man with long hair) and P(man | long hair). I guess the first refers to the probability of being a man with long hair out of the entire cohort of men and women, whereas the second refers to the probability of being a man with long hair, but just within the men's cohort. Is that right?
@AnasHawasli 1 year ago
The best explanation on YouTube. Thank you man
@boyangfu820 5 years ago
May I know why the standard error decreases after applying MAP at 21:25?
@abdulmukit4420 4 years ago
At 20:42, why is P(w=17) multiplied 3 times? Shouldn't it be P(m=13.9 | w=17) * P(m=14.1 | w=17) * P(m=17.5 | w=17) * P(w=17)?