Bayesian Networks

328,078 views

Bert Huang · A day ago

Comments: 61
@theedmaster7748 · A year ago
My AI professor explained this so badly that I had no idea what was going on. Thanks for this in-depth and logical explanation of these topics.
@luckyshotjpg · 4 years ago
Best explanation of probability I've received in my whole academic career, thank you.
@chrisminnoy3637 · A year ago
Completely agree. My current professor makes it way too hard to understand, and I've never understood the point of making things so abstract that students can't follow. What, then, is the point of education?
@fall9897 · 4 years ago
For anyone who has trouble wrapping their head around why variable elimination is more efficient, writing out the explicit for loops to compute P(Y) was really helpful for me.

If we assume W, X, Y, Z each have K possible values, the naive triple sum has K^3 terms, and we need to compute it for each of the K possible values of Y, for a total of K^4 operations.

With variable elimination, we first compute f_W(x):

  for each value of X, call this x:
    f_W(x) = 0
    for each value of W, call this w:
      f_W(x) += P(w) * P(x|w)

Notes:
- Capital letters denote random variables; lower-case letters denote values of the corresponding random variable.
- f_W(x) is a table containing K numbers, one for each value of X.
- The innermost operation is constant time, because we are just looking up values in tables.
- In total it takes K^2 operations to compute the f_W(x) table, which we then store away.

Next we compute f_X(y), a table with one entry per value of Y, again in K^2 operations, and store it away:

  for each value of Y, call this y:
    f_X(y) = 0
    for each value of X, call this x:
      f_X(y) += P(y|x) * f_W(x)

Last, we compute P(Y), a table with one entry per value of Y, in another K^2 operations:

  for each value of Y, call this y:
    P(y) = 0
    for each value of Z, call this z:
      P(y) += P(z|y) * f_X(y)

Computing P(Y) by variable elimination therefore takes 3·K^2 operations, which is much less than K^4 for large K! Basically, by computing each of these tables in the right order, we avoid repeating work that we already did.
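Below is a minimal runnable version of the same computation (a sketch in Python/NumPy with made-up table values; the binary variables and the chain W → X → Y → Z are assumptions matching this comment, not necessarily the video's exact example):

  import numpy as np

  K = 2  # number of values per variable (binary here; any K works)

  # Made-up conditional probability tables; each row sums to 1.
  P_W = np.array([0.6, 0.4])            # P(W)
  P_X_given_W = np.array([[0.7, 0.3],   # P(X | W); row w, column x
                          [0.2, 0.8]])
  P_Y_given_X = np.array([[0.9, 0.1],   # P(Y | X); row x, column y
                          [0.5, 0.5]])
  P_Z_given_Y = np.array([[0.4, 0.6],   # P(Z | Y); row y, column z
                          [0.1, 0.9]])

  # Variable elimination: each step is a K^2 table operation.
  f_W = P_W @ P_X_given_W               # f_W(x) = sum_w P(w) P(x|w)
  f_X = f_W @ P_Y_given_X               # f_X(y) = sum_x f_W(x) P(y|x)
  P_Y = f_X * P_Z_given_Y.sum(axis=1)   # sum_z P(z|y) = 1, so Z sums out

  # Check against summing the full joint over w, x, z (the K^4 approach).
  P_Y_check = np.einsum('w,wx,xy,yz->y', P_W, P_X_given_W, P_Y_given_X, P_Z_given_Y)

  print(P_Y, P_Y_check)  # both print the same marginal, which sums to 1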
@themaninjork · 2 years ago
Thanks!
@ksha03 · 18 days ago
@@themaninjork This is a great explanation. I had trouble wrapping my head around this.
@jacobmoore8734 · 3 years ago
By far the best explanation of variable elimination; thanks for motivating it via brute-force enumeration. For the longest time, it wasn't clear to me that VE is about reducing computational cost, not about being the only possible mathematical solution to the problem.
@prasad9012 · 3 years ago
I was struggling to understand this in my class. Glad I came here.
@CapsCtrl · 3 years ago
I have an assignment on this that I need to deliver in two hours, and this video is saving me right now!
@brandoncazares8452 · 3 months ago
I struggled to understand this in my class; I'm glad I watched this video. It's very helpful.
@joshuasegal4161 · 5 years ago
Excellent video. You brought up a lot of small things that I was confused about and explained them.
@aylavanderwal · 4 years ago
Your explanation is brilliant; it gives a very good intuition for the theory. Thanks a ton!
@fupopanda · 5 years ago
Around 9:43 you simply state that P(S|W,R) reduces to P(S|W), but you never give a more formal explanation of why. I know it's because of conditional independence. You could easily have added clarity by stating that you started with the chain rule of probability and then applied the conditional independence assumption. That would save anyone who has learned basic probability theory a few minutes, instead of making them pause to think through what just happened.
@berty38 · 5 years ago
Thanks! I'll definitely try to clarify that better next time I teach this topic.
@tobiasuhmann5088 · 5 years ago
Thanks for that note. To make it even more explicit for people who still had to think about it (like me): if two variables A and B are conditionally independent given C, then P(A|B,C) = P(A|C). Here, S and R are conditionally independent given W (which is counterintuitive, as mentioned in the video). Therefore, P(S|W,R) = P(S|W).
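Spelled out (a sketch assuming the chain structure R → W → S from the video, i.e. rain causes wet ground, which causes slipping):

  Chain rule:        P(R, W, S) = P(R) · P(W | R) · P(S | W, R)
  Graph assumption:  S ⊥ R | W, so P(S | W, R) = P(S | W)
  Factorization:     P(R, W, S) = P(R) · P(W | R) · P(S | W)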
@ajit_edu · 4 years ago
At 12:23, doesn't c, r mean carwash AND (not OR) rain, as mentioned in the lecture?
@bitvision-lg9cl · 2 years ago
Very impressive, you make the model crystal clear. Now I see that computing with a Bayesian network is nothing more than efficiently calculating a probability (for discrete variables) or a probability distribution (for continuous variables).
@zeratulofaiur2589 · 4 years ago
How come the condition is "Rain or Carwash" and not "Rain and Carwash"?
@caroleaddis1885 · 5 years ago
This was great. Please do more! 🙏🏼
@jeffreyyoung1604 · 4 years ago
This is a great video on Bayesian networks. Other people creating videos should take notes from this one.
@huangbinapple · 4 years ago
What's the difference between enumeration and variable elimination anyway? I still think it's only a difference in notation.
@curioussoul5151 · A month ago
Best explanation on the internet.
@RexGalilae · 5 years ago
Wait. What do the commas actually denote? It seems confusing that they're being used to denote both "AND" and "OR" (intersection and union), like at 14:49. Can someone explain what's going on?
@chrisritchie7841 · 5 years ago
The commas denote joint conditioning: P(W | C, R) is the probability of W given that both C and R hold. In general it does not factor as P(W | C) P(W | R).
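As a concrete reading (a sketch assuming the video's network, where C and R are both parents of W and S is a child of W):

  P(W | C, R)   = probability of W given that C AND R both hold (one table, indexed by the joint values of C and R)
  P(C, R, W, S) = P(C) · P(R) · P(W | C, R) · P(S | W)

The comma always denotes a joint event; any factorization comes from the graph structure, not from the comma.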
@jallehansen17 · 5 years ago
Great video. Would love to see the code for that assignment.
@rolfjohansen5376 · 2 years ago
Does "variable elimination" imply that the overall network's functionality got changed? Thanks.
@cosmopaul8773 · A year ago
Thanks for a great video! It helped me a lot in understanding this stuff for my uni course :)
@nulliusinverba7732 · 5 years ago
For the slip node, can we say that it is conditionally independent of rain? Or is it independent? Or is it still related indirectly? Does the order of summations in variable elimination matter? Also, what are observed and unobserved variables? I.e., are ancestor variables observed variables? Or are they the marginalized variables? Or something else?
@Ptolémé-ll · A month ago
Thank you! Good introduction.
@shan35178 · 2 years ago
Which book is he using as a reference?
@anuragtiwari9053 · 4 years ago
This was a literal saviour! Thanks a ton!
@xieen7976 · 3 years ago
When you eliminate c, you get f(w), but where does r go?
@Recordingization · 4 years ago
Good lecture; it's a big help for me in understanding Bayesian networks and the formulas.
@BlackHermit · 4 years ago
Great video, extremely clear and helpful. :)
@zhirongwang6610 · 4 years ago
Your voice doesn't sound like your photo.
@chingiskhant4971 · 3 years ago
Wtf, how is this so simple? Has it always been this simple? Thanks.
@msds2930 · 3 years ago
Damn, what a voice. Thanks for this.
@ea1766 · 2 years ago
Top-tier video, without a doubt.
@lancelofjohn6995 · 3 years ago
Now I finally understand Bayesian networks and the notation.
@ShubhamSinghYoutube · 4 years ago
Came here searching for coal, found gold ✌🏻✌🏻✌🏻✌🏻✌🏻
@magdalenapiekarczyk8750 · A month ago
Your voice is gorgeous!!!
@newbie8051 · A year ago
Very simple explanation, thanks!
@isaacnattanpalmeira6657 · 5 years ago
Really useful, thanks!
@ДуховныйРост-м8п · 4 years ago
Thanks! Very nice explanation!
@sultanyerumbayev1408 · 6 years ago
Thanks for sharing this lecture video!
@lakshman587 · 2 years ago
Thank you for the video!! :)
@superuser8636 · 3 years ago
Great video. Thanks a lot!
@lamyaeelhaddioui4064 · 5 years ago
Can you tell me what else we need to know about this data mining method, beyond what's covered here, please?
@owendebest4183 · 3 years ago
Love your voice, bro!
@LMGaming0 · 4 years ago
Great video!
@ajayhemanth · 5 years ago
Good explanation!
@morzen5894 · 11 months ago
Very unclear and confusing. Using Venn diagrams to represent some of the probabilities, and giving a detailed worked example of the math with actual numbers, would be of great help for people discovering the subject. I'm fairly sure this is a great video for people who already understand the subject or have some grasp of it, but for newcomers it is very confusing. Not to mention the jump in difficulty between the first part, which is quite easy to understand (although Venn diagrams would help), and the second part, which looks like Elvish.
@jonashallbook6312 · 2 years ago
Did what my teacher tried to do in an hour in 5 minutes, and did it better.
@salmanabdulla9165 · 4 years ago
Great!
@saisheinhtet2446 · 9 months ago
Awesome!
@BazzTriton · 4 years ago
Very good.
@micahchurch5733 · 6 years ago
Great video, but for the slipping bit your intuition isn't always true: as you said, wet ground doesn't necessarily mean it was raining, so it could not have rained and you could still slip on dew-covered grass. Loving this video though, as I don't know probability or Bayesian classifiers, which come up in my literature for NNs. Okay, you crossed out the intuition, lol, I had paused the video. My bad.
@RishiRajvid · A year ago
From Bihar (India).
@dr.merlot1532 · A year ago
Absolutely useless.