What is D-Separation? | Conditional Independence

23,378 views

Machine Learning & Simulation

A day ago

Comments: 42
@MachineLearningSimulation · 2 years ago
Errata: 14:00 The rule is noted down incorrectly; I mistook B & C. Correct would be p(A) p(B) = p(A, B), i.e., marginal independence with no conditioning on C. Thanks to @Ngoc Anh Nguyen for pointing this out. The file on GitHub has been updated accordingly: github.com/Ceyron/machine-learning-and-simulation/blob/main/english/probabilistic_machine_learning/directed_graphical_models_d_separated.pdf 17:13 The result should, of course, be that N & O are d-separated given W. (I wrote, and also said, that N & P were d-separated given W, which is not true!) Thanks to the anonymous user who spotted this.
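To make the corrected rule easy to verify, here is a minimal sketch using NetworkX's d-separation test on the collider A -> C <- B (this assumes a NetworkX version that still ships `d_separated`; releases from 3.3 onwards rename it to `is_d_separator`):

```python
import networkx as nx

# Collider structure from the corrected rule: A -> C <- B
G = nx.DiGraph([("A", "C"), ("B", "C")])

# Marginally, A and B are d-separated: p(A) p(B) = p(A, B)
print(nx.d_separated(G, {"A"}, {"B"}, set()))  # True

# Conditioning on the collider C couples them again
print(nx.d_separated(G, {"A"}, {"B"}, {"C"}))  # False
```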
@haishanhuang-zd3zx · 6 months ago
Thanks for the useful video! But I still have a small question about this part: in basic rule 3, do we say A and B are d-separated by C, or A and C are d-separated by B? I got a bit confused here.
@mikelmenaba · 2 months ago
Hello, great video! Why are N & P not d-separated given W, if the path is indeed blocked (at H)?
@a.e-u2c · 3 years ago
Best d-separation explanation out there. Thank you so much!
@MachineLearningSimulation · 3 years ago
Thanks a lot for the feedback :)
@souravdey1227 · 3 years ago
This is by far the best explanation of d-separation. These concepts are hard to grasp, and illustrating them with examples really clears up a lot of grey areas.
@MachineLearningSimulation · 3 years ago
Thanks so much :) It was the same for me: examples really helped me a lot in getting the full understanding. I also did a video on how to check for d-separation in Python using NetworkX: kzbin.info/www/bejne/Z6iwi5ybn9J6jbc I always find using libraries, or coding it up yourself, particularly valuable.
@souravdey1227 · 3 years ago
@@MachineLearningSimulation Checked it out. Once again, clear and crisp. Thank you so much.
@sh4ny1 · 1 year ago
Hi, I am a bit confused. In a previous video you talked about how, if the arrow goes W -> H, the joint p(W, H) should be p(H|W) p(W). Now at 08:55 we have something similar, W -> H -> P, so shouldn't p(W, P | H) = p(H|W) p(P|H)? Thank you
@MachineLearningSimulation · 1 year ago
Thanks for the question :) Here, we introduced an observed variable, which changes the game a bit. The goal of these simple rules is no longer just to factor the joint (the factorization rules always hold in directed graphical models), but to find out how observed variables change the relations between the other variables. The rule I noted down was derived purely by reasoning about the graph.
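For anyone who wants the algebra behind this answer: for the chain W -> H -> P, the graph gives the factorization p(W, H, P) = p(W) p(H|W) p(P|H), and the conditional independence given H follows directly (the last step uses Bayes' rule, p(W) p(H|W) = p(W|H) p(H)):

```latex
p(W, P \mid H) = \frac{p(W, H, P)}{p(H)}
               = \frac{p(W)\, p(H \mid W)}{p(H)}\, p(P \mid H)
               = p(W \mid H)\, p(P \mid H)
```

So the factor on P is indeed p(P|H), as suggested above, but the factor on W comes out as p(W|H) rather than p(H|W) p(W).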
@sh4ny1 · 1 year ago
@@MachineLearningSimulation Thank you for the clarification. So, based on what I understand, this is due to the fact that in this specific problem we had an observed variable that depends on two unobserved ones. Since only H is given, we would say that, given H, W and P are independent. I also watched some other videos on active and inactive paths between triples in a graph, which made this concept somewhat clearer. Also, could you please share the reference material? I am trying to read some papers on variational autoencoders, and every paper introduces some notation that throws me off. I am trying to get to the bottom of this, haha.
@MachineLearningSimulation · 1 year ago
Of course :) It probably takes some practice to internalize these rules. I can recommend doing some examples with networkx (a graph theory library in Python); I also have a video on this (it should be the next one in the probabilistic ML series). A general reference is Bishop's "Pattern Recognition and Machine Learning".
@rayx.5602 · 3 years ago
@11:05: should it be Berkson's paradox instead?
@MachineLearningSimulation · 3 years ago
I think that Berkson's paradox is related to sampling bias, therefore it should be Simpson's, but I could be wrong. Maybe this link could be a resource: stats.stackexchange.com/questions/445341/simpsons-paradox-vs-berksons-paradox What do you think?
@user-or7ji5hv8y · 3 years ago
In your last example, why is H blocking, given that H is not observed?
@MachineLearningSimulation · 3 years ago
Fair question. I think I wasn't too precise on this one. My initial goal was to show that N and O are d-separated given W. In this case, H is blocking because rule 3 applies (the case with Simpson's paradox). In essence, we then have two nodes blocking on the way from N to O. But there is one important point I missed: H is only blocking when looking at the conditional independence of N and O. When we look at the relation between N and P, H is no longer blocking (as long as it is latent; if it were observed, it would of course block again, because of rule 2). I hope this makes sense. Let me know if it is still confusing.
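As a concrete check of this answer, here is a sketch with NetworkX, assuming the video's example graph has the edges N -> H, W -> H, H -> P, and W -> O (this edge list is reconstructed from the discussion here, not taken verbatim from the video):

```python
import networkx as nx

# Assumed reconstruction of the example graph: N -> H <- W -> O, with H -> P
G = nx.DiGraph([("N", "H"), ("W", "H"), ("H", "P"), ("W", "O")])

# N and O are d-separated given W: the collider H blocks (rule 3, since
# neither H nor its descendant P is observed), and the observed fork W blocks too.
print(nx.d_separated(G, {"N"}, {"O"}, {"W"}))  # True

# N and P are NOT d-separated given W: the chain N -> H -> P
# stays active because H itself is latent.
print(nx.d_separated(G, {"N"}, {"P"}, {"W"}))  # False
```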
@keeperofthelight9681 · 2 years ago
Why did you remove the playlists :’(
@MachineLearningSimulation · 2 years ago
What are you referring to? The playlist should still be available: 🎲 Probabilistic Machine Learning: kzbin.info/aero/PLISXH-iEM4JlFsAp7trKCWyxeO3M70QyJ
@user-kn7fm1jm2y · 3 years ago
Great content, thanks a lot! Just to make sure, you meant N & O, right? Because N & P don't seem to be d-separated (H is not observed, so you can go from N to P through H). Thanks again :)
@MachineLearningSimulation · 3 years ago
Hey, thanks a lot :) You're absolutely right, I meant N & O.
@qiguosun129 · 2 years ago
Thanks for the lecture! Since I am working on an article about DAGs: if you have any papers published on this, I would love to cite them.
@MachineLearningSimulation · 2 years ago
Hey, thanks for the amazing feedback :) I am super happy I could help to that extent. There is no publication of mine in that regard; it is not my primary field of research. However, if it is an informal article, you could cite the GitHub repo: github.com/ceyron/machine-learning-and-simulation (on the right side of that page you will find the button "Cite this repository", which produces a BibTeX entry for you). If that is not appropriate for the publication you plan, then I am equally happy if you could spread the word about the channel and promote it in your environment, or maybe give it a shoutout on social media (if applicable). Thanks again!
@qiguosun129 · 2 years ago
@@MachineLearningSimulation Thanks for the reply!
@NgocAnhNguyen-si5rq · 2 years ago
Hi! Great content! But I think the third rule should be p(A) p(B) = p(A, B), with no conditioning on C.
@MachineLearningSimulation · 2 years ago
Hey, thanks for the reply :) Which part of the video are you referring to (maybe a timestamp)? If I remember correctly, this should be how I presented the rule. The third basic rule should be: marginal independence, but conditional dependence.
@NgocAnhNguyen-si5rq · 2 years ago
@@MachineLearningSimulation It's at 14:00. Correct me if I'm wrong ^^
@NgocAnhNguyen-si5rq · 2 years ago
@@MachineLearningSimulation Yes: if A -> C and B -> C and C is unobserved, then A and B are independent, but A and B are conditionally dependent if we condition on C. I think you just mistook B & C.
@MachineLearningSimulation · 2 years ago
@@NgocAnhNguyen-si5rq You are right :) Good catch. Indeed, there is a mistake in the presentation (I switched B & C). I will leave a pinned comment. Thanks a lot for figuring this out :) Unfortunately, I no longer have access to my written files from this older video, but I will try to correct the PDF on GitHub as soon as possible.
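To see the corrected rule numerically, here is a small sketch with a hypothetical collider model (A and B are fair coins, C = A XOR B; the model and variable names are illustrative only, not from the video):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000
a = rng.integers(0, 2, n)  # A ~ Bernoulli(0.5)
b = rng.integers(0, 2, n)  # B ~ Bernoulli(0.5), drawn independently of A
c = a ^ b                  # collider: C = A XOR B

# Marginally, A and B are (empirically) uncorrelated ...
print(np.corrcoef(a, b)[0, 1])  # close to 0.0

# ... but conditioned on C = 1, B is forced to equal 1 - A.
mask = c == 1
print(np.corrcoef(a[mask], b[mask])[0, 1])  # close to -1.0
```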
@NgocAnhNguyen-si5rq · 2 years ago
@@MachineLearningSimulation You're welcome. Keep up with your good work ^^.
@Ash-hl1km · 1 year ago
Hi, if P is observed, are N and O conditionally independent given P? My thinking is that W is not blocked, but I am confused about whether H is blocked or not. H looks like scenario 3, which would mean it is blocked, but I am unsure haha... Great video btw!
@MachineLearningSimulation · 11 months ago
Hi, thanks a lot for the kind feedback :) Sorry for the delayed response; I hope it is still helpful 😊. Correct is: "N & O are NOT conditionally independent given P". I missed one detail for rule (3): the collider is also unblocked if a descendant of that node is observed. In your scenario, P is observed, and P is a descendant of H. As such, rule (3) applies to the triplet N -> H <- W, the path between N and O becomes active, and the two are dependent given P.
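Continuing with the same assumed graph as above (edges N -> H, W -> H, H -> P, W -> O, reconstructed from this thread), NetworkX agrees with this answer on the descendant rule:

```python
import networkx as nx

G = nx.DiGraph([("N", "H"), ("W", "H"), ("H", "P"), ("W", "O")])

# Observing P, a descendant of the collider H, re-opens the path
# N -> H <- W -> O, so N and O are NOT d-separated given P.
print(nx.d_separated(G, {"N"}, {"O"}, {"P"}))  # False
```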
@Mueen520 · 2 years ago
Thank you so much!
@MachineLearningSimulation · 2 years ago
Glad it helped!
@omarperezr · 2 years ago
Thank you so much!
@MachineLearningSimulation · 2 years ago
You're very welcome 😊
@jananpatel9030 · 7 months ago
Exam in 20 minutes, thanks haha
@MachineLearningSimulation · 7 months ago
Best of luck! 😉
@mohamadroghani1470 · 3 years ago
Perfection!
@MachineLearningSimulation · 3 years ago
Nice streak of comments, love it :) Let me know if you have any additional topic proposals or things you would want to see covered.
@davidkoleckar4337 · 2 years ago
That English...
@MachineLearningSimulation · 2 years ago
I hope it left a good impression ;) Let me know if anything is unclear.