14.3 - Mediation · 15:37 · 4 years ago
14.2 - Computing Counterfactuals · 13:37
14.1 - Counterfactuals · 5:23 · 4 years ago

Comments
@ed9w2in6 · 1 day ago
Thank you for your time and effort in putting together all these materials.
@gianlucama118 · 16 days ago
I think the second point of the definition of a blocked path may contain a small mistake: since we require that none of the descendants of W is in the conditioning set Z, the formulation should be ( de(W) \cap Z = \emptyset ), where \cap is the intersection symbol.
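The collider condition discussed above can be sketched in a few lines of plain Python. The graph, node names, and helper functions here are hypothetical, just to illustrate checking de(W) ∩ Z = ∅ (with de(W) taken to include W itself, as in the lecture's convention):

```python
# Hypothetical graph as an adjacency list: parent -> children.
# A -> W <- B is a collider at W, and W -> D.
graph = {
    "A": ["W"],
    "B": ["W"],
    "W": ["D"],
    "D": [],
}

def descendants(g, node):
    """Return de(node): the node itself plus everything reachable from it."""
    seen, stack = {node}, [node]
    while stack:
        for child in g[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def collider_blocks(g, w, Z):
    """The path is blocked at collider w iff de(w) ∩ Z = ∅."""
    return descendants(g, w).isdisjoint(Z)

print(collider_blocks(graph, "W", {"D"}))   # False: D ∈ de(W) unblocks the path
print(collider_blocks(graph, "W", set()))   # True: not conditioning on de(W) blocks it
```

Conditioning on the collider's descendant D opens the path A → W ← B, which is exactly why the definition intersects Z with all of de(W), not just {W}.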
@gianlucama118 · 17 days ago
As for many, it took me a second to digest both the local Markov assumption and the example justifying the need for minimality, especially looking at the Bayesian network factorization. Here are a few rewordings that made it clear for me and that I did not find in other comments:
- The local Markov assumption basically states that a node may (not must!) depend solely on its parents.
- When we have the graph (x -> y) and we use the Bayesian network factorization (which is equivalent to the local Markov assumption), we get p(x,y) = p(x)p(y|x). However, when we consider p(y|x) we should keep in mind that it may also happen that p(y|x) = p(y), when x and y are independent — which is the case for the graph (x y) where x and y are two unconnected nodes. Hence the two possible explanations. In other words, similarly to the first point, remember that p(y|x) means the probability of y may (not must!) be conditioned on x.
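The point above can be checked numerically: a distribution where y is independent of x still satisfies the factorization for the graph x -> y. A minimal sketch with a made-up binary distribution:

```python
import numpy as np

# Hypothetical joint distribution over binary x, y, stored as a 2x2 table p[x, y].
# y is generated independently of x, yet the graph x -> y is still
# Markov-compatible: p(x, y) = p(x) p(y|x) always holds.
p_x = np.array([0.3, 0.7])
p_y = np.array([0.6, 0.4])
p = np.outer(p_x, p_y)                           # p(x, y) = p(x) p(y)

p_y_given_x = p / p.sum(axis=1, keepdims=True)   # p(y | x)

# The Bayesian network factorization for x -> y holds...
assert np.allclose(p, p_x[:, None] * p_y_given_x)
# ...and p(y | x) = p(y) for every x: independence is allowed, not forbidden.
assert np.allclose(p_y_given_x, p_y)
```

This is precisely why the edge x -> y alone cannot tell us that x and y are dependent — that extra step needs faithfulness (or minimality).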
@ajayg1305 · 20 days ago
Can the case where the treatment is the cause of the condition be calculated in a similar way, by adding rows? I did not get the right answer. Can someone post it?
@shangzhig5337 · 27 days ago
Thank you. Could anyone please direct me to the answers to those questions?
@ddozzi · 27 days ago
yoooo this is insane stuff bro 💥💥💥💥 im hyped for this lecture!
@ceaavv · 29 days ago
This course is fantastic. The subject is marvelous, and the teacher is formidably clear. Thank you very much for making this available publicly.
@ddozzi · 27 days ago
this is so facts bro amazing course frfr
@karannchew2534 · 1 month ago
25:38 E[Y(0)] and E[Y(1)] are standard notations that mean E[Y|do(T=0)] and E[Y|do(T=1)]. This is the expected value of the potential outcome under the hypothetical ideal condition where T is applied and is not confounded by other factors. Y(1) and Y(0) represent the hypothetical outcomes under do(T); Y(0) is the same as Y|do(T=0). Both are common notations in causal inference analysis; do() is mostly from Pearl. Y, without (0) or (1), indicates values from observational data.
@karannchew2534 · 1 month ago
29:01 What is positivity? Positivity means that for every combination of confounders (or covariates) W in the population, there is a non-zero probability of receiving each level of the treatment T. In other words: in every relevant subgroup defined by W, there must be individuals who have some chance of being treated and some chance of not being treated. Mathematically, this is often expressed as: P(T=t | W=w) > 0, for all t and all w with P(W=w) > 0.
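A positivity check is easy to run on observed data: within every stratum of the covariate, both treatment levels must actually occur. A minimal sketch on hypothetical data (column names `W` and `T` are made up):

```python
import pandas as pd

# Hypothetical observational data: binary treatment T, covariate W.
df = pd.DataFrame({
    "W": [0, 0, 0, 1, 1, 1, 2, 2],
    "T": [0, 1, 1, 0, 1, 0, 0, 0],
})

# Positivity check: within every stratum of W, both T=0 and T=1 must occur.
levels = {0, 1}
violations = [w for w, g in df.groupby("W") if set(g["T"]) != levels]
print(violations)  # [2] -> the stratum W=2 only ever receives T=0: positivity violated
```

With a finite sample this is only an empirical check — a stratum with no treated units might still have non-zero treatment probability in the population — but it flags exactly the subgroups where the adjustment formula divides by zero.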
@karannchew2534 · 1 month ago
16:05 To make the average treatment effect equal to the associational difference (in other words, to make the difference observed between the two groups the same as the causal difference), we need to ensure ignorability, a.k.a. exchangeability. Ignorability: ( Y(1), Y(0) ) ⊥ T. But at first this statement doesn't seem to make sense: "The potential outcomes Y(1) and Y(0) are independent of the treatment." How is that possible?? The outcome is surely dependent on the treatment — isn't that the purpose of treatment? "Having a headache or not having a headache is independent of wearing shoes" doesn't make sense either. Maybe it is trying to say: the assignment of subjects to treatment (which subjects, with which characteristics, get treated) is independent of the potential outcomes; the potential outcomes are independent of every factor associated with the treatment other than the treatment itself.
@karannchew2534 · 1 month ago
14:30 In this example, is confounding the reason the groups are unequal?
@jiaxinyuan3408 · 1 month ago
Can someone tell me why all edges of the essential graph are undirected at 0:42? Isn't there an immorality at node C?
@karannchew2534 · 1 month ago
25:26 ".. because you *can* observe that, you *can* compute.." 😵
@karannchew2534 · 1 month ago
Association can be due to: 1. Confounding, 2. Causation. Correlation is one type of association measurement; there can be other measurements. Correlation definition = LINEAR statistical dependence.
--
26:28 Average treatment effect
= average of the difference between the outcome with treatment and the outcome without treatment in the population
= E[Y(1) - Y(0)]
= (by linearity of expectation) difference between the average outcome with treatment and the average outcome without treatment in the population
= E[Y(1)] - E[Y(0)]
≠ (if there is any confounding factor) the difference between *conditional expectations*, i.e. the averages obtained by simply grouping on (conditioning on) the treated and untreated data — a purely statistical operation whose result, association/correlation, mixes confounding with the true causal effect
= E[Y|T=1] - E[Y|T=0].
If there is no confounding factor, E[Y(1)] - E[Y(0)] = E[Y|T=1] - E[Y|T=0].
--
The expected outcome of the sub-population with W=w under treatment t:
E[ Y(t) | W=w ] = E[ Y | do(T=t), W=w ] = E[ Y | t, w ]

*Marginalizing over W*
How do we calculate the overall expected outcome under treatment across the entire population, not just for a specific sub-population w? By marginalizing over W: take the expectation across all possible values of W, weighted by their probability distribution.
E[Y(t)] = E[Y|do(T=t)] = E_W E[Y | t, W], where the W in the last expression is a random variable.
E_W := the expectation over all possible values of W, treating W as a random variable. This accounts for the fact that different sub-populations (different values of W) occur with different probabilities in the population.

*Why marginalization?*
1. We want the treatment effect E[Y(t)] averaged over the entire population, not just for a specific subgroup.
2. W might influence both the treatment assignment T and the outcome Y, so we must account for its distribution when aggregating over the population. Marginalization ensures that the calculated value reflects the combined contributions of all sub-populations, weighted appropriately.
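The adjustment formula described above can be computed by hand on a toy dataset. A minimal sketch (the data and the column names `W`, `T`, `Y` are made up, with W confounding both T and Y):

```python
import pandas as pd

# Hypothetical toy data: W influences both treatment assignment T and outcome Y.
df = pd.DataFrame({
    "W": [0]*6 + [1]*6,
    "T": [0, 0, 0, 0, 1, 1,   0, 0, 1, 1, 1, 1],
    "Y": [1, 1, 1, 1, 3, 3,   4, 4, 6, 6, 6, 6],
})

# Naive associational difference: E[Y|T=1] - E[Y|T=0].
naive = df.loc[df["T"] == 1, "Y"].mean() - df.loc[df["T"] == 0, "Y"].mean()

# Adjustment formula, marginalizing over W:
# E[Y(1)] - E[Y(0)] = E_W[ E[Y|T=1, W] - E[Y|T=0, W] ]
p_w = df["W"].value_counts(normalize=True)
ate = sum(
    p_w[w] * (g.loc[g["T"] == 1, "Y"].mean() - g.loc[g["T"] == 0, "Y"].mean())
    for w, g in df.groupby("W")
)
print(naive, ate)  # 3.0 2.0
```

The naive grouping gives 3.0 while the within-stratum effect is 2.0 in both strata — the gap is exactly the confounding that conditioning on T alone cannot remove.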
@jiaxinyuan3408 · 1 month ago
How does faithfulness derive that X1 depends on X3 at 0:52? Doesn't faithfulness take us from the data to the graph?
@jiaxinyuan3408 · 1 month ago
I'm a little confused about the definition of the instrumental unconfoundedness assumption now. Here it is applied at 2:32, where backdoor paths are considered as going from U -> ... -> Z, but in the previous lecture 8.1 it is clearly written as unblocked backdoor paths to Y. Does the Y refer to the outcome on the particular path we care about?
@FerencChan · 1 month ago
I have a question about the second step: why are ACD, ACE, and BCE not immoralities? Don't they also meet the two conditions?
@jiaxinyuan3408 · 1 month ago
How could they meet the second condition that C should not be in the conditioning set that makes {A,D}, {A,E}, {B,E} conditionally independent?
@chongsun7872 · 1 month ago
I think the biggest confusion for me is why X can cause Y while X and Y are also statistically independent under the local Markov assumption.
@faanross · 1 month ago
These lectures are amazing, thanks Brady
@SvetlanaBondareva · 2 months ago
Reading "The Book of Why" and watching the lectures! Love it! Thank you for the course Brady!
@DataTranslator · 2 months ago
How do we condition on the empty set?
@chongsun7872 · 2 months ago
I remember reading many, many materials trying to understand Simpson's paradox and when to combine groups. This is THE clearest one I have ever listened to!
@joe_hoeller_chicago · 2 months ago
Prof Pearl is the best!
@Al-Mahi · 2 months ago
Diving into causal learning!
@BenediktKnopp-z6y · 3 months ago
great explanation, helped me a lot to understand this!
@iffatarasanzida7704 · 3 months ago
Very well described. Thank You!
@what_a_new_world · 3 months ago
Thank you for your video. Give me the answer please... I have an exam tomorrow... okay bye...
@andreicristea997 · 3 months ago
Great content! I am studying the causality subject at university now, and your YouTube videos give me just the right amount of information to understand the topics that were more abstract to me (given just the definitions). I did get confused, though, by the slide at timestamp 0:25. You talk there about association (red dotted arrow), and it seems like it should be a two-sided arrow, since association is a purely statistical notion and doesn't have a direction. So I guess it's either a typo or I misinterpreted something 😁
@rukman-sai · 4 months ago
Question: Hi, I don't know if this question is too late, but here it goes. On slide 9, how does having the additional observational data increase the number of interventions from (N-1) to N instead of decreasing it? I understand that one of those N interventions is the observational data. So is this observational data useless if we still need (N-1) more single-node interventions? I mean, more data should decrease the number of additional single-node interventions necessary, right?
@choutycoh5828 · 5 months ago
Assume a table can block the impact from C to X (7:45).
@TegegnBergene · 5 months ago
Could you be so kind as to share Stata syntax for counterfactual prediction in the multinomial endogenous switching model? You can use my email, which I will send if I learn you are willing to help me.
@yogeshsingular · 5 months ago
Really great work by you, making a complex set of topics in causal inference accessible to everyone.
@peasant12345 · 5 months ago
Thanks for the video. It looks like in the 8:36 example the difference between a counterfactual and a potential outcome is that in the counterfactual U is random, but in the potential-outcome case, where things can be resolved by do-calculus, U is an unknown predetermined parameter. Am I correct?
@RayRay-yt5pe · 5 months ago
For real man, thank you so much for everything you have done!! You have made this stuff really accessible for folks like me!
@RayRay-yt5pe · 5 months ago
I think the middle notation is more intuitive to read.
@TheProblembaer2 · 6 months ago
How would the immorality change if we were to condition on X2? Or is the argument at 29:33 about conditioning on X2? I thought that if we were to condition on X2, X1 and X3 would become dependent?
@peasant12345 · 6 months ago
the gunshot treatment. lmao
@TheProblembaer2 · 6 months ago
I SAW THE DEATH STAR!
@moienr4104 · 6 months ago
Can someone explain at 2:00 why Brady wrote P(x_3) instead of P(x_3 | x_1)? Did he use the fact that x_1 and x_3 are independent? Isn't he already trying to prove that they are independent by getting to 2:47? Thank you!
@moienr4104 · 6 months ago
I got it! At 2:00 he used the local Markov assumption!
@jiaxinyuan3408 · 1 month ago
P(x1)P(x3)P(x2|x1,x3) is there because of the Bayesian network factorization.
@JTedam · 6 months ago
Neal, I would like to propose an alternative explanation to yours, using a scientific-realism perspective on causal inference. You said that in scenario 1 the treatment has an effect on the condition, which has an effect on the probability of the outcome, and that in scenario 2 the condition has an effect on the treatment, which has an effect on the probability of the outcome. The treatment is the mechanism and the condition is the context or circumstances. I would argue the mechanism is the cause and the condition is the trigger. In scenario 1 the outcome is considered on the basis of the mechanism alone, and in scenario 2 the outcome is deduced on the basis of the trigger condition of the mechanism. Mechanisms are always triggered in context to create outcomes; mechanisms are causes, but their outcomes are always shaped by context. I would also add that some mechanisms may be hidden, meaning there are other unobservable mechanisms, which under experimental conditions (closed systems) can be controlled. In open systems, the outcome may not be easily predictable because of these unobservable mechanisms. In your example, these mechanisms could be environmental: co-morbidities, contra-indications, emotional state, age-related factors, etc. So I would be cautious about relying on data-driven inference alone.
@bethelosuagwu8021 · 6 months ago
Thanks, I gained so much intuition from this course! It must have been a lot of work for you to put it together!
@bethelosuagwu8021 · 6 months ago
I note from the Python code on GitHub that if you condition on Age alone, the exact effect is recovered, i.e. 1.05. Is this because proteinuria is a collider? Does this mean we should not include colliders in the linear regression?
@johnmcintire3684 · 7 months ago
Was there a follow-up question to the “assistance to the poor” vs. “welfare” question? Something like: “Would your opinion change if you were told that ‘welfare’ is just another term for ‘assistance to the poor’?”
@董园 · 7 months ago
Are there 5 other possible graphs in the Markov equivalence class in the final question?
@peasant12345 · 7 months ago
very interesting examples
@ear_w0rm · 7 months ago
Turn on auto-subtitles please
@ear_w0rm · 7 months ago
Please tell me if I understood the idea correctly: the syntax using do() describes the same thing as the potential-outcomes syntax. It's just a different way of presenting the same problem (identification).
@ear_w0rm · 7 months ago
it's amazing
@ear_w0rm · 8 months ago
At 2:53, we can prove all this using the minimality assumption, right?