6.5 - Doubly Robust Methods, Matching, Double Machine Learning, and Causal Trees

  15,297 views

Brady Neal - Causal Inference



Comments: 9
@RayRay-yt5pe 3 months ago
For real man, thank you so much for everything you have done! You have made this stuff really accessible for folks like me!
@sichenghao3747 3 years ago
Underrated channel!
@xyztnull5799 4 years ago
Thank you very much for the course! I learned a lot here. For double machine learning (DML), I've been reading papers recently. My understanding, which might be incorrect, is that the key point is that stage 1 and stage 2 use two different samples of data. That is: split the whole dataset into at least 2 samples, then use the first sample to estimate Y_hat(.) and T_hat(.) in stage 1; in stage 2, apply the functions Y_hat(.) and T_hat(.) to the second sample and estimate the ATE (call it ATE_1). After that, swap samples 1 and 2, repeat stages 1 and 2, and you get ATE_2. Finally, average ATE_1 and ATE_2. This sample splitting distinguishes DML from other plug-in machine learning estimators. I hope my understanding is correct. Thank you for your excellent course again!
@BradyNealCausalInference 4 years ago
That all sounds right. However, you can do sample splitting with other estimators as well. For example, you can fit the model for a COM estimator with one sample and then evaluate the estimate with another sample. This kind of sample splitting is also heavily emphasized in the causal trees/forests papers. And, as an aside that I'm guessing you already know, it doesn't have to be two samples and then an average between the two; it can be done with k "folds", just like k-fold cross-validation.
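The cross-fitting procedure described in this thread can be sketched in a few lines. This is a minimal illustration, not the lecture's or any library's implementation: it uses plain OLS for both stage-1 nuisance models (real DML would typically use flexible ML learners), and the function names (`dml_ate`, `ols_fit`, `ols_predict`) and the toy data at the bottom are invented for the example.

```python
import numpy as np

def ols_fit(X, y):
    # Least-squares fit with an intercept; returns the coefficient vector.
    Xb = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def ols_predict(beta, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return Xb @ beta

def dml_ate(Y, T, W, n_folds=2, seed=0):
    """Cross-fitted (partialling-out) DML estimate of a constant effect.

    Stage 1 (fit on the other folds, predict on the held-out fold):
    estimate Y_hat(W) and T_hat(W). Stage 2: regress the pooled
    Y-residuals on the pooled T-residuals.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    folds = np.array_split(idx, n_folds)
    y_res = np.empty(len(Y))
    t_res = np.empty(len(T))
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        y_res[test] = Y[test] - ols_predict(ols_fit(W[train], Y[train]), W[test])
        t_res[test] = T[test] - ols_predict(ols_fit(W[train], T[train]), W[test])
    # Final stage: theta = <t_res, y_res> / <t_res, t_res>
    return float(t_res @ y_res / (t_res @ t_res))

# Toy check: Y = 2*T + 3*W + noise, with T confounded by W.
rng = np.random.default_rng(1)
W = rng.normal(size=(2000, 1))
T = 0.5 * W[:, 0] + rng.normal(size=2000)
Y = 2.0 * T + 3.0 * W[:, 0] + rng.normal(size=2000)
print(dml_ate(Y, T, W))  # should land close to the true effect of 2
```

Swapping `n_folds=2` for a larger k gives exactly the k-fold variant mentioned in the reply.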
@catragmitch 1 year ago
Hi Brady, thanks a lot for the video, very informative. I have a question about the nuances between DR and DML: what are the benefits/drawbacks of one over the other? Although I feel I understand the implementation, I'm missing the intuition for when to use each, and how their theoretical "guarantees" differ. Thanks in advance!
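For intuition on the DR side of this question: the classic doubly robust (AIPW) estimator combines an outcome model and a propensity model, and remains consistent if either one is correctly specified. Below is a minimal numpy sketch under assumed linear outcome and logistic propensity models; the names (`aipw_ate`, `fit_logistic`, `fit_ols`) and the simulated data are illustrative only, not the course's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, t, lr=0.1, steps=2000):
    # Plain gradient-descent logistic regression for the propensity score.
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - t) / len(t)
    return lambda Xn: sigmoid(np.column_stack([np.ones(len(Xn)), Xn]) @ w)

def fit_ols(X, y):
    Xb = np.column_stack([np.ones(len(X)), X])
    beta = np.linalg.lstsq(Xb, y, rcond=None)[0]
    return lambda Xn: np.column_stack([np.ones(len(Xn)), Xn]) @ beta

def aipw_ate(Y, T, W):
    """Doubly robust (AIPW) ATE: the outcome-model term is corrected by
    an inverse-propensity-weighted residual term, so the estimate is
    consistent if either nuisance model is right."""
    e = np.clip(fit_logistic(W, T)(W), 0.01, 0.99)   # propensity scores
    mu1 = fit_ols(W[T == 1], Y[T == 1])(W)           # outcome model, treated
    mu0 = fit_ols(W[T == 0], Y[T == 0])(W)           # outcome model, control
    return float(np.mean(mu1 - mu0
                         + T * (Y - mu1) / e
                         - (1 - T) * (Y - mu0) / (1 - e)))

# Toy check: confounded binary treatment, true ATE = 2.
rng = np.random.default_rng(0)
W = rng.normal(size=(4000, 1))
T = (rng.random(4000) < sigmoid(W[:, 0])).astype(float)
Y = 2.0 * T + 3.0 * W[:, 0] + rng.normal(size=4000)
print(aipw_ate(Y, T, W))  # should land close to the true ATE of 2
```

DML's score for the partially linear model is likewise "Neyman orthogonal", which is the shared theoretical property behind both methods' tolerance of first-stage estimation error.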
@michelspeiser5789 1 year ago
That causal tree example though 😂
@paulhowrang 2 years ago
Wonderful lectures... a goldmine.
@RobertWF42 3 years ago
Thanks Brady! If *both* the conditional outcome *and* propensity score models are inconsistent, how well does the doubly robust model perform?
@RobertKwapich 3 years ago
For double machine learning, what happens when we want to estimate heterogeneous treatment effects? The way I understand it is that we'd have an additional variable X that impacts Y (has a directed edge into Y) and has no edge to T. In stage 1, would we fit a model to predict Y from both W and X?