# Stabilized propensity weights: intuition and ATT formula


The average treatment effect (ATE) of a binary treatment $T$ on an outcome $Y$ can be estimated using inverse propensity weights:

\begin{equation}\nonumber
\frac{\sum_{i=1}^{N}t_i\hat{\pi}_i^{-1}y_i}{\sum_{i=1}^{N}t_i\hat{\pi}_i^{-1}}-\frac{\sum_{i=1}^{N}(1-t_i)(1-\hat{\pi}_i)^{-1}y_i}{\sum_{i=1}^{N}(1-t_i)(1-\hat{\pi}_i)^{-1}} \xrightarrow{p} E[Y^1]-E[Y^0]
\end{equation}

where $\hat{\pi}_i$ is the estimated propensity score for individual $i$.
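As a numerical sanity check, the normalized (Hájek-style) IPW estimator in the display above can be sketched as follows. This is only an illustration: the data-generating process, sample size, and the use of scikit-learn's `LogisticRegression` for the propensity model are my own assumptions, not part of the question.

```python
# Minimal sketch of the normalized IPW estimator for the ATE.
# Simulated data with a single confounder x; the true ATE is 1.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N = 20_000
x = rng.normal(size=N)                    # confounder
pi = 1.0 / (1.0 + np.exp(-0.5 * x))      # true propensity P(T=1 | x)
t = rng.binomial(1, pi)                  # treatment assignment
y = 1.0 * t + x + rng.normal(size=N)     # outcome; constant effect of 1

# Estimated propensity pi_hat from a logistic regression of t on x
pi_hat = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]

# The two weighted means from the display: treated weighted by 1/pi_hat,
# controls weighted by 1/(1 - pi_hat), each normalized by its weight sum.
w1 = t / pi_hat
w0 = (1 - t) / (1 - pi_hat)
ate_hat = (w1 @ y) / w1.sum() - (w0 @ y) / w0.sum()
print(ate_hat)  # close to the true ATE of 1
```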

To avoid extreme weights, part of the literature suggests replacing the numerator of the treated weights with the marginal probability of treatment, $p(T=1)$, and the numerator of the control weights with $1-p(T=1)$. I see how this makes the weights milder, but why those particular numerators? What is the intuition behind this stabilization?
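One property worth seeing numerically: with the marginal probabilities in the numerator, each stabilized weight has expectation 1 (the treated weights average $E[T\,p/\pi]=p$ and the control weights $E[(1-T)(1-p)/(1-\pi)]=1-p$, summing to 1), so the weighted pseudo-sample stays close to size $N$, whereas raw inverse-propensity weights average 2. The sketch below (same simulated setup as before; all specifics are my assumptions) checks this:

```python
# Compare mean raw vs. stabilized IPW weights.
# Stabilized weights average ~1; raw weights average ~2.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N = 20_000
x = rng.normal(size=N)
pi = 1.0 / (1.0 + np.exp(-0.5 * x))
t = rng.binomial(1, pi)

pi_hat = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]
p_treat = t.mean()  # estimate of the marginal P(T=1)

# Raw weights: 1/pi_hat for treated, 1/(1-pi_hat) for controls
raw = np.where(t == 1, 1.0 / pi_hat, 1.0 / (1.0 - pi_hat))
# Stabilized weights: marginal probability in the numerator
sw = np.where(t == 1, p_treat / pi_hat, (1.0 - p_treat) / (1.0 - pi_hat))

print(raw.mean())  # approximately 2
print(sw.mean())   # approximately 1
```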

Also, the average treatment effect on the treated (ATT) can be estimated by weighting control units by the odds of treatment:

\begin{equation}\nonumber
\frac{\sum_{i=1}^{N}t_iy_i}{\sum_{i=1}^{N}t_i}-\frac{\sum_{i=1}^{N}(1-t_i)\hat{\pi}_i(1-\hat{\pi}_i)^{-1}y_i}{\sum_{i=1}^{N}(1-t_i)\hat{\pi}_i(1-\hat{\pi}_i)^{-1}} \xrightarrow{p} E[Y^1\mid T=1]-E[Y^0\mid T=1]
\end{equation}

How should one stabilize these odds weights, $\hat{\pi}_i(1-\hat{\pi}_i)^{-1}$?
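For reference, the odds-weighted ATT estimator from the second display can be sketched numerically (again a simulation of my own construction; with a constant treatment effect the ATT equals the ATE, so the target is 1):

```python
# Sketch of the odds-weighted ATT estimator: treated units get weight 1,
# controls get weight pi_hat / (1 - pi_hat).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
N = 20_000
x = rng.normal(size=N)
pi = 1.0 / (1.0 + np.exp(-0.5 * x))
t = rng.binomial(1, pi)
y = 1.0 * t + x + rng.normal(size=N)   # constant effect, so ATT = 1

pi_hat = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]

odds = pi_hat / (1.0 - pi_hat)
w0 = (1 - t) * odds                    # control units weighted by the odds
att_hat = y[t == 1].mean() - (w0 @ y) / w0.sum()
print(att_hat)  # close to the true ATT of 1
```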

