#StackBounty: #econometrics #instrumental-variables #treatment-effect #derivative #marginal-effect How to calculate and interpret a mar…

Bounty: 100

I am working on the intuition behind local instrumental variables (LIV), also known as the marginal treatment effect (MTE), developed by Heckman & Vytlacil. I have worked on this for some time and would benefit from solving a simple example. I hope I may get input on where my example goes awry.

As a starting point, the standard local average treatment effect (LATE) is the treatment effect among individuals induced into treatment by the instrument ("compliers"), while MTE is the limit form of LATE.

A helpful distinction between LATE and MTE is found between the questions:

  • LATE: What is the difference in the treatment effect between those who are more likely to receive treatment and others?
  • MTE: What is the difference in the treatment effect between those who are marginally more likely to receive treatment and others?

In revised form, the author states:

LATE and MTE are similar, except that LATE examines the difference in outcomes for individuals with different average treatment probability, whereas MTE examines the derivative. More specifically, MTE aims to answer what the average effect is for people who are just indifferent between receiving treatment or not at a given value of the instrument.
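
In symbols (suppressing covariates $X$), I take this to mean that MTE is the limit of LATE as the two propensity-score values being compared approach one another:

$$LATE(p, p') = \frac{E(Y \mid P(Z)=p') - E(Y \mid P(Z)=p)}{p' - p}, \qquad MTE(p) = \lim_{p' \to p} LATE(p, p') = \frac{\partial E(Y \mid P(Z)=p)}{\partial p}$$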

The use of "marginally" and "indifferent" is key, and what these terms specifically imply in this context eludes me; I cannot find an explanation for them here.

Generally, I am used to thinking about the marginal effect as the change in outcome with a one-unit change in the covariate of interest (discrete variable) or the instantaneous change (continuous variable), and about indifference in terms of indifference curves (consumer theory).

Aakvik et al. (2005) state:

MTE gives the average effect for persons who are indifferent between participating or not for a given value of the instrument … [MTE] is the average effect of participating in the program for people who are on the margin of indifference between participation in the program $D=1$ or not $D=0$ if the instrument is externally set … In brief, MTE identifies the effect of an intervention on those induced to change treatment states by the intervention.

While Cornelissen et al. (2016) write:

… MTE is identified by the derivative of the outcome with respect to the change in the propensity score

From what I gather, the MTE is then the change in outcome with a change in the probability of receiving treatment, although I am not sure whether this is correct. If it is, I am not sure how to argue for its policy or clinical relevance.

Example

To understand the mechanics and interpretation of MTE, I have set up a simple example that starts with the MTE estimator:

$MTE(X=x, U_{D}=p) = \frac{\partial E(Y \mid X=x, P(Z)=p)}{\partial p}$

where $X$ denotes covariates of interest, $U_{D}$ is the "unobserved distaste for treatment" (another term frequently used but not explained at length), $Y$ is the outcome, and $P(Z)$ is the probability of treatment (the propensity score). I apply this to the effect of college on earnings.

We want to estimate the MTE of college ($D \in \{0,1\}$) on earnings ($Y>0$), using the continuous variable distance to college ($Z$) as the instrument. We start by obtaining the propensity score $P(Z)$, which I read as equal to the predicted value of treatment from the standard first stage in 2SLS:

$D = \alpha + \beta Z + \epsilon$

$P(Z) = \hat{D}$
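
To make this concrete, here is how I would compute $P(Z)$ in R (a sketch; `dat`, `D`, and `Z` are assumed names, and with a binary $D$ a probit or logit first stage is more common than the linear one above):

# Assumed data frame `dat` with binary treatment D (college) and
# instrument Z (distance to college).
first_stage <- lm(D ~ Z, data = dat)      # linear first stage, as written above
dat$pZ <- fitted(first_stage)             # P(Z): predicted treatment probability

# More common for a propensity score: a probit first stage
dat$pZ_probit <- fitted(glm(D ~ Z, family = binomial(link = "probit"), data = dat))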

Now, to understand how MTE is estimated, it helps to think of the MTE for a set of observations defined by particular values of $X$ and $P(Z)$. Suppose only one covariate ($X$) is necessary to condition on, and that for the subset at hand we have $X=5$ and $P(Z)=.6$. Consequently, we have

$MTE(5, .6) = \frac{\partial E(Y \mid X=5, P(Z)=.6)}{\partial .6}$

Suppose further that the mean of $Y$ for the subset of observations defined by $(X=5, P(Z)=.6)$ is 15000,

$MTE(5, .6) = \frac{\partial \, 15000}{\partial .6}$

Question

My understanding of this partial derivative is that the current setup is invalid, and that substituting $\partial .6$ with $\partial p$ would simply give 0, as it would be the derivative of a constant. I therefore wonder whether anyone has input on where I went wrong, and how I might arrive at the MTE in this simple example.

As for the interpretation, I would interpret the MTE as the change in earnings with a marginal increase in the probability of attending college among the subset defined by $(X=5, P(Z)=.6)$.
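
One way to make the mechanics concrete: the derivative is taken with respect to the function $p \mapsto E(Y \mid X=x, P(Z)=p)$, so it requires variation in $p$ across observations; $E(Y \mid X=5, P(Z)=.6)=15000$ is a single point on that function, and the MTE at $p=.6$ is the slope of that function at that point, not the derivative of the number 15000. A minimal local-IV sketch in R (the data-generating process, all names, and the quadratic specification are made up for illustration; this is not the full Heckman–Vytlacil estimator):

set.seed(1)
n <- 5000
z <- rnorm(n)                                # instrument, e.g. distance to college
p <- pnorm(0.8 * z)                          # propensity score P(Z)
d <- as.numeric(runif(n) < p)                # treatment: college
y <- 10000 + 8000 * d + rnorm(n, 0, 2000)    # earnings; constant effect of 8000

outcome <- lm(y ~ p + I(p^2))                # E[Y | p], flexible (quadratic) in p
b <- coef(outcome)
mte <- function(pp) unname(b["p"] + 2 * b["I(p^2)"] * pp)   # dE[Y | p]/dp
mte(0.6)                                     # MTE at p = .6; close to 8000 here

Because the simulated treatment effect is constant, the estimated MTE is roughly 8000 at every $p$; with heterogeneous effects the slope, and hence the MTE, would vary with $p$.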


Get this bounty!!!

#StackBounty: #r #categorical-data #interaction #instrumental-variables #2sls A 2SLS when the instrumented variable has two interaction…

Bounty: 50

I am using ivreg and ivmodel in R to apply a 2SLS.

I would like to instrument one variable, $x_1$, which appears in two interaction terms. In this example $x_1$ is a factor variable. The regression is specified this way because the ratio between $a$ and $b$ is of importance.

$$y = ax_1 x_2 + bx_1x_3 + cx_4 + e$$

For this instrumented variable I have two instruments, $z_1$ and $z_2$. For both, the following causal diagram applies ($Z$ affects $Y$ only indirectly, through $X$).

[Causal diagram: $Z \to X \to Y$]

What is for this problem the correct way to instrument $x_1$?

In the data

Translated to some (fake) sample data, the problem looks like:

$$happiness = a(factor:income) + b(factor:sales) + c(educ) + e$$

corresponding to

$$y = ax_1 x_2 + bx_1x_3 + cx_4 + e$$

where the instrument $z_1$ is urban and $z_2$ is size. Here, however, I am getting confused about how to write the regression.

For the first stage:

What is my dependent variable here?

For the second stage, should I do:

$$happiness = a(urban:income) + b(urban:sales) + c(educ) + e$$
$$happiness = a(size:income) + b(size:sales) + c(educ) + e$$

Or should I just do:

$$happiness = urban(a:income+b:sales) + c(educ) + e$$
$$happiness = size(a:income+b:sales) + c(educ) + e$$

Nevertheless, how should I specify this in R?

library(data.table)
library(ivmodel)
library(AER)

panelID <- c(1:50)
year    <- c(2001:2010)
country <- c("NLD", "BEL", "GER")
urban   <- c("A", "B", "C")
indust  <- c("D", "E", "F")
sizes   <- c(1, 2, 3, 4, 5)
n <- 2                                        # observations per panel unit
set.seed(123)
DT <- data.table(panelID = rep(sample(panelID), each = n),
                 country = rep(sample(country, length(panelID), replace = TRUE), each = n),
                 year = c(replicate(length(panelID), sample(year, n))),
                 some_NA = sample(0:5, n * length(panelID), replace = TRUE),  # was sample(0:5, 6), which does not recycle into 100 rows
                 Factor = sample(0:5, n * length(panelID), replace = TRUE),
                 industry = rep(sample(indust, length(panelID), replace = TRUE), each = n),
                 urbanisation = rep(sample(urban, length(panelID), replace = TRUE), each = n),
                 size = rep(sample(sizes, length(panelID), replace = TRUE), each = n),
                 income = round(runif(100) / 10, 2),
                 Y_Outcome = round(rnorm(10, 100, 10), 2),
                 sales = round(rnorm(10, 10, 10), 2),
                 happiness = sample(10, 10),
                 Sex = round(rnorm(10, 0.75, 0.3), 2),
                 Age = sample(100, 100),
                 educ = round(rnorm(10, 0.75, 0.3), 2))
DT[, uniqueID := .I]                          # creates a unique row ID
DT <- as.data.frame(DT)

To make it slightly easier for someone who is not familiar with the packages to help, I have added what the structure of the calls to the two packages looks like.

The structure of the second stage of ivreg is as follows:

second_stage <- ivreg(happiness ~ Factor:income + Factor:sales + educ | urbanisation:income + urbanisation:sales + educ, data = DT)
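
A possible specification that uses both instruments (an assumption about what is intended, not a verified answer) is to mirror each endogenous interaction with the corresponding instrument interactions, so that each of Factor:income and Factor:sales has its own first stage:

iv_fit <- ivreg(happiness ~ Factor:income + Factor:sales + educ |
                  urbanisation:income + urbanisation:sales +
                  size:income + size:sales + educ,
                data = DT)
summary(iv_fit, diagnostics = TRUE)    # diagnostics include weak-instrument tests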

The structure for ivmodel is:

second_stage <- ivmodel(Y = DT$happiness, D = DT$Factor, Z = DT[, c("urbanisation", "size")], X = DT$educ, na.action = na.omit)

Any help with figuring out how to do this properly would be greatly appreciated!


Get this bounty!!!

#StackBounty: #large-data #instrumental-variables #hausman Interpretation of the Hausman test (overidentification in relation to IV'…

Bounty: 50

I am using survey data with a huge number of observations, such as the World Values Survey. Large sample sizes are obviously very nice, but I have encountered some downsides as well.

To give an example, in almost every econometric model I specify, about 90% of the variables are highly significant. So I have to decide whether, in addition to an estimate being statistically significant, it is also economically significant, which is not always an easy thing to do.

The biggest issue, however, is that when resorting to instrumental variables, the Hausman test for overidentification is always very, very, very significant. On this, see THIS POST.

How do I deal with this consequence of large sample sizes?

The only thing I can think of is to reduce the sample size. This, however, seems a very arbitrary way to get the test statistic down.
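
For intuition on why this happens mechanically, many score-type overidentification statistics take the $nR^2$ form, so for a fixed (tiny) $R^2$ the statistic grows linearly in $n$. A small illustration in R (the $R^2$ value is made up):

r2 <- 0.003                                   # assumed, economically negligible
for (n in c(500, 5000, 50000)) {
  stat <- n * r2                              # n * R^2 statistic
  cat(sprintf("n = %5d  n*R2 = %6.1f  p = %.3g\n",
              n, stat, pchisq(stat, df = 1, lower.tail = FALSE)))
}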


Get this bounty!!!


#StackBounty: #r #stata #instrumental-variables #endogeneity #hausman What are the differences between tests for overidentification in …

Bounty: 50

I am using 2SLS for my research and I want to test for overidentification. I started out with the Hausman test, of which I have a reasonable grasp.

The problem I have is that I am getting very different results from the Hausman and the Sargan tests.

The Sargan test is done by ivmodel from library(ivmodel). I copied the Hausman test from “Using R for Introductory Econometrics”, page 226, by Florian Heiss.

[1] "############################################################"
[1] "***Hausman Test for Overidentification***"
[1] "############################################################"
[1] "***R2***"
[1] 0.0031
[1] "***Number of observations (nobs)***"
[1] 8937
[1] "***nobs*R2***"
[1] 28
[1] "***p-value***"
[1] 0.00000015


Sargan Test Result:

Sargan Test Statistics=0.31, df=1, p-value is 0.6

On top of this I am also using ivtobit from Stata, which provides a Wald test of exogeneity.

Lastly, I read about a fourth, which is the Hansen J statistic.

What is the difference between all of these tests?
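
For reference, the regression-based Sargan statistic can be computed by hand, which makes it easier to see what each canned test is doing (a sketch; the model and variable names are hypothetical):

library(AER)
iv_fit <- ivreg(y ~ d + x | z1 + z2 + x, data = dat)      # assumed model: 1 endogenous d, 2 instruments
aux    <- lm(residuals(iv_fit) ~ z1 + z2 + x, data = dat) # regress 2SLS residuals on all exogenous variables
stat   <- nobs(aux) * summary(aux)$r.squared              # n * R^2
pchisq(stat, df = 2 - 1, lower.tail = FALSE)              # df = #instruments - #endogenous

The Hansen J statistic is the heteroskedasticity-robust (GMM) counterpart of this, which is one reason the tests can disagree.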


Get this bounty!!!

#StackBounty: #econometrics #instrumental-variables #matching #random-allocation Matching / Scoring in an Experiment, using instrumenta…

Bounty: 50

Maybe this is better suited here than in Economics; I don’t know. Please excuse the econ language, I cannot do any better.

I did an experiment with random assignment to two treatment groups and one control group.

I had problems encouraging participation in one treatment (treatment 1), so in the end I had to randomly assign a lot of people to treatment 2 and the control group.

I used the initial treatment assignment as an instrument for actual treatment take-up to obtain LATEs (local average treatment effects).

However, I do not find any statistically significant effects of treatment 1 (maybe due to the smaller sample size: some effects are economically meaningful, but the standard errors are huge).

At a virtual conference, people encouraged me to use either entropy balancing or propensity score matching. However, I have not been able to find any examples where people use this with experiments or with instruments.

It seems to me that people use matching/balancing methods when they have no control group, no experiment, and no instrument.

Can any of you help and provide hints on how (or whether) to use balancing or matching with instrumental variables/experiments?

Thank you kindly in advance!

(This is a crosspost from Economics StackExchange)


Get this bounty!!!