#StackBounty: #generalized-linear-model #econometrics #instrumental-variables #marginal-effect #ordered-probit Difference in intuition:…

Bounty: 50

This is a follow-up to this question.

I wanted to estimate, using Stata’s cmp command, a system of two equations: an ordered probit and a linear equation.

1 – Linear model: $$y = \alpha + \beta z + \epsilon_1$$
2 – Ordered probit: $$z^* = \gamma x + \epsilon_2 \\ z = j \quad \text{if} \quad \alpha_{j-1} \leq z^* \leq \alpha_j, \quad j \in \{-4, -3, \dots, 3, 4\}$$

As in the linked question, I wanted to derive the marginal effects of $x$ on $y$. Since the margins command in Stata wouldn’t account for the indirect link between the two variables ($x \to z \to y$), I asked the package’s author for a possible alternative. He suggested using $z$‘s linear predictor in the first equation (in case you know cmp, it would be as in cmp(y = x#) (x = z), vce(robust) ind($cmp_cont $cmp_oprobit) nolr).
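
(For reference, under the system above, and assuming $\epsilon_1$ is mean-independent of $x$, the indirect effect that margins does not compute is just a chain-rule product,

$$\frac{\partial E(y \mid x)}{\partial x} = \beta \, \frac{\partial E(z \mid x)}{\partial x},$$

where $E(z \mid x)$ comes from the ordered-probit equation.)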

This seems like using $z$ as an instrument for $x$. I then have a few questions:

1 – Is that so?

2 – What would be the difference in intuition between the two approaches? Is there a way to think about which one fits best?


Get this bounty!!!

#StackBounty: #econometrics #intuition #instrumental-variables Intuitive understanding of instrumental variables for natural experiments

Bounty: 50

I am wondering whether my understanding of instrumental variables for exploiting natural experiments is correct, or whether I am misunderstanding something.

Is the logic as follows: by using an instrument, you are now comparing the outcomes of those who received higher levels of treatment because they had higher exposure to the instrument with the outcomes of those who received lower levels of treatment because they had lower exposure to the instrument, where the latter units would have received higher treatment had they been more exposed to the instrument?

So should I think of it, intuitively, as being to some degree a random experiment on a subset of units?


Get this bounty!!!

#StackBounty: #econometrics #instrumental-variables #treatment-effect #derivative #marginal-effect How to calculate and interpret a mar…

Bounty: 100

I am working on the intuition behind local instrumental variables (LIV), also known as the marginal treatment effect (MTE), developed by Heckman & Vytlacil. I have worked on this for some time and would benefit from working through a simple example. I hope I may get input on where my example goes awry.

As a starting point, the standard local average treatment effect (LATE) is the treatment effect among individuals induced to take up treatment by the instrument ("compliers"), while MTE is the limit form of LATE.

A helpful distinction between LATE and MTE is found in the contrast between the following questions:

  • LATE: What is the difference in the treatment effect between those who are more likely to receive treatment and others?
  • MTE: What is the difference in the treatment effect between those who are marginally more likely to receive treatment and others?

In revised form, the author states:

LATE and MTE are similar, except that LATE examines the difference in outcomes for individuals with different average treatment probability, whereas MTE examines the derivative. More specifically, MTE aims to answer what the average effect is for people who are just indifferent between receiving treatment or not at a given value of the instrument.
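
(For concreteness, and using standard notation rather than anything specific to the quoted text, the contrast can be written as a difference versus a derivative with respect to the propensity score $P(Z)$:

$$LATE(p, p') = \frac{E(Y \mid P(Z)=p') - E(Y \mid P(Z)=p)}{p' - p}, \qquad MTE(p) = \frac{\partial E(Y \mid P(Z)=p)}{\partial p},$$

so MTE is the limit of LATE as $p' \to p$.)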

The use of "marginally" and "indifferent" is key, and what these terms specifically imply in this context eludes me. I can’t find an explanation of what they imply here.

Generally, I am used to thinking about the marginal effect as the change in the outcome with a one-unit change in the covariate of interest (discrete variable) or the instantaneous change (continuous variable), and about indifference in terms of indifference curves (consumer theory).

Aakvik et al. (2005) state:

MTE gives the average effect for persons who are indifferent between participating or not for a given value of the instrument … [MTE] is the average effect of participating in the program for people who are on the margin of indifference between participation in the program $D=1$ or not $D=0$ if the instrument is externally set … In brief, MTE identifies the effect of an intervention on those induced to change treatment states by the intervention

While Cornelissen et al. (2016) write:

… MTE is identified by the derivative of the outcome with respect to the change in the propensity score

From what I gather, the MTE is then the change in the outcome with a change in the probability of receiving treatment, although I am not sure whether this is correct. If it is, I am not sure how to argue for its policy or clinical relevance.

Example

To understand the mechanics and interpretation of MTE, I have set up a simple example that starts with the MTE estimator:

$MTE(X=x, U_{D}=p) = \frac{\partial E(Y \mid X=x, P(Z)=p)}{\partial p}$

where $X$ denotes covariates of interest, $U_{D}$ is the "unobserved distaste for treatment" (another term that is frequently used but not explained at length), $Y$ is the outcome, and $P(Z)$ is the probability of treatment (the propensity score). I apply this to the effect of college on earnings.

We want to estimate the MTE of college ($D \in \{0,1\}$) on earnings ($Y>0$), using the continuous variable distance to college ($Z$) as the instrument. We start by obtaining the propensity score $P(Z)$, which I read as equal to the predicted value of treatment from the standard first stage in 2SLS:

$D = \alpha + \beta Z + \epsilon$

$\hat{D} = P(Z)$
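
To make these two steps concrete, here is a minimal R sketch of how the derivative in the MTE formula is often approximated in practice: regress $Y$ on a flexible function of the estimated propensity score and differentiate it. The data are simulated, the linear first stage and the quadratic in $p$ are purely illustrative assumptions, and all names are made up:

set.seed(1)
n <- 5000
z <- runif(n, 0, 50)                       # distance to college (instrument)
u <- rnorm(n)                              # unobserved distaste for treatment
d <- as.numeric(0.8 - 0.015 * z + u > 0)   # college attendance (treatment)
y <- 20000 + 5000 * d + 3000 * u + rnorm(n, sd = 1000)   # earnings

first_stage <- lm(d ~ z)                   # first stage: D on Z (a probit would keep p in [0,1])
p_hat <- fitted(first_stage)               # estimated propensity score P(Z)

second_stage <- lm(y ~ p_hat + I(p_hat^2)) # E(Y | P(Z) = p) approximated by a quadratic in p
b <- coef(second_stage)
mte_at <- function(p) b["p_hat"] + 2 * b["I(p_hat^2)"] * p   # derivative of E(Y | p) with respect to p
mte_at(0.6)                                # MTE evaluated at p = .6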

Now, to understand how to specifically estimate MTE, it would be helpful to think of the MTE for a specific set of observations defined by specific values of $X$ and $P(Z)$. Suppose there is only one covariate ($X$) necessary to condition on and that for the specific subset at hand we have $X=5$ and $P(Z)=.6$. Consequently, we have

$MTE(5, .6) = \frac{\partial E(Y \mid X=5, P(Z)=.6)}{\partial .6}$

Suppose further that $Y$ for the subset of observations defined by $(X=5,P(Z)=.6)$ is 15000,

$MTE(5, .6) = \frac{\partial \, 15000}{\partial .6}$

Question

My understanding of this partial derivative is that the current setup is invalid, and that substituting $\partial .6$ with $\partial p$ would simply result in 0, as it would be the derivative of a constant. I therefore wonder whether anyone has input on where I went wrong, and how I might arrive at the MTE for this simple example.

As for the interpretation, I would interpret the MTE as the change in earnings with a marginal increase in the probability of attending college among the subset defined by $(X=5,P(Z)=.6)$.


Get this bounty!!!

#StackBounty: #r #categorical-data #interaction #instrumental-variables #2sls A 2SLS when the instrumented variable has two interaction…

Bounty: 50

I am using ivreg and ivmodel in R to estimate a 2SLS model.

I would like to instrument one variable, namely $x_1$, which appears in two interaction terms. In this example $x_1$ is a factor variable. The regression is specified in this manner because the ratio between $a$ and $b$ is of importance.

$$y = ax_1 x_2 + bx_1x_3 + cx_4 + e$$

For this instrumented variable I have two instruments, $z_1$ and $z_2$. For both, the following causal diagram applies ($Z$ only has an indirect effect on $Y$ through $X$).

[Causal diagram: $Z \to X \to Y$, with no direct arrow from $Z$ to $Y$]

What is the correct way to instrument $x_1$ for this problem?

In the data

Translated to some (fake) sample data the problem looks like:

$$happiness = a(factor:income) + b(factor:sales) + c(educ) + e$$
$$=$$
$$(y = ax_1 x_2 + bx_1x_3 + cx_4 + e)$$

where the instrument $z_1$ is urban and $z_2$ is size. Here, however, I get confused about how to write the regression.

For the first stage:

What is my dependent variable here?

For the second stage, should I do:

$$happiness = a(urban:income) + b(urban:sales) + c(educ) + e$$
$$happiness = a(size:income) + b(size:sales) + c(educ) + e$$

Or should I just do:

$$happiness = urban(a:income+b:sales) + c(educ) + e$$
$$happiness = size(a:income+b:sales) + c(educ) + e$$

Nevertheless, how should I specify this in R?

library(data.table)
library(ivmodel)
library(AER)                                                              # provides ivreg()
panelID = c(1:50)
year = c(2001:2010)
country = c("NLD", "BEL", "GER")
urban = c("A", "B", "C")
indust = c("D", "E", "F")
sizes = c(1,2,3,4,5)
n <- 2                                                                    # observations per panel unit
set.seed(123)
DT <- data.table(panelID = rep(sample(panelID), each = n),
                    country = rep(sample(country, length(panelID), replace = T), each = n),
                    year = c(replicate(length(panelID), sample(year, n))),
                    some_NA = sample(0:5, length(panelID) * n, replace = T),  # length matches the 100 rows
                    Factor = sample(0:5, length(panelID) * n, replace = T),
                    industry = rep(sample(indust, length(panelID), replace = T), each = n),
                    urbanisation = rep(sample(urban, length(panelID), replace = T), each = n),
                    size = rep(sample(sizes, length(panelID), replace = T), each = n),
                    income = round(runif(100)/10,2),
                    Y_Outcome = round(rnorm(10,100,10),2),
                    sales = round(rnorm(10,10,10),2),
                    happiness = sample(10,10),
                    Sex = round(rnorm(10,0.75,0.3),2),
                    Age = sample(100,100),
                    educ = round(rnorm(10,0.75,0.3),2))
DT[, uniqueID := .I]                                                      # Creates a unique ID
DT <- as.data.frame(DT)

To make it slightly easier for someone who is not familiar with the packages to help, I have added what the call structure for each of the two packages looks like.

The structure of the second stage of ivreg is as follows:

second_stage <- ivreg(happiness ~ Factor:income + Factor:sales + educ | urbanisation:income + urbanisation:sales + educ, data = DT)

The structure for ivmodel is:

second_stage <- ivmodel(Y = DT$happiness, D = DT$Factor, Z = DT[, c("urbanisation", "size")], X = DT$educ, na.action = na.omit)
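
For reference, a minimal sketch of one syntactically valid way to pass both instruments to ivreg in this setup is shown below, interacting each instrument with the same variables that interact with the endogenous factor. It uses the fake data above and is only an illustration of the interface, not a claim that this is the right specification (possible_spec is a made-up name):

possible_spec <- ivreg(happiness ~ Factor:income + Factor:sales + educ |
                         urbanisation:income + urbanisation:sales + size:income + size:sales + educ,
                       data = DT)
summary(possible_spec)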

Any help with figuring out how to do this properly would be greatly appreciated!


Get this bounty!!!

#StackBounty: #large-data #instrumental-variables #hausman Interpretation of the Hausman test (overidentification in relation to IV'…

Bounty: 50

I am using survey data with a huge number of observations, such as the World Values Survey. Large sample sizes are obviously very nice, but I have encountered some downsides as well.

To give an example, in almost every econometric model I specify, about 90% of the variables are highly significant. So I will have to decide whether, in addition to an estimate being statistically significant, it is also economically significant, which is not always an easy thing to do.

The biggest issue, however, is that when resorting to instrumental variables, the Hausman test for overidentification is always very, very, very significant. See, to this end, this post.

How do I deal with this consequence of large sample sizes?

The only thing I can think of is to reduce the sample size. This, however, seems a very arbitrary way to get the test statistic down.


Get this bounty!!!
