#StackBounty: #probability #panel-data #identifiability Static panel linear model

Bounty: 50

This question is about how to show identification of the fixed effects (jointly with the slope coefficients) in a static panel linear model.

A1 (model): The model is
$$
Y_{it}=\alpha_i+X_{it}^\top \beta+\epsilon_{it}
$$

for each $i=1,\dots,N$ and $t=1,\dots,T$, where $i$ indexes individuals and $t$ indexes time periods.

A2 (data): We assume an i.i.d. sample of $N$ observations $\{Y_{i1}, X_{i1},\dots, Y_{iT}, X_{iT}\}_{i=1}^N$ with $N$ large.

A3 (exogeneity): $E(\epsilon_{it}\mid X_{i1},\dots, X_{iT}, \alpha_i)=0$ for each $t=1,\dots,T$ and $i=1,\dots,N$.

Question: In the so-called "fixed effects model", $\alpha_1,\dots, \alpha_N$ are treated as parameters (together with $\beta$) and possibly estimated. How can we show that $(\alpha_1,\dots, \alpha_N, \beta)$ is identified under A1, A2, and A3?

Remark: I think that large $T$ is also needed to identify $\alpha_1,\dots, \alpha_N$. Feel free to add this assumption.


My thoughts and doubts:

I have found several sources discussing how to estimate $(\alpha_1,\dots, \alpha_N, \beta)$ or how to identify/estimate $\beta$ alone (by differencing out $\alpha_1,\dots, \alpha_N$; see the sketch below), but no papers or books explaining the joint identification of $(\alpha_1,\dots, \alpha_N, \beta)$.
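As a point of comparison, the differencing argument for $\beta$ alone can be sketched as follows (the standard within transformation; the rank condition at the end is an extra assumption beyond A1–A3). With $\bar Y_i\equiv T^{-1}\sum_{t=1}^T Y_{it}$, and $\bar X_i$, $\bar\epsilon_i$ defined analogously, averaging A1 over $t$ and subtracting gives
$$
Y_{it}-\bar Y_i=(X_{it}-\bar X_i)^\top \beta+(\epsilon_{it}-\bar\epsilon_i),
$$

and A3 then yields
$$
\beta=E\Big[\sum_{t=1}^T(X_{it}-\bar X_i)(X_{it}-\bar X_i)^\top\Big]^{-1} E\Big[\sum_{t=1}^T(X_{it}-\bar X_i)(Y_{it}-\bar Y_i)\Big],
$$

provided the matrix being inverted is nonsingular. This pins down $\beta$ from the distribution of the data but is silent about $\alpha_1,\dots,\alpha_N$.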

I’m aware of the incidental parameter problem, which prevents consistent estimation of $(\alpha_1,\dots, \alpha_N)$ when $T$ is fixed and $N\rightarrow \infty$. Hence, I suppose that $(\alpha_1,\dots, \alpha_N)$ are not identified when $T$ is fixed and $N\rightarrow \infty$. The incidental parameter problem disappears if we also let $T\rightarrow \infty$. Does this imply that identification of $(\alpha_1,\dots, \alpha_N)$ can be established? How?
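Here is a heuristic (my own sketch, not a formal identification argument) for why large $T$ seems necessary for $\alpha_1,\dots,\alpha_N$: if $\beta$ were known, the natural recovery of $\alpha_i$ would average over individual $i$'s time series,
$$
\frac{1}{T}\sum_{t=1}^T \big(Y_{it}-X_{it}^\top \beta\big)=\alpha_i+\frac{1}{T}\sum_{t=1}^T \epsilon_{it},
$$

and the noise term is an average of only $T$ errors. Its variability is of order $1/T$ (under weak dependence across $t$, an assumption beyond A1–A3) no matter how large $N$ is: additional individuals bring additional $\alpha_i$'s to learn, not additional information about any given one. This is one way to read the incidental parameter problem.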

In what follows, I report my incomplete attempt.

Let $Y_i\equiv (Y_{i1},\dots, Y_{iT})$ and $X_i\equiv (X_{i1},\dots, X_{iT})$, and let $K$ be the dimension of $\beta$. I assume $NT> N+K$.

First, I rewrite the model as
$$
Y_{it}=\sum_{\ell=1}^N \alpha_\ell \, 1\{i=\ell\}+X_{it}^\top \beta+\epsilon_{it},
$$

where I consider $\alpha_1,\dots, \alpha_N$ as parameters and the index $i$ as a random variable. Second, I rewrite A3 as
$$
E(\epsilon_{it}\mid i, X_{i1},\dots, X_{iT})=0,
$$

for each $i=1,\dots,N$ and $t=1,\dots,T$.
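To connect this to the original A3 (a one-line sketch, using that conditioning on the event $i=\ell$ fixes the effect at the parameter value $\alpha_\ell$):
$$
E(\epsilon_{it}\mid i=\ell, X_{i1},\dots, X_{iT})=E(\epsilon_{\ell t}\mid X_{\ell 1},\dots, X_{\ell T}, \alpha_\ell)=0.
$$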

By A3, for each $i=1,\dots, N$, there exists a realisation of the $T\times K$ matrix $X_{i}\equiv (X_{i1},\dots, X_{iT})$ (which I denote by $x_{i}\equiv (x_{i1},\dots, x_{iT})$) such that
$$
\begin{cases}
E(\epsilon_{i1}\mid i=1, X_{i}=x_1)=0 \\
\vdots\\
E(\epsilon_{iT}\mid i=1, X_{i}=x_1)=0 \\
\vdots\\
E(\epsilon_{i1}\mid i=N, X_{i}=x_N)=0 \\
\vdots\\
E(\epsilon_{iT}\mid i=N, X_{i}=x_N)=0
\end{cases}
$$

In turn, by A1,
$$
\begin{cases}
E(Y_{i1}\mid i=1, X_{i}=x_1)=\alpha_1+x_{11}^\top \beta \\
\vdots\\
E(Y_{iT}\mid i=1, X_{i}=x_1)=\alpha_1+x_{1T}^\top \beta \\
\vdots\\
E(Y_{i1}\mid i=N, X_{i}=x_N)=\alpha_N+x_{N1}^\top \beta \\
\vdots\\
E(Y_{iT}\mid i=N, X_{i}=x_N)=\alpha_N+x_{NT}^\top \beta
\end{cases}
$$

Stacking the left-hand sides into an $NT\times 1$ vector $Y$ of conditional expectations, I can rewrite this system more compactly as
$$
\underbrace{Y}_{NT\times 1}=\overbrace{\underbrace{\begin{pmatrix}
D & X
\end{pmatrix}}_{NT \times (N+K)}}^{\equiv \Gamma}\; \overbrace{\underbrace{\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix}}_{(N+K)\times 1}}^{\equiv \phi},
$$

where $D$ is the $NT\times N$ matrix of individual dummies, $X$ stacks the rows $x_{it}^\top$, and $\alpha\equiv(\alpha_1,\dots,\alpha_N)^\top$.

Next,
$$
\Gamma^\top Y= \Gamma^\top \Gamma \phi.
$$

Thus,
$$
\phi=(\Gamma^\top \Gamma)^{-1} \Gamma^\top Y,
$$

under the assumption that $\Gamma^\top \Gamma$ is invertible.

I would be done with the proof if I could claim that $Y$ is known for $N$ large under assumption A1. I don’t think this is the case, though. I suppose that somehow we also need large $T$, but I don’t see clearly how.
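To make the algebra above concrete, here is a minimal simulation sketch of the dummy-variable least-squares step, solving the stacked system with $\Gamma=(D\ X)$. All data-generating values below are mine, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 50, 20, 2                       # individuals, time periods, regressors

# Illustrative data-generating values (mine, not from the question)
alpha = rng.normal(size=N)                # individual effects alpha_1, ..., alpha_N
beta = np.array([1.0, -0.5])              # slope vector, K = 2
X = rng.normal(size=(N, T, K))            # regressors X_it
eps = rng.normal(scale=0.3, size=(N, T))  # errors satisfying A3 by construction
Y = alpha[:, None] + X @ beta + eps       # A1: Y_it = alpha_i + X_it' beta + eps_it

# Stack into the NT x (N + K) system: Y = Gamma phi with Gamma = (D  X)
D = np.kron(np.eye(N), np.ones((T, 1)))   # NT x N block of individual dummies
Gamma = np.hstack([D, X.reshape(N * T, K)])
y = Y.reshape(N * T)

# phi = (Gamma' Gamma)^{-1} Gamma' y, assuming Gamma has full column rank
phi_hat, *_ = np.linalg.lstsq(Gamma, y, rcond=None)
alpha_hat, beta_hat = phi_hat[:N], phi_hat[N:]

print("max |alpha_hat - alpha|:", np.abs(alpha_hat - alpha).max())
print("beta_hat:", beta_hat)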

