#StackBounty: #regression #econometrics #covariance #residuals #covariance-matrix Covariance matrix of the residuals in the linear regression model

Bounty: 50

I estimate the linear regression model:

$Y = X\beta + \varepsilon$

where $Y$ is an ($n \times 1$) vector of the dependent variable, $X$ is an ($n \times p$) matrix of independent variables, $\beta$ is a ($p \times 1$) vector of regression coefficients, and $\varepsilon$ is an ($n \times 1$) vector of random errors.

I want to estimate the covariance matrix of the residuals. To do so I use the following formula:

$\operatorname{Cov}(\varepsilon) = \sigma^2 (I-H)$

where I estimate $\sigma^2$ with $\hat{\sigma}^2 = \frac{e'e}{n-p}$, with $e = Y - X\hat{\beta}$ the residual vector, $I$ the identity matrix, and $H = X(X'X)^{-1}X'$ the hat matrix.
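This formula can be derived directly: since $(I-H)X = 0$, the residual vector is $e = (I-H)Y = (I-H)\varepsilon$, and because $I-H$ is symmetric and idempotent,

$$\operatorname{Cov}(e) = (I-H)\operatorname{Cov}(\varepsilon)(I-H)' = \sigma^2(I-H),$$

using the homoskedastic assumption $\operatorname{Cov}(\varepsilon) = \sigma^2 I$.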

However, in some sources I have seen the covariance matrix of the residuals estimated differently. There, the residuals are assumed to follow an $AR(1)$ process:

$\varepsilon_t = \rho \varepsilon_{t-1} + \eta_t$

where $E(\eta) = 0$ and $\operatorname{Var}(\eta) = \sigma^2_{0}I$.

The covariance matrix is estimated as follows:

$\operatorname{Cov}(\varepsilon) = \sigma^2 \begin{bmatrix}
1 & \rho & \rho^2 & \cdots & \rho^{n-1}\\
\rho & 1 & \rho & \cdots & \rho^{n-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\rho^{n-1} & \rho^{n-2} & \cdots & \rho & 1
\end{bmatrix}$

where $\sigma^2 = \frac{\sigma^2_0}{1-\rho^2}$.
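The scaling follows from stationarity: taking variances of both sides of the $AR(1)$ equation with $|\rho| < 1$,

$$\sigma^2 = \operatorname{Var}(\varepsilon_t) = \rho^2\operatorname{Var}(\varepsilon_{t-1}) + \sigma^2_0 \;\Longrightarrow\; \sigma^2 = \frac{\sigma^2_0}{1-\rho^2},$$

and iterating the recursion gives the autocovariances $\operatorname{Cov}(\varepsilon_t, \varepsilon_{t-s}) = \rho^{s}\sigma^2$, which is exactly the Toeplitz pattern above.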

My question is: are these two different specifications of the covariance matrix of the residuals, or are they somehow connected with each other?
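A minimal numerical sketch (not from the original question) that builds both matrices side by side on simulated $AR(1)$ data; the sample size, design matrix, coefficients, and $\rho$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 200, 3, 0.6
sigma2_0 = 1.0

# Hypothetical design matrix: intercept plus two standard-normal regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

# Simulate AR(1) errors eps_t = rho * eps_{t-1} + eta_t,
# starting from the stationary distribution
eta = rng.normal(scale=np.sqrt(sigma2_0), size=n)
eps = np.zeros(n)
eps[0] = eta[0] / np.sqrt(1 - rho**2)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + eta[t]

beta = np.array([1.0, 2.0, -0.5])      # arbitrary true coefficients
y = X @ beta + eps

# Specification 1: Cov(residuals) = sigma2_hat * (I - H)
H = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix X(X'X)^{-1}X'
e = y - H @ y                          # residual vector
sigma2_hat = (e @ e) / (n - p)
cov1 = sigma2_hat * (np.eye(n) - H)

# Specification 2: AR(1) Toeplitz matrix, sigma^2 = sigma0^2 / (1 - rho^2)
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
cov2 = sigma2_0 / (1 - rho**2) * rho**lags

print(cov1[:3, :3].round(3))
print(cov2[:3, :3].round(3))
```

Note that the first construction depends on the design matrix through $H$, while the second depends only on $\rho$, $\sigma^2_0$, and $n$.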

