# Relationship between I(0) variables in the presence of structural breaks


Consider the following hypothetical data: \$A_t\$ is a time series tested to be I(0) with one known structural break, and \$B_t\$ is another time series in the same data set, also tested to be I(0), with a structural break in a different period (not the same as \$A\$'s).

Suppose I estimate the following regression:

\$\$A_t=\beta_0+\beta_1 B_t+\epsilon_t \quad (1)\$\$

Estimating the theoretical model \$(1)\$ by OLS, I obtain residuals (\$\hat{\epsilon}_t\$) that are actually tested to be I(1). In this situation, if my intention is to obtain a stable relationship between the variables, should I include dummies for the known structural breaks? That is, the following new model, with \$DA_t\$ representing the dummy for \$A\$'s structural break and \$DB_t\$ the dummy for \$B\$'s:

\$\$A_t=\alpha_0+\alpha_1 B_t+\gamma_1 DA_t+\gamma_2 DB_t+u_t \quad (2)\$\$

Edit: The comment below (by @ChrisHaug) concerns the way I am testing for a unit root in the residuals. If I have understood @ChrisHaug correctly, I should also run a unit-root test that allows for structural breaks in the residuals. If I obtain I(0) residuals in that case, will there be no bias in the coefficients estimated by running OLS on \$(1)\$?

My intuitive guess is still that I should include the dummies for the structural breaks (one for each variable) in the equation, as in \$(2)\$, to correct a possible bias. But what does the theory say?
