## #StackBounty: #time-series #stationarity Can stationary time series contain regular cycles and periods with different fluctuations

### Bounty: 50

I have just started trying to understand the notion of stationarity in time series. Basically, I have two questions:

1. Can a stationary time series contain regular cycles and thus seasonality patterns? For example, this tutorial states that a stationary time series cannot have seasonal components (predictable cycles): https://otexts.com/fpp2/stationarity.html. Yet in this figure (https://i.imgur.com/3lKCxEn.png) the green time series, which clearly has cycles (and thus seasonality), is labeled as ‘stationary’
(and I have seen this kind of figure quite often if you just google ‘stationary time series’).
2. Can a stationary time series have periods with no fluctuations and periods with high fluctuations? As far as I understand, the variance and the (auto)covariance should not change over time, which would make such a time series non-stationary. But in this picture (https://www.researchgate.net/profile/Hazrat_Ali3/publication/326619835/figure/fig10/AS:654171351044097@1532978012116/Non-stationary-and-stationary-time-series-As-CDR-activities-of-users-are-aggregated-on.png) the lower time series is labeled as stationary although it has periods with changing fluctuations.

I hope you can help me as I am confused about the concept of stationarity. I’d appreciate every comment.

Update: Someone has already commented with the impression that the time series labeled as stationary in the links above are in fact not stationary (though those statements were made with some uncertainty, as the plots are sketchy; see comments below). I’d like to hear further impressions on the two questions. I’d be happy if you could state your points of view.
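Both questions can be probed by simulation. The sketch below is illustrative only (all parameter values are my own assumptions, not taken from the linked figures): it contrasts a stationary AR(2) whose complex roots produce irregular, aperiodic cycles, a stationary ARCH(1) whose conditional variance clusters even though its unconditional variance is constant, and a deterministic seasonal series whose mean depends on the month and which is therefore not stationary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# 1) AR(2) with complex characteristic roots: irregular, aperiodic cycles.
#    Stationary because the roots of 1 - 1.5 z + 0.9 z^2 lie outside the unit circle.
ar = np.zeros(n)
for t in range(2, n):
    ar[t] = 1.5 * ar[t - 1] - 0.9 * ar[t - 2] + rng.standard_normal()
root_moduli = np.abs(np.roots([0.9, -1.5, 1.0]))  # highest-degree coefficient first
print("AR(2) root moduli:", root_moduli)           # both > 1 -> stationary

# 2) ARCH(1): calm and turbulent stretches, yet a constant unconditional
#    variance omega/(1-alpha) = 0.2/0.5 = 0.4 -> still stationary.
arch = np.zeros(n)
for t in range(1, n):
    sigma2_t = 0.2 + 0.5 * arch[t - 1] ** 2        # conditional variance clusters
    arch[t] = np.sqrt(sigma2_t) * rng.standard_normal()
print("ARCH(1) sample variance:", arch.var())

# 3) Deterministic monthly seasonality: the mean depends on t mod 12,
#    so the series is NOT stationary despite looking "regular".
seas = np.sin(2 * np.pi * np.arange(n) / 12) + rng.standard_normal(n)
print("mean in month 3 vs month 9:", seas[3::12].mean(), seas[9::12].mean())
```

The point of the sketch: cycles per se do not break stationarity; what breaks it is anything tied to the calendar (a fixed period and phase). Likewise, changing conditional fluctuations per se do not break it, as long as the unconditional variance does not depend on t.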

Get this bounty!!!

## #StackBounty: #time-series #stochastic-processes #stationarity #asymptotics #moving-average Is the MA($\infty$) process with i.i.d. noi…

### Bounty: 100

I have an MA($$\infty$$) process defined by
$$X_t = \sum_{k=0}^\infty \alpha_{k} \epsilon_{t-k}, \qquad t\in\mathbb{Z}$$
where the sums converge a.s. and the $$\epsilon_t$$ are i.i.d. centered noise with $$\operatorname{Var}(\epsilon_t) = \sigma^2 < \infty$$.

There are plenty of proofs in the literature that this process is weakly stationary.

Is this process strictly stationary?
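For intuition (not a proof), one can check numerically that shifting time leaves the distribution unchanged. The sketch below is my own illustration with assumed geometric weights $\alpha_k = 0.6^k$, Gaussian noise, and a truncation at 60 lags that makes the omitted tail negligible; strict stationarity of the full process requires all finite-dimensional distributions to be shift-invariant, while this only compares a few moments at two time points across many independent replications.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(1)
K = 60                                   # truncation point of the MA(inf) sum
alpha = 0.6 ** np.arange(K)              # absolutely summable weights (assumed)
reps, n = 20000, 200

eps = rng.standard_normal((reps, n + K))           # i.i.d. noise, many replications
windows = sliding_window_view(eps, K, axis=1)      # shape (reps, n+1, K), a view
X = windows @ alpha[::-1]                          # weighted sums of the K most recent shocks

t1, t2 = 50, 150                                   # two arbitrary time points
for t in (t1, t2):
    lag1 = np.cov(X[:, t], X[:, t + 1])[0, 1]
    print(f"t={t}: var={X[:, t].var():.3f}, lag-1 cov={lag1:.3f}, "
          f"median={np.median(X[:, t]):.3f}")
# theory: var = 1/(1 - 0.36) = 1.5625, lag-1 cov = 0.6 * 1.5625 = 0.9375
```

Across replications the variance, lag-1 covariance, and quantiles at t=50 and t=150 match each other and the theoretical values, consistent with the stronger claim being asked about.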

Get this bounty!!!

## #StackBounty: #interaction #panel-data #stationarity #unit-root what should I do about a non-stationary variable in a panel-data intera…

### Bounty: 50

We have panel data on immigration stocks, immigration flows, and immigration policy for 30 countries and 10-30 years. We would like to test the theory that the effect of immigration flows (i.e., annual numbers of incoming immigrants as % of pop) on immigration policy depends on immigrant stocks (i.e., non-citizens as % of pop). In other words, immigration flows affect policy, but only when there are few existing immigrants to begin with.

It seems to me that an interaction between immigration stocks and flows will allow a test of this theory. However, while our dependent variable (immigration policy) and our main independent variable (immigration flows) appear to be stationary, immigrant stocks is not. Standard solutions like first-differencing immigrant stocks won’t help because that would transform stocks into another measure of annual flows, which will not allow us to test the theory.

Another way of putting this is to ask: does stationarity matter only for the dependent variable? Or also for all independent variables?

Advice on how to proceed will be greatly appreciated!
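As a sanity check on the concern that differencing turns stocks into flows: if the stock is (approximately) the running sum of net flows, then first-differencing it mechanically recovers the flow series, so the differenced stock cannot serve as a distinct moderator. A minimal sketch with synthetic numbers (all values illustrative; `delta` is an assumed attrition rate for naturalization/emigration):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 30
flow = rng.uniform(0.1, 0.5, size=T)      # annual inflows, % of population (synthetic)

# No attrition: the stock is the cumulative sum of flows
stock = np.cumsum(flow)
print(np.allclose(np.diff(stock), flow[1:]))   # prints True: diff(stock) IS the flow

# With attrition delta: stock_t = (1 - delta) * stock_{t-1} + flow_t
delta = 0.05
stock_a = np.zeros(T)
for t in range(T):
    stock_a[t] = (1 - delta) * (stock_a[t - 1] if t else 0.0) + flow[t]
# diff(stock_a) = flow_t - delta * stock_{t-1}: still essentially a flow measure
print(np.allclose(np.diff(stock_a), flow[1:] - delta * stock_a[:-1]))  # prints True
```

This is why the usual route here is not to difference the stock but to test whether the variables are cointegrated and, if so, use an error-correction-type specification in which the non-stationary level enters legitimately. And yes, stationarity matters for regressors too, not only the dependent variable, since a non-stationary regressor can produce spurious inference.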

Get this bounty!!!

## #StackBounty: #time-series #stationarity #sas #unit-root #kpss-test Evaluating the importance of a unit-root

### Bounty: 100

I have a monthly time series and I’m trying to determine whether it is stationary; the dataset is composed of about 160 records.

Specifically, I’m running two tests found in the literature:

1. KPSS test: if $$H_0$$ is rejected, one cannot assume the time series is stationary;
2. Phillips-Perron test: if $$H_0$$ is rejected, one cannot assume that the time series has a unit root (so it is stationary);

I preferred to implement the Phillips-Perron test in place of the more common Augmented Dickey-Fuller test since the Phillips-Perron test adjusts for heteroskedasticity and serial correlation.

Here below, one can find the output of such analysis.

The KPSS test returns non-significant p-values for the single-mean case, implying that you cannot infer that the time series is non-stationary; likewise, the Phillips-Perron test returns significant p-values for the single-mean and trend cases, but not for the zero-mean case.

How should I interpret this result?

I wonder if one can evaluate the importance and strength of such a unit root; for instance, in a related question the user @ferdi uses the variance ratio test as a framework for evaluating the importance of a unit root in a time series.

Could you suggest some references on this?

I’m currently running the analysis in SAS, but any programming language would be nice.

Get this bounty!!!

## #StackBounty: #time-series #estimation #stationarity #garch #estimators Moving estimators for nonstationary time series, like loglikeli…

### Bounty: 50

While in standard (“static”) estimation, e.g. ML, we assume that all values come from a distribution with the same parameters, in practice we often have nonstationary time series, in which these parameters can evolve in time.

This is usually handled with sophisticated models like GARCH, which conditions sigma on recent errors and sigmas, or Kalman filters – both assume some arbitrary hidden mechanism.

I have recently worked on a simpler and more agnostic way: use a moving estimator, e.g. a loglikelihood with exponentially weakening weights of recent values:
$$\theta_T=\arg\max_\theta\, l_T\qquad \textrm{for}\qquad l_T= \sum_{t<T}\eta^{T-t}\ln(\rho_\theta(x_t))$$
intended to estimate local parameters, separately at each position. We don’t assume any hidden mechanism, only shift the estimator.

For example, it turns out that the EPD (exponential power distribution) family $$\rho(x) \propto \exp(-|x|^\kappa)$$, which covers the Gaussian ($$\kappa=2$$) and Laplace ($$\kappa=1$$) distributions, admits a cheap moving estimator of this kind (plots below), getting much better loglikelihood for daily log-returns of Dow Jones companies (100 years DJIA, 10 years individual), even exceeding GARCH: https://arxiv.org/pdf/2003.02149 – just using the $$(\sigma_{T+1})^\kappa=\eta (\sigma_{T})^\kappa+(1-\eta)|x-\mu|^\kappa$$ formula: replacing the estimator as an average with a moving estimator as an exponential moving average:
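For $$\kappa=2$$ the update above reduces to the familiar EWMA variance (RiskMetrics-style). A minimal sketch of the recursion, with $$\eta$$, $$\mu$$, and the initial scale chosen arbitrarily for illustration:

```python
import numpy as np

def moving_scale(x, kappa=2.0, eta=0.94, mu=0.0, s0=1.0):
    """Moving scale estimator: sigma_{T+1}^kappa = eta*sigma_T^kappa + (1-eta)*|x_T - mu|^kappa."""
    s_k = s0 ** kappa
    out = np.empty(len(x))
    for t, xt in enumerate(x):
        out[t] = s_k ** (1.0 / kappa)          # sigma_T, used before seeing x_T
        s_k = eta * s_k + (1 - eta) * abs(xt - mu) ** kappa
    return out

rng = np.random.default_rng(4)
# piecewise-constant true scale: sigma = 1 for 500 steps, then sigma = 3
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 3, 500)])
sig = moving_scale(x, kappa=2.0, eta=0.94)
print(sig[100:500].mean(), sig[900:].mean())   # tracks ~1, then adapts toward ~3
```

With $$\eta=0.94$$ the estimator forgets with a half-life of about 11 observations, so it follows the regime change after a short lag, which is the "local parameters" behavior described above.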

I also have an MSE moving estimator for adaptive least-squares linear regression (page 4 of https://arxiv.org/pdf/1906.03238), which can be used to get adaptive AR without a Kalman filter, and an analogous approach for adaptive estimation of joint distributions with polynomials: https://arxiv.org/pdf/1807.04119

Are such moving estimators considered in the literature?

What applications might they be useful for?

Get this bounty!!!

## #StackBounty: #arima #stationarity #box-jenkins calculating truncated infinite AR weights in practice for Arima model

### Bounty: 50

I am trying to figure out how to form the truncated infinite AR weights for a general time series process.

$$(1 - \phi_1 B - \phi_2 B^2 - \dots - \phi_p B^p)(1 - B)z_t = (1-\theta_1 B - \dots - \theta_q B^q)a_t$$

where $$\phi \text{ and } \theta$$ are constants, $$z_t$$ is a series of measurements, $$a_t$$ is noise, and $$B$$ is the backshift operator: $$B z_t = z_{t-1}$$.

Let

$$\phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \dots - \phi_p B^p\\ \theta(B) = 1-\theta_1 B - \dots - \theta_q B^q\\ \rho(B) = \phi(B)(1-B)$$

Then this model can also be represented as :
$$\rho(B)z_t = \theta(B)a_t$$

It is known that these models, if stationary, can be represented by an infinite series (written here for $$p=q=1$$):
$$\Psi(B) = \frac{\theta(B)}{\rho(B)} = \frac{1-\theta B}{(1-\phi B)(1-B)} = \sum_{i=0}^\infty \psi_i B^i$$

But $$\Psi(B)$$ does not converge if the difference operator $$(1-B)$$ is included in $$\rho(B)$$, and according to “Time Series Analysis: Forecasting and Control” (Box & Jenkins), this series is only valid if we use the “truncated form” of the model.

The truncated form consists of the sum of a homogeneous and a complementary solution with respect to some time point $$k$$:
$$z_t = C_k(t-k) + I_k(t-k)\\ \rho(B)C_k(t-k)=0\\ \rho(B)I_k(t-k) = (1-\theta B)a_t$$

Here, $$C_k$$ is the homogeneous solution and $$I_k$$ is the complementary solution, where $$I_k(t-k) = 0$$ when $$t\leq k$$, and $$k$$ is the point at which the series is truncated, such as the first data point of the time series.

1. How would one calculate the complementary function in practice?

The book gives the formula
$$I_k(t-k) = a_t + \psi_1 a_{t-1} + \dots + \psi_{t-(k+1)} a_{k+1}\\ I_k(s-k) = 0 \text{ if } s \leq k$$

I was thinking I should write out the equation:
$$\rho(B)I_k(t-k) = \theta(B)a_{t-k} = (1-\theta_1 B - \theta_2 B^2 - \dots - \theta_q B^q) a_{t-k}$$
Then, possibly calculate $$I_k(t-k)$$ for every data point with $$t \geq k$$, equate coefficients on the left- and right-hand sides, and solve using least squares if there are more data points than unknowns?
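Rather than equating coefficients point by point, the $$\psi$$-weights can be generated recursively from $$\rho(B)\Psi(B)=\theta(B)$$ by matching powers of $$B$$, a standard long-division recursion. The sketch below uses illustrative values $$\phi=0.5,\ \theta=0.3$$ (my own choice, not from the book); note that with the difference factor included the $$\psi_j$$ do not die out, for an ARIMA(1,1,1) they approach the constant $$(1-\theta)/(1-\phi)$$, which is exactly why only the truncated, finite sum defining $$I_k$$ is usable.

```python
import numpy as np

def psi_weights(rho, theta, n):
    """First n coefficients of psi(B) = theta(B)/rho(B), from rho(B)*psi(B) = theta(B).

    rho, theta: full polynomial coefficients [c0, c1, c2, ...] in powers of B.
    """
    psi = np.zeros(n)
    for j in range(n):
        t_j = theta[j] if j < len(theta) else 0.0
        acc = sum(rho[i] * psi[j - i] for i in range(1, min(j, len(rho) - 1) + 1))
        psi[j] = (t_j - acc) / rho[0]
    return psi

# illustrative ARIMA(1,1,1): phi = 0.5, theta = 0.3
# rho(B) = (1 - 0.5 B)(1 - B) = 1 - 1.5 B + 0.5 B^2
psi = psi_weights(rho=[1.0, -1.5, 0.5], theta=[1.0, -0.3], n=20)
print(psi[:6])                  # 1, 1.2, 1.3, 1.35, 1.375, ... -> limit 1.4
print((1 - 0.3) / (1 - 0.5))    # limiting value (1-theta)/(1-phi) = 1.4
```

With these weights, each $$I_k(t-k)$$ only needs the finitely many $$\psi_1,\dots,\psi_{t-(k+1)}$$ applied to the estimated innovations, with no least-squares step; fitting (e.g. least squares over initial conditions) would then only be needed for the homogeneous part $$C_k$$.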

1. How do I calculate the homogeneous solution in practice?

The book gives the formula

$$C_k(t-k) = G_0^{t-k}\sum_{j=0}^{d-1}A_j(t-k)^j + \sum_{j=1}^p D_j G_j^{t-k}\\ \text{where } \rho(B) = (1-G_1 B)(1-G_2 B)\dots(1-G_p B)(1-G_0 B)^d$$

Here, $$\rho(B)$$ has been factored so that you can see its roots $$(G_i)$$. This is the general formula for the case where one factor of $$\rho(B)$$ repeats $$d$$ times.

In this case, should I use a root-finding algorithm to find the roots of $$\rho(B)$$? If so, would I then form a system of equations, one for each time $$t$$, and solve that system for the $$A_j$$ and $$D_j$$, probably using least squares since there are more equations (one per data point) than unknowns?

Get this bounty!!!

## #StackBounty: #time-series #modeling #econometrics #stationarity #seasonality Decomposing U.S. Imports of Goods by Customs Basis from C…

### Bounty: 50

I’m working on a project that aims at analyzing the dataset U.S. Imports of Goods by Customs Basis from China (IMPCH). The main point is to make predictions, but we also want to do a little inference from the data.

As far as I know, the first step is to decompose the time series. There are at least five decomposition methods, and the residual (remainder) of each of them passes the unit root tests, indicating the results are stationary, which is desired. However, now I have to pick one of them to proceed, and all seem good.

Which one should I choose, `decompose`, `HoltWinters`, `forecast::ets`, `stl`, or `seasonal::seas`?

``````# dataset
# https://fred.stlouisfed.org/series/IMPCH

imp <- IMPCH$IMPCH
imp.ts <- ts(imp, frequency = 12, start = 1985)
log.imp.ts <- log(imp.ts)
ts.plot(log.imp.ts)
``````

``````## Decomposition
library(tseries)

# Stock decompose
log.imp.ts.dcps <- decompose(log.imp.ts)
plot(log.imp.ts.dcps)
log.imp.ts.dcps.remainder <- na.remove(log.imp.ts.dcps$random)
pp.test(log.imp.ts.dcps.remainder)
kpss.test(log.imp.ts.dcps.remainder)
``````

``````# Holt-Winters Filtering
log.imp.ts.hw <- HoltWinters(log.imp.ts)
plot(log.imp.ts.hw)
log.imp.ts.hw.remainder <- resid(log.imp.ts.hw)
pp.test(log.imp.ts.hw.remainder)
kpss.test(log.imp.ts.hw.remainder)
``````

``````# ETS
library(forecast)
log.imp.ts.ets <- ets(log.imp.ts)
plot(log.imp.ts.ets)
log.imp.ts.ets.remainder <- resid(log.imp.ts.ets)
pp.test(log.imp.ts.ets.remainder)
kpss.test(log.imp.ts.ets.remainder)
``````

``````# STL
log.imp.ts.stl <- stl(log.imp.ts, s.window = "periodic", robust = TRUE)
plot(log.imp.ts.stl)
log.imp.ts.stl.remainder <- log.imp.ts.stl$time.series[, "remainder"]
pp.test(log.imp.ts.stl.remainder)
kpss.test(log.imp.ts.stl.remainder)
``````

``````# X-13ARIMA-SEATS
library(seasonal)
log.imp.ts.seas <- seas(log.imp.ts)
plot(log.imp.ts.seas)
log.imp.ts.seas.remainder <- resid(log.imp.ts.seas)
pp.test(log.imp.ts.seas.remainder)
kpss.test(log.imp.ts.seas.remainder)
``````