There is an observable $x$. It was measured first for a time $T$ under condition $A$, then for a time $T$ under condition $B$, with a small sampling interval $\Delta t$. It is known that $x$ is autocorrelated, i.e., future values depend on past values. It may be assumed that $x$ is effectively independent of its past beyond a certain known time interval $\tau$, and that $\tau \ll T$. The goal is to test whether the expected value of $x$ is the same under both conditions, and the question is how best to perform such a test. For this particular question I am interested in non-parametric methods, so as to handle cases where an explicit model of how $x$ depends on its past is unknown.
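To make the setup concrete, here is a minimal sketch that generates such data. The AR(1) model, the coefficient `phi`, and the sample sizes are all assumptions for illustration only (the question explicitly does not assume a known dependence model); AR(1) is just a convenient way to produce autocorrelated series with a known decorrelation time.

```python
import numpy as np

def simulate_ar1(n, mean, phi=0.9, sigma=1.0, rng=None):
    """Generate n autocorrelated samples around `mean` via an AR(1) process
    with coefficient phi (hypothetical model, for illustration only)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n)
    # Start from the stationary distribution so the whole series is stationary.
    x[0] = mean + rng.normal(scale=sigma / np.sqrt(1 - phi**2))
    for t in range(1, n):
        x[t] = mean + phi * (x[t - 1] - mean) + rng.normal(scale=sigma)
    return x

rng = np.random.default_rng(0)
x_a = simulate_ar1(5000, mean=0.0, rng=rng)  # condition A
x_b = simulate_ar1(5000, mean=0.2, rng=rng)  # condition B: shifted mean
# For AR(1), the decorrelation time is roughly tau ~ -Delta_t / ln(phi).
```

Any of the preprocessing ideas below can then be benchmarked on such synthetic data, where the ground truth (equal or unequal means) is known.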
I frequently see this problem solved by use of rank-sum test, however, the pre-processing varies:
- Idea 1: Use all datapoints for testing. Obviously bad, because the test assumes i.i.d. data.
- Idea 2: Average over time within each condition, yielding a single value per condition. Coherent, but extremely wasteful.
- Idea 3: Keep only timepoints spaced $\tau$ apart. Coherent, but again very wasteful, since most of the datapoints are discarded from the analysis.
- Idea 4: Split the data into time bins of length $\tau$ and average within each bin. Ok-ish, although consecutive bin means are still correlated.
- Idea 5: Same as Idea 4, but also omit every second bin. This is probably the best I can come up with off the top of my head.
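For reference, Ideas 3–5 can be sketched as simple preprocessing steps followed by SciPy's rank-sum (Mann–Whitney U) test. Here `tau_steps` (the decorrelation time expressed in sampling steps, $\tau / \Delta t$) is an assumed input, and `x_a`, `x_b` are placeholder i.i.d. arrays standing in for the real measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def subsample(x, tau_steps):
    """Idea 3: keep only points spaced tau_steps apart."""
    return x[::tau_steps]

def bin_means(x, tau_steps, skip=1):
    """Ideas 4 and 5: average over non-overlapping bins of length tau_steps;
    skip=2 additionally omits every second bin (Idea 5)."""
    n_bins = len(x) // tau_steps
    means = x[: n_bins * tau_steps].reshape(n_bins, tau_steps).mean(axis=1)
    return means[::skip]

# Placeholder data; in practice these would be the measured series.
rng = np.random.default_rng(1)
x_a = rng.normal(0.0, 1.0, 2000)
x_b = rng.normal(0.2, 1.0, 2000)
tau_steps = 20  # assumed: tau / Delta_t

stat, p = mannwhitneyu(
    bin_means(x_a, tau_steps, skip=2),
    bin_means(x_b, tau_steps, skip=2),
)
```

Note that binning (Idea 4/5) and subsampling (Idea 3) reduce the effective sample size by the same factor per retained point, but bin means have lower variance than single points, which is why the binned variants tend to be less wasteful in practice.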