I would like to understand why an endogenous breakpoint is a problem when testing for a difference in means between two subperiods of a time series. Here are my questions.
- The breakpoint cannot ever be chosen by the researcher, right? Even if the researcher has a “strong feeling” they “know” the “right” breakpoint, this is still endogenous, right? What’s the difference between endogenous and exogenous, then?
- Quandt (1958, p. 877) describes the problem of an endogenous breakpoint as, “…the determination of t from the data reduces the degrees of freedom…of the variance ratio.” In other words, the critical value of the “usual” test statistic is too small. Is there an easy way to describe how or why this is so?
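To make my second question concrete, here is a minimal simulation sketch (my own construction, not from Quandt) of what I think the quote means: under the null of no break, if I pick the breakpoint that maximizes the two-sample t-statistic and then compare it to the "usual" 5% cutoff of roughly 1.96, I reject far more often than 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_abs_t(y, trim=0.15):
    """Max |two-sample t-statistic| over all candidate breakpoints,
    trimming the first/last 15% of the sample (a common convention)."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    stats = []
    for k in range(lo, hi):
        a, b = y[:k], y[k:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        stats.append(abs(a.mean() - b.mean()) / se)
    return max(stats)

# i.i.d. normal data: the null of "no break" is true by construction
n, reps = 200, 500
rejections = sum(max_abs_t(rng.standard_normal(n)) > 1.96 for _ in range(reps))
rate = rejections / reps
print(f"Null rejection rate at the 'usual' 5% cutoff: {rate:.2f}")
```

The rejection rate comes out far above 0.05, which (if I have understood correctly) is exactly the sense in which the endogenous search makes the usual critical value "too small."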
I would also like to re-frame the "endogenous breakpoint" problem as a "selection bias" problem in the Rubin Causal Model sense. In other words, there is selection bias unless the breakpoint is exogenous or the search over breakpoints is explicitly accounted for, e.g., by the Quandt/Andrews supF approach. Is there a way to explain how an endogenous breakpoint creates selection bias, and how or why the Quandt/Andrews supF approach overcomes the selection bias problem?
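My current (possibly wrong) understanding of the remedy, as a simulation sketch: rather than comparing the maximized statistic to a chi-square(1) cutoff, compare it to the null distribution of the sup statistic itself, so that the breakpoint search is built into the critical value. This is a Monte Carlo stand-in for Andrews' tabulated asymptotic critical values, not his actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def sup_f(y, trim=0.15):
    """Sup of Chow-type F (squared t) statistics for a single mean shift,
    over breakpoints trimmed away from the sample edges."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    best = 0.0
    for k in range(lo, hi):
        a, b = y[:k], y[k:]
        se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
        best = max(best, (a.mean() - b.mean()) ** 2 / se2)
    return best

# Simulate the null distribution of supF and take its 95th percentile
n, reps = 200, 500
null_sups = [sup_f(rng.standard_normal(n)) for _ in range(reps)]
crit = np.quantile(null_sups, 0.95)
print(f"Simulated supF 5% critical value: {crit:.2f} "
      f"(naive chi-square(1) cutoff would be 3.84)")
```

The simulated cutoff is far above 3.84, which matches my reading that the supF approach "pays for" the endogenous search through a larger critical value rather than by making the breakpoint exogenous.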