# #StackBounty: #regression #hypothesis-testing #diagnostic $$H_0$$ vs $$H_1$$ in diagnostic testing

### Bounty: 50

Consider diagnostic testing of a fitted model, e.g. testing whether regression residuals are autocorrelated (a violation of an assumption) or not (no violation). I have a feeling that the null hypothesis and the alternative hypothesis in diagnostic tests often tend to be exchanged/flipped with respect to what we would ideally like to have.

If we are interested in persuading a sceptic that there is a (nonzero) effect, we usually take the null hypothesis to be that there is no effect, and then we try to reject it. Rejecting $$H_0$$ at a sufficiently low significance level produces convincing evidence that $$H_0$$ is incorrect, and we are therefore comfortable in concluding that there is a nonzero effect. (Of course, a number of auxiliary assumptions must also hold, as otherwise the rejection of $$H_0$$ may result from a violation of one of those assumptions rather than from $$H_0$$ actually being incorrect. And we never have 100% confidence but only, say, 95% confidence.)
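The conventional setup can be sketched in code. This is purely illustrative (the data, the effect size, and the use of a large-sample normal approximation instead of the exact t-distribution are all my assumptions, not part of the question):

```python
# Conventional setup: H0 "no effect" (mean = 0) is the null, and
# rejecting it is taken as evidence of a nonzero effect.
# Illustrative sketch only: large-sample normal approximation to the
# t-distribution, simulated data with an assumed true effect of 0.5.
import math
import random

random.seed(0)

n = 200
true_effect = 0.5  # a genuinely nonzero effect, so H0 should be rejected
x = [random.gauss(true_effect, 1.0) for _ in range(n)]

mean = sum(x) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
t_stat = mean / (sd / math.sqrt(n))

# Two-sided p-value via the normal approximation (reasonable for n = 200)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(t_stat) / math.sqrt(2))))

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value rejects H0 "no effect", which is exactly the
# conclusion we set out to persuade the sceptic of.
```

Here the error we control (the significance level) is the probability of falsely claiming an effect, which matches what the sceptic cares about.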

Meanwhile, in diagnostic testing of a model, we typically have $$H_0$$ that the model is correct and $$H_1$$ that there is something wrong with the model. E.g. $$H_0$$ is that regression residuals are not autocorrelated while $$H_1$$ is that they are autocorrelated. However, if we wanted to persuade a sceptic that our model is valid, we would instead take $$H_0$$ to correspond to a violation and $$H_1$$ to validity, so that rejecting $$H_0$$ would demonstrate validity. Thus the usual setup in diagnostic testing seems to exchange $$H_0$$ with $$H_1$$, and so we do not get to control the probability of the relevant error (declaring a flawed model valid).
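The flipped setup can also be sketched in code. Again illustrative only: I use a simple z-test of the lag-1 sample autocorrelation of residuals, relying on the large-sample result that under white noise $$r_1$$ is approximately $$N(0, 1/n)$$; the simulated residuals and the 5% level are assumptions for the example, not part of the question:

```python
# Diagnostic setup: here H0 is "residuals are NOT autocorrelated"
# (the model assumption holds) and H1 is "they are autocorrelated".
# Illustrative sketch: z-test of the lag-1 sample autocorrelation,
# using r1 ~ N(0, 1/n) approximately under white noise.
import math
import random

random.seed(1)

n = 500
# Simulated residuals with no autocorrelation (H0 actually true here)
resid = [random.gauss(0.0, 1.0) for _ in range(n)]

mean = sum(resid) / n
denom = sum((e - mean) ** 2 for e in resid)
r1 = sum((resid[t] - mean) * (resid[t - 1] - mean)
         for t in range(1, n)) / denom

z = r1 * math.sqrt(n)
reject = abs(z) > 1.96  # 5% significance level

print(f"r1 = {r1:.3f}, z = {z:.2f}, reject H0: {reject}")
# Note the asymmetry: failing to reject only means "no strong evidence
# of autocorrelation". The controlled error is falsely accusing a valid
# model, not falsely certifying a flawed one.
```

Note which error is controlled: the 5% level bounds the probability of rejecting a correct model, while the probability of failing to detect a genuinely flawed one (the error the sceptic of the model cares about) is the test's uncontrolled type II error.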

Is this a valid concern (philosophically and/or practically)? Has it been addressed and perhaps resolved?

