Probably a weird place to quote, but I just stumbled across this discussion on Reddit, and one of the commenters said (with typos fixed):
The effect size in a power calculation has no relationship whatsoever to the minimal effect size under a non-point null. These are two completely unrelated things. If you consider an effect size in your power calculation against a point null, you’re still testing a point null.
(……) Of course you can have a null of zero in a power simulation. You usually do. You compute power at the alternative, but the test statistic is computed under the assumption of the null. You shouldn’t be running around changing your null.
(……) You can use composite nulls, but this is in general very uncommon. Using a composite null is also different than running a power calculation for minimal effect size.
As indicated, the most obvious difference between (A) using a composite null and (B) running a power calculation for a minimal effect size is that the sampling distribution of the test statistic is computed under the respective null hypothesis. However, I’m not sure whether that’s the only difference. More importantly, what are the implications of each, and when should we use (A), (B), or both?
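To check my understanding of point (A): for a composite null, I believe the test’s size is the supremum of the rejection probability over the whole null set, which for a rule of the form $|\bar x| > c$ is attained at the boundary. Here is a quick sketch I wrote (all concrete numbers, e.g. $n=25$, $\alpha=0.05$, $\sigma=1$, and the $0.8\sigma$ boundary, are my own illustrative choices):

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch (my own, not from any reference): for a composite null
# H0: |mu| <= 0.8*sigma with rejection rule |xbar| > c, the size is
# sup over the null of P(reject), attained at the boundary mu = +/-0.8*sigma.
# So c must be calibrated at the boundary, not at mu = 0.
n, sigma, alpha = 25, 1.0, 0.05
delta = 0.8 * sigma                 # boundary of the composite null
se = sigma / np.sqrt(n)             # standard error of xbar

def size_at_boundary(c):
    # P(|xbar| > c) when mu sits exactly at the boundary mu = delta
    return norm.sf((c - delta) / se) + norm.cdf((-c - delta) / se)

# Solve size_at_boundary(c) = alpha for c by bisection (size decreases in c)
lo, hi = delta, delta + 10 * se
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if size_at_boundary(mid) > alpha else (lo, mid)
c_composite = (lo + hi) / 2

c_point = norm.ppf(1 - alpha / 2) * se   # critical value for the point null mu = 0
print(c_point, c_composite)              # the composite-null cutoff is much larger
```

If I did this right, the composite-null critical value is roughly the point-null one shifted out by the boundary, which would be one concrete way the two setups differ.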
To put it more concretely, given that I specify the expected minimum effect size as “the actual difference in means is at least $0.8\sigma$”, what is the difference among the following configurations?
$$\begin{align}
H_0&: \mu = 0 & H_1&: \mu \ne 0 \tag{1}\\
H_0&: -0.8\sigma \le \mu \le 0.8\sigma & H_1&: \mu \ne 0 \tag{2}\\
H_0&: \mu = 0 & H_1&: \mu < -0.8\sigma \lor \mu > 0.8\sigma \tag{3}
\end{align}$$
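For what it’s worth, here is how I currently understand configuration (1) plus a power calculation at $0.8\sigma$, following the commenter’s point that the test statistic is computed under the null while power is evaluated at the alternative. The specific numbers ($n=25$, $\alpha=0.05$, $\sigma=1$) are arbitrary choices of mine:

```python
import numpy as np
from scipy.stats import norm

# Power calculation for configuration (1): point null mu = 0, two-sided z-test.
# The critical value comes from the NULL distribution; only the data-generating
# mean uses the alternative mu = 0.8*sigma.  n and alpha are illustrative.
rng = np.random.default_rng(0)
n, sigma, alpha = 25, 1.0, 0.05
mu_alt = 0.8 * sigma

z_crit = norm.ppf(1 - alpha / 2)     # rejection threshold under H0: mu = 0

# Monte Carlo power: generate data under the alternative, test under the null
reps = 100_000
samples = rng.normal(mu_alt, sigma, size=(reps, n))
z = samples.mean(axis=1) / (sigma / np.sqrt(n))   # z-statistic assuming H0
power_sim = np.mean(np.abs(z) > z_crit)

# Closed-form check: power = P(reject) with the z-statistic shifted by
# mu_alt * sqrt(n) / sigma under the alternative
shift = mu_alt * np.sqrt(n) / sigma
power_exact = norm.cdf(-z_crit - shift) + 1 - norm.cdf(z_crit - shift)
print(power_sim, power_exact)        # both should agree closely
```

So in (1) the null stays $\mu = 0$ throughout; the $0.8\sigma$ only enters as the point at which power is evaluated, which I take to be the commenter’s distinction. What I can’t see is how (2) and (3) change this picture.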
By the way, I’m not familiar with the terms “power analysis” and “effect size”, so please don’t assume I have a strong background. Believe it or not, my professor didn’t even mention them when teaching us hypothesis testing. (Yeah, as a Statistics major, this really sucks.)