## #StackBounty: #methodology #observational-study #research-design Prospective study and ascertainment of the exposure during the follow-…

### Bounty: 100

This question landed here after two migration requests, from "Medical Sciences" and "Operations Research".

I’m reading several prospective studies published in top journals in which a baseline modifiable exposure was associated with a final outcome.
One example is this study.

The authors aimed to evaluate whether oral hygiene behaviour can reduce cardiovascular risk. From what I understand, persistence of the exposure (oral hygiene behaviour) was not monitored during the study period, because no repeated assessment was performed during the follow-up.
Essentially, they associated the final outcome (10 years later) with the baseline exposure only.

1. Is that methodologically correct?
2. In a prospective cohort study, is there any need to control the exposure during the study period?


## #StackBounty: #social-psychology #methodology #experimental-psychology Terror management theory: Is this a valid conclusion based on th…

### Bounty: 50

I am currently reading the book ‘The Worm at the Core: On the Role of Death in Life’. I have to say that I am a bit skeptical about many of the claims it makes, and I’m trying to identify why that is, so I decided to dive into some of the articles it references.

One of the things I encountered was a reference to a paper by Schimel et al. (2007): “Is Death Really the Worm at the Core? Converging Evidence That Worldview Threat Increases Death-Thought Accessibility.” In Study 5, they state:

The results of Study 5 demonstrate, once again, that exposure to
worldview-threatening information causes thoughts of death to become
more accessible

They base this on an experiment in which they compare DTA (death-thought accessibility) across three groups:

• creationist/anti-creation, creationists who read anti-creation material (n=20, M=2.75)
• evolutionist/anti-creation, evolutionists who read anti-creation material (n=20, M=1.95)
• creationist/control, creationists who read neutral material (n=20, M=1.90)

I have trouble seeing how the conclusion follows from the experiment. Might it not simply be that creationists in general think more about death? And that reading a passage about the theory of evolution (which is inherently about life and death) makes everyone, regardless of their view on evolution, think more about death? I think the problem with this experiment is that a group is missing: the evolutionist/control group. It might well be that this group has a DTA of, say, 0.95 with a low SD. If that were the case, reading the passage would have increased the DTA of both the creationists and the evolutionists, and thus nothing could be said about the effect of worldview-threatening information. All we could then conclude is that reading about evolution increases DTA.
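The "missing control group" argument can be made concrete with a small simulation. The three reported group means and the group size (n=20) are taken from the question; the evolutionist/control mean of 0.95 and the common SD of 0.8 are hypothetical values chosen for illustration, not data from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, sd = 20, 0.8  # n from the study; sd is an assumed value

# Group means: the first three are reported in the question; the
# evolutionist/control mean is the questioner's hypothetical value.
groups = {
    ("creationist", "anti-creation"): 2.75,
    ("evolutionist", "anti-creation"): 1.95,
    ("creationist", "control"): 1.90,
    ("evolutionist", "control"): 0.95,  # hypothetical
}
data = {k: rng.normal(m, sd, n) for k, m in groups.items()}

# Under this hypothetical, reading the anti-creation passage raises DTA
# for BOTH worldview groups by a similar amount (0.85 vs. 1.00), so the
# data alone cannot separate "worldview threat raises DTA" from
# "reading about evolution raises DTA for everyone".
for who in ("creationist", "evolutionist"):
    t, p = stats.ttest_ind(data[(who, "anti-creation")],
                           data[(who, "control")],
                           equal_var=False)
    print(f"{who}: t = {t:.2f}, p = {p:.4f}")
```

Both within-worldview contrasts come out positive under this scenario, which is exactly why the evolutionist/control cell is needed to isolate the worldview-threat effect.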

I am not a researcher myself, nor am I very familiar with social sciences. I hope someone with more experience in these fields could explain to me if my train of thought is correct, or if not, where the fallacy in this train of thought is.

I would also be interested to find references to articles that contain experiments that substantiate the statement cited above.

Solomon, S., Greenberg, J., & Pyszczynski, T. (2015). *The worm at the core: On the role of death in life.* Random House.
Schimel, J., Hayes, J., Williams, T., & Jahrig, J. (2007). Is death really the worm at the core? Converging evidence that worldview threat increases death-thought accessibility. *Journal of Personality and Social Psychology, 92*(5), 789.


## #StackBounty: #hypothesis-testing #effect-size #power-analysis #intuition #methodology What's the difference between using a compos…

### Bounty: 50

Probably a weird place to quote from, but I just stumbled across this discussion on Reddit, where one commenter said (with typos fixed):

The effect size in a power calculation has no relationship whatsoever to the minimal effect size under a non-point null. These two are completely unrelated. If you consider an effect size in your power calculation against a point null, you’re still testing a point null.

(……) Of course you can have a null of zero in a power simulation. You usually do. You compute power at the alternative, but the test statistic is computed under the assumption of the null. You shouldn’t be running around changing your null.

(……) You can use composite nulls, but this is in general very uncommon. Using a composite null is also different than running a power calculation for minimal effect size.

As indicated, the most obvious difference between (A) using a composite null and (B) running a power calculation for a minimal effect size is that the sampling distributions of the test statistics are calculated under the assumption of their respective null hypotheses. However, I’m not sure whether that is the only difference. More importantly, what are their implications, and when should we use (A), (B), or both?

To put it more concretely, given that I specify the expected minimum effect size as “the actual difference in means is at least $0.8\sigma$”, what is the difference among the following configurations?

$$
\begin{align}
H_0&: \mu = 0 & H_1&: \mu \ne 0 \tag{1}\\
H_0&: -0.8\sigma \le \mu \le 0.8\sigma & H_1&: \mu \ne 0 \tag{2}\\
H_0&: \mu = 0 & H_1&: \mu < -0.8\sigma \lor \mu > 0.8\sigma \tag{3}
\end{align}
$$

By the way, I’m not familiar with the terms “power analysis” and “effect size”, so please don’t assume I have a strong background. Believe it or not, my professor didn’t even mention them when teaching us hypothesis testing. (Yeah, as a statistics major, this really sucks.)

