#StackBounty: #hypothesis-testing #bayesian Hypothesis testing via separate inference for each group and then combining

Bounty: 50

Suppose there are two groups, A and B, and we are interested in inferring a certain parameter for each one and also the difference between the two parameters. Here we can take a Bayesian perspective and strive for a posterior distribution in each case. I am wondering if the following is a sound way of doing this:

  1. estimate the posterior for group A,
  2. estimate the posterior for group B, and
  3. estimate the posterior of the difference by drawing many samples from each of the first two posteriors and taking pairwise differences.

I am specifically unsure about this kind of divide-and-conquer approach, where each group is treated separately and the results are then combined. Usually, the inference is done in a single model where, for instance, a linear model is fitted with an indicator for group membership.
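
For concreteness, here is a minimal sketch of that single-model alternative, assuming a Bayesian logistic regression fitted with PyMC; the library choice, priors, and simulated data are my own assumptions, not part of the question:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# Hypothetical binary outcomes for the two groups.
y_a = rng.binomial(1, 0.30, size=200)
y_b = rng.binomial(1, 0.40, size=250)
y = np.concatenate([y_a, y_b])
group = np.concatenate([np.zeros(len(y_a)), np.ones(len(y_b))])  # indicator: 0 = A, 1 = B

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 1.5)   # group A level on the logit scale
    beta = pm.Normal("beta", 0.0, 1.5)     # shift for group B
    p = pm.math.invlogit(alpha + beta * group)
    pm.Bernoulli("obs", p=p, observed=y)
    # The difference in success probabilities is tracked inside the joint fit.
    pm.Deterministic("diff",
                     pm.math.invlogit(alpha + beta) - pm.math.invlogit(alpha))
    idata = pm.sample(2000, tune=1000, chains=4, random_seed=0)
```

With this formulation, the posterior of the difference comes directly out of the joint fit instead of being assembled afterwards.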

Let me give a simple example. Say the outcome is binary. One can then use a beta–Bernoulli model to infer the posterior of the success probability, which is a beta distribution for each group. As the last step, one can draw samples from the two beta posteriors and take their differences to approximate the posterior of the difference.
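
A minimal sketch of that divide-and-conquer computation, assuming Beta(1, 1) priors and made-up success counts (the counts and the number of draws are placeholders, not from the question):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical observed data: successes and trials per group.
successes_a, trials_a = 30, 100
successes_b, trials_b = 45, 120

# With a Beta(1, 1) prior, each posterior is again a beta distribution.
post_a = stats.beta(1 + successes_a, 1 + trials_a - successes_a)
post_b = stats.beta(1 + successes_b, 1 + trials_b - successes_b)

# Monte Carlo approximation of the posterior of the difference p_B - p_A.
n_draws = 100_000
diff = post_b.rvs(n_draws, random_state=rng) - post_a.rvs(n_draws, random_state=rng)

print("posterior mean of difference:", diff.mean())
print("95% credible interval:", np.quantile(diff, [0.025, 0.975]))
print("P(p_B > p_A):", (diff > 0).mean())
```

Because both group posteriors are available in closed form here, the only approximation is the Monte Carlo step for the difference.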

