#StackBounty: #chi-squared #binning #application How Do You Choose The Number of Bins To Use For A Chi-Squared GOF Test?

Bounty: 50

I’m working on developing a physics lab about radioactive decay, and in analyzing sample data I’ve taken, I ran into a statistics issue that surprised me.

It is well known that the number of decays per unit time by a radioactive source is Poisson distributed. The way the lab works is that students count the number of decays per time window, and then repeat this many, many times. Then they bin their data by the number of counts, and do a $\chi^2$ goodness of fit test with 1 parameter estimated (the mean) to check whether or not the null hypothesis (the data is drawn from a Poisson distribution with the estimated mean value) holds. Hopefully they’ll get a large p-value and conclude that physics indeed works (yay).

I noticed that the way I binned my data had a large effect on the p-value. For example, if I chose lots of very small bins (e.g. a separate bin for each integer: 78 counts/min, 79 counts/min, etc.), I got a small p-value and would have had to reject the null hypothesis. If, however, I binned my data into fewer bins (e.g. using the number of bins given by Sturges’ rule: $1+\log_{2}(N)$), I got a much larger p-value and did NOT reject the null hypothesis.

Looking at my data, it looks extremely Poisson-distributed (it lines up almost perfectly with my expected counts/minute). That said, there are a few counts in bins very far away from the mean. That means when computing the $\chi^2$ statistic using very small bins, I have a few terms like:
$$\frac{(\text{Observed}-\text{Expected})^2}{\text{Expected}} = \frac{(1-0.05)^2}{0.05}=18.05$$
This leads to a high $\chi^2$ statistic, and thus a low p-value. As expected, the problem goes away for larger bin widths, since the expected value never gets that low.
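
For concreteness, here is a rough sketch of the effect with simulated Poisson data (the mean, sample size, and bin edges below are made up for illustration and are not my actual lab numbers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean = 78            # hypothetical counts/minute
n_windows = 500           # hypothetical number of counting windows
data = rng.poisson(true_mean, n_windows)
mu_hat = data.mean()      # the one estimated parameter

def chi2_gof(counts, mu, edges):
    """Chi-squared GOF test of counts against Poisson(mu), using the given bin edges."""
    observed, _ = np.histogram(counts, bins=edges)
    # P(X in [edge_i, edge_{i+1})) for integer-valued X; the tails outside the
    # observed range are ignored here for brevity
    cdf = stats.poisson.cdf(np.asarray(edges) - 0.5, mu)
    expected = np.diff(cdf) * len(counts)
    chi2 = np.sum((observed - expected) ** 2 / expected)
    dof = len(observed) - 1 - 1        # bins - 1 - (one estimated parameter)
    return chi2, stats.chi2.sf(chi2, dof)

# Fine binning: one bin per integer count
fine_edges = np.arange(data.min(), data.max() + 2)

# Coarse binning: roughly 1 + log2(N) bins (Sturges' rule), snapped to integers
k = int(np.ceil(1 + np.log2(len(data))))
coarse_edges = np.unique(np.round(np.linspace(data.min(), data.max() + 1, k + 1)))

print("fine bins:   chi2 = %.1f, p = %.3f" % chi2_gof(data, mu_hat, fine_edges))
print("coarse bins: chi2 = %.1f, p = %.3f" % chi2_gof(data, mu_hat, coarse_edges))
```

With the fine binning, the near-empty bins in the tails contribute large terms like the one above; with the coarse binning they get pooled and the statistic drops.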

Questions:

Is there a good rule of thumb for choosing bin sizes when doing a $\chi^2$ GOF test?

Is this discrepancy between outcomes for different bin sizes something that I should have known about*, or is it indicative of some larger problem in my proposed data analysis?


Thank you

*(I took a stats class in undergrad, but it’s not my area of expertise.)



#StackBounty: #distributions #statistical-significance #chi-squared Describing Specialization

Bounty: 50

I am trying to formalize an observed trend. To simplify what I am trying to do, suppose a dataset of salesmen selling items A, B, C, and D. In 1950 the proportion of each item sold is (0.2, 0.2, 0.1, 0.5), but this shifts to (0.4, 0.1, 0.1, 0.4) by 2000 (or some other statistically significant shift); this shift can be shown with a chi-square test. It is important to note that the group of salesmen in 1950 is not the same as in 2000.

Now what I would like to show is that, taking the shift in product sales into account, we witness a specialization in certain products: whereas in 1950 salesmen sold all products fairly equally, in 2000 salesmen increasingly focus on certain products. For example, a 1950 salesperson may have a distribution of sales more or less representative of sales overall – (0.2, 0.2, 0.1, 0.5) – while a 2000 salesperson may sell 0.9 of A and 0.03 of each of the others.

I was wondering how one would go about this. Would it be appropriate to compare the top 10% of salespeople in each product and show an increasing discrepancy in how they sell when compared to a normalized hypothetical salesperson? Is there a more standard way of doing this?
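
For concreteness, here is a rough sketch of the kind of per-salesperson comparison I have in mind, with made-up counts (this is just one ad hoc way to score it, not necessarily a standard method):

```python
import numpy as np

# rows = individual salespeople, columns = products A, B, C, D (made-up counts)
sales_1950 = np.array([[20, 18, 11, 52],
                       [19, 22,  9, 48],
                       [21, 20, 10, 50]])
sales_2000 = np.array([[90,  3,  3,  4],
                       [ 5, 40,  5, 50],
                       [10,  5, 80,  5]])

def specialization(sales):
    """Mean per-salesperson chi-square distance from that year's pooled product mix."""
    pooled = sales.sum(axis=0) / sales.sum()       # overall product shares that year
    distances = []
    for person in sales:
        expected = person.sum() * pooled           # what a "typical" mix would look like
        distances.append(np.sum((person - expected) ** 2 / expected))
    return np.mean(distances)

print("1950:", specialization(sales_1950))  # small: everyone resembles the pooled mix
print("2000:", specialization(sales_2000))  # large: individuals concentrate on products
```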

Any help would be very much appreciated.

