*Bounty: 50*

Suppose I am running a physical experiment and would like to measure its output, a random variable. I inherently introduce measurement error when sampling the random variable. There is also sampling error, due to observing only a finite number of realizations of my random variable. Is there any literature on how to balance these two types of error?

It is not hard to imagine a scenario in which I can take more samples by using a less precise measurement device, so a natural question is how to decide what precision to use.

For instance, suppose that by rounding my measurements to the nearest centimeter rather than the nearest millimeter, I can increase the number of samples I take by a factor of 5. Which precision should I use?
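To make the tradeoff concrete, here is a small Monte Carlo sketch of this example. Everything about the underlying distribution is an assumption for illustration (the question does not specify one): I take the true quantity to be normal with mean 12.34 mm and standard deviation 3 mm, and compare the MSE of the sample mean at millimeter precision against centimeter precision with five times as many samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the question): the true quantity is
# normal with mean 12.34 mm and standard deviation 3 mm.
true_mean, true_sd = 12.34, 3.0
n_mm = 200           # samples affordable at millimeter precision
n_cm = 5 * n_mm      # five times as many at centimeter precision
n_trials = 4000      # Monte Carlo repetitions

def mse_of_mean(n, step, trials):
    """MSE of the sample mean when every draw is rounded
    deterministically to the nearest multiple of `step` (in mm)."""
    errs = np.empty(trials)
    for t in range(trials):
        x = rng.normal(true_mean, true_sd, size=n)
        errs[t] = (np.round(x / step) * step).mean() - true_mean
    return float(np.mean(errs ** 2))

mse_mm = mse_of_mean(n_mm, 1.0, n_trials)    # round to nearest mm
mse_cm = mse_of_mean(n_cm, 10.0, n_trials)   # round to nearest cm
print(f"MSE of the mean, mm precision (n={n_mm}): {mse_mm:.4f}")
print(f"MSE of the mean, cm precision (n={n_cm}): {mse_cm:.4f}")
```

With these particular parameters the coarse grid (10 mm, comparable to the standard deviation) introduces a bias in the mean that the extra samples cannot remove, so the answer depends on how the grid width compares to the spread of the distribution, not just on the sample-size multiplier.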

I am aware of Sheppard’s corrections, but I don’t think they are general enough to cover all cases; e.g., they don’t apply if my data are discrete. Moreover, even in the continuous case, Sheppard’s corrections assert that rounding does not affect the mean. That is reasonable when the rounding grid is fine relative to the spread of the distribution, but it is clearly not true when the measurement precision is very coarse.
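The breakdown at coarse precision can be checked numerically. In this sketch (again assuming, for illustration, a normal variable with mean 12.34 and standard deviation 3), the excess variance of the rounded data tracks Sheppard’s h²/12 term when the grid width h is fine, while a visible mean shift appears once h is on the order of the standard deviation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical distribution for the check (not from the question).
x = rng.normal(12.34, 3.0, size=1_000_000)

results = {}
for h in (0.1, 1.0, 10.0):           # rounding grid width, same units as x
    r = np.round(x / h) * h           # deterministic rounding to the grid
    mean_shift = r.mean() - x.mean()  # Sheppard predicts ≈ 0
    var_excess = r.var() - x.var()    # Sheppard predicts ≈ h**2 / 12
    results[h] = (mean_shift, var_excess)
    print(f"h={h:5.1f}  mean shift={mean_shift:+.4f}  "
          f"var excess={var_excess:.4f}  h^2/12={h*h/12:.4f}")
```

For h = 0.1 and h = 1 the variance excess is close to h²/12 and the mean shift is negligible; for h = 10 (larger than the standard deviation) the mean is clearly biased, which is the failure mode described above.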

To clarify, I am considering the case where the rounding error is a *deterministic* function of my original random variable; i.e., assume that I sample with infinite precision and then round to my measurement grid (say, the integers).