# StackBounty: #bayesian #estimation Estimating the mean with previous knowledge

Bounty: 100

I have an unknown (discrete) probability distribution $p = \{p_s\}$, where $p_s$ is the probability of observing configuration $s$. Each configuration has an associated energy $E_s$ that I can compute.

If I want to estimate the mean

$$
\langle E \rangle_p := \sum_s E_s p_s \tag{1}
$$

by drawing $N$ samples, I will clearly use the sample mean estimator:

$$
M_E := \frac{1}{N} \sum_j E_j \tag{2}
$$

where $E_j$ is the energy obtained on the $j$-th draw.

At this point something happens and my distribution $p_s$ changes slightly, but in an unknown way; that is, the new distribution is

$$
p_s \to p'_s = p_s + \delta p_s.
$$

The energy of each configuration is unchanged.

Now I would like to estimate the new average

$$
\langle E \rangle_{p'} := \sum_s E_s p'_s
$$

Is there a way to do this that exploits the knowledge I already have about $p'$ [namely, that I have already estimated (1) via (2)]? The goal is to minimize the number of new samples $N$ that need to be taken.
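One standard way to fold a previous estimate into a new one is a precision-weighted (inverse-variance) combination: treat the old estimate $M_E$ as a prior with its own standard error, draw a small fresh batch from $p'$, and average the two estimates weighted by their inverse variances. This is a minimal sketch, not the answer to the question: it implicitly assumes the bias introduced by $\delta p$ is small compared to both statistical errors, and the function name and interface are illustrative.

```python
def combined_estimate(old_mean, old_var, new_samples):
    """Precision-weighted blend of a prior mean estimate (old_mean,
    with variance old_var) and a fresh batch of samples from the
    perturbed distribution.

    Assumption (hypothetical): the drift delta p shifts the true mean
    by less than the statistical errors, so the old estimate is still
    approximately unbiased for the new mean.
    """
    n = len(new_samples)
    new_mean = sum(new_samples) / n
    # unbiased sample variance of the new batch, then variance of its mean
    s2 = sum((x - new_mean) ** 2 for x in new_samples) / (n - 1)
    new_var = s2 / n
    # weight each estimate by its precision (inverse variance)
    w_old = 1.0 / old_var
    w_new = 1.0 / new_var
    est = (w_old * old_mean + w_new * new_mean) / (w_old + w_new)
    var = 1.0 / (w_old + w_new)  # combined variance is always smaller
    return est, var
```

Note that the combined variance $1/(w_\text{old} + w_\text{new})$ is smaller than either input variance, which is exactly the sense in which the old samples reduce the number of new ones needed, provided the small-drift assumption holds.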

EDIT

Let me add something about why this may not be hopeless.
On the one hand, once my distribution has changed, I can simply reuse the previous estimate as a guess for the new average without taking any additional samples. The question is essentially whether I can do better than that.

On the other hand, if I assume that the perturbation is of order $\epsilon$, can I obtain an estimate of the new mean that is accurate to the same order (at least approximately)?

I'd be interested in any reference, or even a no-go theorem or no-go argument.

EDIT 2

I was hoping that something like Kalman filtering or Bayesian inference could do the trick, but I know too little about that field.
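For what it's worth, the Kalman idea can be written down in scalar form: treat $\langle E \rangle$ as a slowly drifting state, model the unknown perturbation $\delta p$ as process noise (its variance would be set from the assumed size $\epsilon$ of the drift), and treat each new batch's sample mean as a noisy measurement. The sketch below is one possible formalization under those assumptions; all names and the choice of noise model are illustrative, not taken from the question.

```python
def kalman_update(prior_mean, prior_var, drift_var, meas, meas_var):
    """One step of a scalar Kalman filter tracking a drifting mean.

    drift_var models the unknown perturbation delta p: it inflates the
    prior variance before the new measurement is folded in (hypothetical
    modeling choice; e.g. drift_var ~ epsilon^2 * Var(E)).
    meas is the sample mean of the new batch, meas_var its variance.
    """
    # predict: the drift makes the old estimate less certain
    pred_var = prior_var + drift_var
    # update: blend prediction and measurement by their precisions
    gain = pred_var / (pred_var + meas_var)
    post_mean = prior_mean + gain * (meas - prior_mean)
    post_var = (1.0 - gain) * pred_var
    return post_mean, post_var
```

The appeal of this formulation is that `drift_var` makes the $\epsilon$ assumption explicit: with `drift_var = 0` the filter reduces to the plain inverse-variance combination of old and new estimates, while a large `drift_var` makes it discard the old estimate and rely on the new samples alone.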
