#StackBounty: #self-study #bayesian #normal-distribution #conjugate-prior Correlated belief update: Is this understanding of Bayesian p…

Bounty: 50

I am reading the paper "The Knowledge-Gradient Policy for Correlated Normal Beliefs", which addresses the ranking and selection problem. The idea is as follows: we have $M$ distinct alternatives, and samples from alternative $i$ are iid $\mathcal{N}(\theta_i,\lambda_i)$, where $\theta_i$ is the unknown mean and $\lambda_i$ is the known variance. At every time step we can sample from only one of the alternatives, and at the end of $N$ time steps we pick the alternative that we believe has the highest mean reward.

More concretely, using the notation and text from the paper:

Let $\boldsymbol{\theta} = (\theta_1, \dots, \theta_M)'$ be the column vector of unknown means. Our initial belief about $\boldsymbol{\theta}$ is:

\begin{align}
\boldsymbol{\theta} \sim \mathcal{N}(\mu^0, \Sigma^0) \tag{1} \label{1}
\end{align}

Consider a sequence of $N$ sampling decisions, $x^{0}, x^{1}, \ldots, x^{N-1}$. The measurement decision
$x^{n}$ selects an alternative to sample at time $n$ from the set $\{1, \ldots, M\}$. The measurement error $\varepsilon^{n+1} \sim \mathcal{N}\left(0, \lambda_{x^{n}}\right)$ is independent conditionally on $x^{n}$, and the resulting sample observation is $\hat{y}^{n+1}=\theta_{x^{n}}+\varepsilon^{n+1}$. Conditioned on $\theta$ and $x^{n}$, the sample has conditional distribution $\hat{y}^{n+1} \sim \mathcal{N}\left(\theta_{x^{n}}, \lambda_{x^{n}}\right)$. Note that our assumption that the errors $\varepsilon^{1}, \ldots, \varepsilon^{N}$ are independent
differentiates our model from one that would be used for common random numbers. Instead,
we introduce correlation by allowing a non-diagonal covariance matrix $\Sigma^{0}$.

We may think of $\theta$ as having been chosen randomly at the initial time 0, unknown to
the experimenter but according to the prior distribution \eqref{1}, and then fixed for the duration
of the sampling sequence. Through sampling, the experimenter is given the opportunity to
better learn what value $\theta$ has taken.
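To make this generative model concrete, here is a minimal sketch in Python; the values of $M$, $\mu^0$, $\Sigma^0$ and $\lambda$ are illustrative choices of mine, not numbers from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 3                                   # number of alternatives (illustrative)
mu0 = np.zeros(M)                       # prior mean vector mu^0
Sigma0 = np.array([[1.0, 0.5, 0.2],     # prior covariance Sigma^0; the off-diagonal
                   [0.5, 1.0, 0.5],     #  entries are what introduce correlation
                   [0.2, 0.5, 1.0]])
lam = np.array([0.5, 0.5, 0.5])         # known sampling variances lambda_i

# theta is drawn once from the prior (1) and then held fixed.
theta = rng.multivariate_normal(mu0, Sigma0)

# A sampling decision x^n and the resulting observation y^{n+1} = theta_{x^n} + eps^{n+1}.
x = 1
y = theta[x] + rng.normal(0.0, np.sqrt(lam[x]))
```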

We define a filtration $(\mathcal{F}^{n})$ wherein $\mathcal{F}^{n}$ is the sigma-algebra generated by the samples
observed by time $n$ and the identities of their originating alternatives. That is, $\mathcal{F}^{n}$ is the sigma-algebra generated by $x^{0}, \hat{y}^{1}, x^{1}, \hat{y}^{2}, \ldots, x^{n-1}, \hat{y}^{n}$. We write $\mathbb{E}_{n}$ to indicate $\mathbb{E}\left[\cdot \mid \mathcal{F}^{n}\right]$,
the conditional expectation taken with respect to $\mathcal{F}^{n}$, and then define $\mu^{n}:=\mathbb{E}_{n}[\theta]$ and $\Sigma^{n}:=\operatorname{Cov}\left[\theta \mid \mathcal{F}^{n}\right]$. Conditionally on $\mathcal{F}^{n}$, our posterior belief about $\theta$ is multivariate normal with mean vector $\mu^{n}$ and covariance matrix $\Sigma^{n}$.

We can obtain $\mu^{n+1}$ and $\Sigma^{n+1}$ as functions of $\mu^{n}$, $\Sigma^{n}$, $\hat{y}^{n+1}$, and $x^{n}$ as follows:

\begin{align}
\mu^{n+1} &= \mu^{n} + \frac{\hat{y}^{n+1}-\mu_{x}^{n}}{\lambda_{x}+\Sigma_{xx}^{n}}\, \Sigma^{n} e_{x}, \tag{2} \label{2}\\
\Sigma^{n+1} &= \Sigma^{n} - \frac{\Sigma^{n} e_{x} e_{x}^{\prime} \Sigma^{n}}{\lambda_{x}+\Sigma_{xx}^{n}}, \tag{3} \label{3}
\end{align}
where $x = x^{n}$ and $e_{x}$ is a column $M$-vector of 0s with a single 1 at index $x$.
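A minimal sketch of the update \eqref{2}–\eqref{3} in code, continuing the illustrative variables above (the helper name `update` is mine):

```python
def update(mu, Sigma, lam, x, y):
    """One Bayesian update (mu^n, Sigma^n) -> (mu^{n+1}, Sigma^{n+1}) after
    observing y = y^{n+1} from alternative x = x^n, per equations (2)-(3)."""
    e_x = np.zeros(len(mu))
    e_x[x] = 1.0
    denom = lam[x] + Sigma[x, x]                          # lambda_x + Sigma^n_{xx}
    mu_new = mu + ((y - mu[x]) / denom) * (Sigma @ e_x)   # equation (2)
    Sigma_new = Sigma - np.outer(Sigma @ e_x, e_x @ Sigma) / denom   # equation (3)
    return mu_new, Sigma_new

mu1, Sigma1 = update(mu0, Sigma0, lam, x, y)
```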

My question:

The authors claim that in equation \eqref{2}, $\hat{y}^{n+1}-\mu_{x}^{n}$ has zero mean when conditioned on $\mathcal{F}^n$; this claim seems wrong to me. My understanding is that $\hat{y}^{n+1}$ still follows $\mathcal{N}(\theta^*_x,\lambda_x)$, where $\theta^*_x$ is some realisation sampled from $\mathcal{N}(\mu^0_x,\Sigma^0_{xx})$, and this true $\theta^{*}_x$ need not equal $\mu^n_{x}$.

On the basis of this claim, the authors design an algorithm and prove some theoretical results. This is a widely cited paper, so I think I am missing something here with respect to the Bayesian setting and posterior distributions.
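One could probe the quantity numerically, continuing the illustrative sketch above: fix a posterior state $(\mu^n, \Sigma^n)$, draw $\theta$ from the stated conditional belief $\mathcal{N}(\mu^n, \Sigma^n)$, then draw $\hat{y}^{n+1}$ given that $\theta$, and look at the empirical mean of $\hat{y}^{n+1}-\mu^n_x$. This is my own sketch, not from the paper, and whether drawing $\theta$ from $\mathcal{N}(\mu^n, \Sigma^n)$ is the right way to encode conditioning on $\mathcal{F}^n$ is exactly the point at issue:

```python
# Monte Carlo estimate of E_n[ y^{n+1} - mu^n_x ] for a fixed posterior state,
# using the stated conditional belief theta | F^n ~ N(mu^n, Sigma^n).
mu_n, Sigma_n = mu1, Sigma1            # pretend this is the posterior after some history
x_next = 2                             # next sampling decision x^n (illustrative)
theta_draws = rng.multivariate_normal(mu_n, Sigma_n, size=100_000)
y_draws = theta_draws[:, x_next] + rng.normal(0.0, np.sqrt(lam[x_next]), size=100_000)
print((y_draws - mu_n[x_next]).mean())  # compare against 0
```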

