#StackBounty: #mcmc #metropolis-hastings #marginal #particle-filter #sequential-monte-carlo Using the pseudo marginal approach for esti…

Bounty: 50

I have $K>0$ i.i.d. unknown Markov chains $\{X_n^k : n \in \mathbb{N}\}$, $k=1,\dots,K$, on the discrete state space $S_X = \{1,2,3\}$; each chain runs and gives rise to observations $\{Y_n^k : n \in \mathbb{N}\}$, $k=1,\dots,K$, on the discrete state space $S_Y = \{1,2\}$.

Given a static parameter $\theta \in \mathbb{R}$, completely independent of $K$ (also static), the transition probability matrix $P_{\theta} := (P_{i,j})$, $P_{i,j} = \mathbb{P}(X_n = j \mid X_{n-1} = i)$, and emission matrix $B_{\theta} := (B_{i,j})$, $B_{i,j} = \mathbb{P}(Y_n = j \mid X_n = i)$, of each chain are known.
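For concreteness, here is a minimal sketch of what such $\theta$-dependent matrices might look like. The question only states that $P_\theta$ and $B_\theta$ are known functions of $\theta$; the logistic parameterization below is purely illustrative, not the asker's actual model.

```python
import numpy as np

def transition_matrix(theta):
    # Hypothetical parameterization: map theta in R to a mixing
    # probability p in (0, 1) and build a 3x3 row-stochastic matrix.
    p = 1.0 / (1.0 + np.exp(-theta))
    return np.array([
        [1 - p, p / 2, p / 2],
        [p / 2, 1 - p, p / 2],
        [p / 2, p / 2, 1 - p],
    ])

def emission_matrix(theta):
    # Hypothetical 3x2 emission matrix: B[i, j] = P(Y = j+1 | X = i+1).
    q = 1.0 / (1.0 + np.exp(-theta))
    return np.array([
        [q, 1 - q],
        [1 - q, q],
        [0.5, 0.5],
    ])
```

Any parameterization works for what follows, as long as both matrices are valid (rows sum to one) for every $\theta \in \mathbb{R}$.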

At each time point I observe $z_n = \sum_{k=1}^K y_n^k$. Over a time series of length $N$, I am able to estimate the marginal log-likelihood $\log \hat{p}(z_1, \dots, z_N \mid \theta, K)$ given any $\theta$ and $K$ using the bootstrap particle filter with $N_{p,k}$ particles for any given $k$.
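A sketch of such a bootstrap particle filter, assuming states are 0-indexed, a uniform initial distribution (which the question does not specify), and a common particle count for simplicity: each particle carries the hidden states of all $K$ chains, and the observation weight $\mathbb{P}(z_n \mid x_n^{1:K})$ is a Poisson-binomial probability for the number of chains emitting $y = 2$ (so that $z_n = K + \#\{k : y_n^k = 2\}$).

```python
import numpy as np

def poisson_binomial_pmf(probs, m):
    # P(sum of independent Bernoulli(probs) == m), by dynamic programming.
    if m < 0 or m > len(probs):
        return 0.0
    pmf = np.zeros(len(probs) + 1)
    pmf[0] = 1.0
    for q in probs:
        pmf[1:] = pmf[1:] * (1 - q) + pmf[:-1] * q
        pmf[0] *= 1 - q
    return pmf[m]

def log_marginal_likelihood(z, P, B, K, n_particles=500, rng=None):
    # Bootstrap particle filter for the aggregated observations
    # z_n = sum_k y_n^k.  A particle is a length-K vector of hidden
    # states (one per chain).  Uniform initialization is an assumption.
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.integers(0, 3, size=(n_particles, K))
    log_lik = 0.0
    for zn in z:
        # Propagate every chain of every particle through P.
        for p in range(n_particles):
            for k in range(K):
                particles[p, k] = rng.choice(3, p=P[particles[p, k]])
        # Weight: P(z_n | particle) = Poisson-binomial pmf of the number
        # of chains emitting y = 2, evaluated at z_n - K.
        w = np.array([poisson_binomial_pmf(B[particles[p], 1], zn - K)
                      for p in range(n_particles)])
        if w.sum() == 0.0:
            return -np.inf
        log_lik += np.log(np.mean(w))
        # Multinomial resampling.
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]
    return log_lik
```

Averaging the weights before resampling at each step makes $\hat{p}(z_{1:N} \mid \theta, K)$ an unbiased estimator of the marginal likelihood, which is the property the pseudo-marginal argument below relies on.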

I would like to sample from the posterior $p(theta,K|z_1,dots,z_N)$.
Now, I know that I can use a pseudo-marginal MCMC approach to sample from the posterior $p(\theta \mid z_1,\dots,z_N, K)$ using a particle estimate of the marginal log-likelihood. At the moment, I am also using this chain to sample $K$ (via a random walk on the integers), and it seems to be recovering $K$. However, I am unsure whether this part needs a reversible-jump element. The state space of my target distribution is $\mathbb{R} \times \mathbb{N}$ and therefore does not vary in dimension; however, since the particle filter requires $K \times N_{p,k}$ samples of the Markov chains at each iteration, I am still unsure whether the algorithm itself reaches the correct stationary distribution in this way.
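For reference, the scheme described above can be sketched as a pseudo-marginal Metropolis-Hastings chain on $(\theta, K)$. This is an illustrative implementation, not necessarily the asker's exact code: `log_lik_hat` stands for any unbiased particle-filter likelihood estimate (returned on the log scale), `log_prior` and the step sizes are assumptions, and the $K$-proposal is a symmetric $\pm 1$ random walk, so it cancels in the acceptance ratio. Note that the stored estimate for the current state is reused, not refreshed, which is essential for pseudo-marginal validity.

```python
import numpy as np

def pseudo_marginal_mh(z, n_iter, log_lik_hat, log_prior, theta0, K0,
                       step=0.2, rng=None):
    # Pseudo-marginal Metropolis-Hastings on (theta, K).
    # log_lik_hat(z, theta, K) must be the log of an unbiased estimate
    # of p(z_{1:N} | theta, K), e.g. from the bootstrap particle filter.
    rng = np.random.default_rng() if rng is None else rng
    theta, K = theta0, K0
    ll = log_lik_hat(z, theta, K)   # estimate for the current state: kept, never refreshed
    samples = []
    for _ in range(n_iter):
        theta_prop = theta + step * rng.standard_normal()
        K_prop = K + rng.choice([-1, 1])    # symmetric walk on the integers
        if K_prop >= 1:                     # K must stay positive
            ll_prop = log_lik_hat(z, theta_prop, K_prop)
            log_alpha = (ll_prop + log_prior(theta_prop, K_prop)
                         - ll - log_prior(theta, K))
            if np.log(rng.uniform()) < log_alpha:
                theta, K, ll = theta_prop, K_prop, ll_prop
        samples.append((theta, K))
    return samples
```

Since $(\theta, K)$ always lives in $\mathbb{R} \times \mathbb{N}$, the parameter dimension is fixed; the varying number of latent particles is absorbed into the auxiliary variables of the pseudo-marginal construction rather than into the parameter space itself.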

Any justification for a reversible-jump element, or clarification of my current method, would be greatly appreciated. Thanks!
