#StackBounty: #bayesian #uncertainty #high-dimensional #variational-bayes Uncertainty estimation in high-dimensional inference problems…

Bounty: 100

I’m working on a high-dimensional inference problem (around 2000 model parameters) for which we are able to robustly perform MAP estimation by finding the global maximum of the log-posterior using a combination of gradient-based optimisation and a genetic algorithm.
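For concreteness, here is a minimal sketch of the gradient-based refinement step in that MAP estimation, using hypothetical `log_posterior` / `log_posterior_grad` stand-ins (the real functions are of course problem-specific):

```python
import numpy as np
from scipy.optimize import minimize

def log_posterior(theta):
    # Hypothetical stand-in: in the real problem this is the
    # log-posterior of the ~2000-parameter model.
    return -0.5 * np.sum(theta**2)

def log_posterior_grad(theta):
    # Gradient of the stand-in log-posterior.
    return -theta

n_params = 2000
# e.g. the best candidate returned by the genetic algorithm
theta0 = np.random.default_rng(0).normal(size=n_params)

# Polish the candidate with a gradient-based optimiser (L-BFGS)
# on the negative log-posterior to obtain the MAP estimate.
result = minimize(lambda t: -log_posterior(t),
                  theta0,
                  jac=lambda t: -log_posterior_grad(t),
                  method="L-BFGS-B")
theta_map = result.x
```

In our setup the genetic algorithm supplies the starting candidates and guards against local maxima; a gradient-based step like the one above then polishes them.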

I’d very much like to be able to make some estimate of the uncertainties on the model parameters in addition to finding the MAP estimate.

We are able to calculate the gradient of the log-posterior with respect to the parameters efficiently, so in the long term we’re aiming to use Hamiltonian Monte Carlo (HMC) to sample the posterior, but for now I’m interested in non-sampling-based estimates.
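To illustrate why the gradient matters for that plan, here is a minimal sketch of the leapfrog trajectory an HMC update would integrate; `log_posterior_grad` is again a hypothetical stand-in for our efficient gradient:

```python
import numpy as np

def log_posterior_grad(theta):
    # Hypothetical stand-in for the efficiently computed gradient
    # of the log-posterior.
    return -theta

def leapfrog(theta, momentum, step_size, n_steps):
    """Leapfrog-integrate Hamiltonian dynamics for one HMC trajectory.

    Each step needs only the gradient of the log-posterior, which is
    why having that gradient cheaply makes HMC attractive here.
    """
    theta, momentum = theta.copy(), momentum.copy()
    momentum += 0.5 * step_size * log_posterior_grad(theta)  # half momentum step
    for _ in range(n_steps - 1):
        theta += step_size * momentum                         # full position step
        momentum += step_size * log_posterior_grad(theta)     # full momentum step
    theta += step_size * momentum                             # final position step
    momentum += 0.5 * step_size * log_posterior_grad(theta)   # final half momentum step
    return theta, momentum

rng = np.random.default_rng(0)
theta = rng.normal(size=2000)
momentum = rng.normal(size=2000)
theta_new, momentum_new = leapfrog(theta, momentum, step_size=0.01, n_steps=20)
```

A full HMC update would follow such a trajectory with a Metropolis accept/reject step on the total Hamiltonian, but the gradient evaluations dominate the cost.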

The only approach I know of is to calculate the inverse of the Hessian at the mode to approximate the posterior as a multivariate normal (the Laplace approximation), but even this seems infeasible for such a large system: even if we could calculate the $\sim 4\times10^{6}$ elements of the Hessian, I’m sure we couldn’t find its inverse.
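To spell out the approximation I mean, with $\hat{\theta}$ the MAP estimate and $H$ the Hessian of the negative log-posterior at the mode:

$$
p(\theta \mid \text{data}) \approx \mathcal{N}\!\left(\theta \,\middle|\, \hat{\theta},\, H^{-1}\right),
\qquad
H = -\left.\nabla^{2} \log p(\theta \mid \text{data})\right|_{\theta = \hat{\theta}},
$$

so with $\sim 2000$ parameters $H$ is a $2000 \times 2000$ matrix with $\sim 4\times10^{6}$ entries.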

Can anyone suggest what kind of approaches are typically used in cases like this?

Thanks!

