I have an expensive model (or class of models). My baseline approach to quantifying uncertainty in the model parameters is Hessian-based standard errors, and I use k-fold cross-validation for model comparison and validation. A full bootstrap would be a welcome, more robust form of uncertainty quantification, but it is quite expensive. I think I should also be able to derive expectations for the variance of the leave-k-out estimates, to at least get a rough sense of where the Hessian-based standard errors are performing poorly. Does anyone know how to do this, or can you point to work that does? Something like an approximate jackknife?
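For concreteness, here is one cheap direction I have in mind: since the k-fold refits are already computed, apply a delete-a-group (grouped) jackknife to the leave-one-fold-out parameter estimates and compare that variance to the Hessian-based one. This is only a sketch under my own assumptions (equal-size folds, the standard grouped-jackknife formula), illustrated on the sample mean of iid data, where the analytic variance is known as a check:

```python
import numpy as np

def grouped_jackknife_variance(theta_folds):
    """Delete-a-group jackknife variance from g leave-one-fold-out estimates.

    theta_folds: array of shape (g,) or (g, p), where row i is the parameter
    estimate from refitting the model with fold i held out.
    Uses the grouped-jackknife formula
        var = (g - 1) / g * sum_i (theta_(i) - theta_bar)^2 .
    """
    theta_folds = np.asarray(theta_folds, dtype=float)
    g = theta_folds.shape[0]
    theta_bar = theta_folds.mean(axis=0)
    dev = theta_folds - theta_bar
    return (g - 1) / g * (dev ** 2).sum(axis=0)

# Toy check: the "model parameter" is the sample mean of iid data,
# so the analytic variance of the estimator is known.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
g = 10
folds = np.array_split(rng.permutation(len(x)), g)  # 10 equal folds of 50
theta_loo = np.array([np.delete(x, f).mean() for f in folds])

jk_var = grouped_jackknife_variance(theta_loo)
analytic_var = x.var(ddof=1) / len(x)  # classical variance of the mean
```

In the real setting `theta_loo` would instead hold the fitted parameters from each leave-one-fold-out refit, and `jk_var` would be compared per-parameter against the squared Hessian-based standard errors; a large discrepancy flags parameters where the Hessian approximation may be unreliable.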