#StackBounty: #monte-carlo Does the number of Required Monte Carlo simulations increase for including more input variables

Bounty: 50

This question describes a method to calculate the number of Monte Carlo simulation runs required. Another method checks the convergence of the mean of a particular output variable. Both of these methods focus on the output variables without regard to the number or variance of the input variables.

In general, will the number of Monte Carlo simulation runs need to increase if the number of varying input variables/parameters is also increased?

Or does the number of required runs depend only on the variance of the input variables?
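
As a concrete illustration of the second, convergence-based method, here is a minimal sketch that estimates the required number of runs from a pilot sample of the output; the functions `model` and `sample_inputs` and the tolerance values are illustrative assumptions, not part of the question. It makes the point that the run count is driven by the variance of the output, however many input variables feed into it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inputs(n_vars, n_runs, rng):
    # illustrative: n_vars independent standard-normal input variables
    return rng.normal(size=(n_runs, n_vars))

def model(x):
    # illustrative scalar output of the simulation
    return np.sum(x ** 2, axis=1)

def required_runs(n_vars, rel_err=0.01, z=1.96, pilot=1_000):
    """Pilot-based estimate of n such that z * s / sqrt(n) <= rel_err * |mean|."""
    y = model(sample_inputs(n_vars, pilot, rng))
    s, m = y.std(ddof=1), y.mean()
    return int(np.ceil((z * s / (rel_err * abs(m))) ** 2))

for k in (2, 5, 20):
    print(k, "input variables ->", required_runs(k), "runs")
```

In this toy model, adding input variables changes the required number of runs only through its effect on the output’s coefficient of variation; more inputs per se do not force more runs.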


Get this bounty!!!

#StackBounty: #sampling #optimization #monte-carlo #minimum #numerical-integration Minimize asymptotic variance of finitely many estimates

Bounty: 50

Let

  • $(E,\mathcal E,\lambda)$ be a $\sigma$-finite measure space;
  • $f:E\to[0,\infty)^3$ be a bounded Bochner integrable function on $(E,\mathcal E,\lambda)$ and $p:=\alpha_1f_1+\alpha_2f_2+\alpha_3f_3$ for some $\alpha_1,\alpha_2,\alpha_3\ge0$ with $\alpha_1+\alpha_2+\alpha_3=1$ and $$c:=\int p\:{\rm d}\lambda\in(0,\infty)\tag1$$
  • $\mu:=p\lambda$
  • $I$ be a finite nonempty set
  • $r_i$ be a probability density on $(E,\mathcal E,\lambda)$ with $$E_1:=\{p>0\}\subseteq\{r_i>0\}\tag2$$ for $i\in I$
  • $w_i:E\to[0,1]$ be $\mathcal E$-measurable for $i\in I$ with $$\{p>0\}\subseteq\left\{\sum_{i\in I}w_i=1\right\}\tag3$$

I’m running the Metropolis-Hastings algorithm with target distribution $\mu$ and use the generated chain to estimate $\lambda g$ for $\mathcal E$-measurable $g:E\to[0,\infty)^3$ with $\{p=0\}\subseteq\{g=0\}$ and $\lambda|g|<\infty$. The asymptotic variance of my estimate is given by $$\sigma^2(g)=\sum_{i\in I}(\mu w_i)\int_{E_1}\frac{\left|g-\frac pc\lambda g\right|^2}{r_i}\:{\rm d}\lambda.\tag4$$ Given a finite system $\mathcal H\subseteq\mathcal E$ with $\lambda(H)\in(0,\infty)$ for all $H\in\mathcal H$, I want to choose the $w_i$ such that $$\max_{H\in\mathcal H}\sigma^2(1_Hf)\tag5$$ is as small as possible.

It’s easy to see that, for fixed $g$, $(4)$ is minimized by choosing $w_i\equiv 1$ for the $i\in I$ which minimizes $\int_{E_1}\frac{\left|g-\frac pc\lambda g\right|^2}{r_i}\:{\rm d}\lambda$ and $w_j\equiv 0$ for all other $j\in I\setminus\{i\}$.

So, my idea is to bound $(5)$ by something which doesn’t depend on $\mathcal H$ anymore. Maybe we can bound it by the “variance” of $f$, using the fact that $\operatorname E[X]$ minimizes $x\mapsto\operatorname E\left[\left|x-X\right|^2\right]$.
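
For reference, the minimization fact invoked here is the standard bias–variance decomposition: for an integrable random vector $X$ and any constant vector $a$, $$\operatorname E\left[\left|X-a\right|^2\right]=\operatorname E\left[\left|X-\operatorname E[X]\right|^2\right]+\left|\operatorname E[X]-a\right|^2,$$ so $a=\operatorname E[X]$ is the minimizer of $a\mapsto\operatorname E\left[\left|X-a\right|^2\right]$.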

I’m not sure if this is sensible, but maybe it’s easier to consider $\sigma^2(g_H)$ instead of $\sigma^2(1_Hf)$, where $g_H:=1_H\left(f-\frac{\lambda(1_Hf)}{\lambda(H)}\right)$. By the aforementioned idea we obtain $$\sigma^2(g_H)\le\int_{E_1}\frac{|f|^2}{r_i}\:{\rm d}\lambda-\frac1{\theta_i}\left|\int_{E_1}\frac f{r_i}\:{\rm d}\lambda\right|^2\tag6,$$ where $\theta_i=\int_{E_1}\frac1{r_i}\:{\rm d}\lambda$. Now we could estimate the right-hand side of $(6)$ using Monte Carlo integration and compute the index $i$ for which this estimate is minimal. Is this a good idea, or can we do better?
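
A minimal sketch of that selection step, under illustrative assumptions: every $r_i$ can be both evaluated and sampled from, `f` returns values in $[0,\infty)^3$, and `in_E1` is the indicator of $E_1=\{p>0\}$; the names `sample_ri`, `dens_ri`, `f` and `in_E1` are hypothetical placeholders. Since $Y\sim r_i\lambda$ turns each integral on the right-hand side of $(6)$ into an expectation of a function of $Y$ weighted by $1_{E_1}(Y)/r_i(Y)^2$, all three integrals can be estimated from one sample per candidate density.

```python
import numpy as np

rng = np.random.default_rng(0)

def bound_rhs(sample_ri, dens_ri, f, in_E1, n=10_000):
    """Importance-sampling estimate of the right-hand side of (6) for one density r_i."""
    Y = sample_ri(n, rng)                  # i.i.d. draws Y ~ r_i * lambda
    w = in_E1(Y) / dens_ri(Y) ** 2         # 1_{E_1}(Y) / r_i(Y)^2
    fY = f(Y)                              # shape (n, 3)
    int_f2 = np.mean(w * np.sum(fY ** 2, axis=1))   # estimates the |f|^2 / r_i integral
    int_f = np.mean(w[:, None] * fY, axis=0)        # estimates the f / r_i integral (vector)
    theta = np.mean(w)                               # estimates theta_i
    return int_f2 - np.dot(int_f, int_f) / theta

def choose_weights(candidates, f, in_E1):
    """candidates: dict mapping i to (sample_ri, dens_ri); puts all weight on the best index."""
    bounds = {i: bound_rhs(s, d, f, in_E1) for i, (s, d) in candidates.items()}
    best = min(bounds, key=bounds.get)
    return best, {i: float(i == best) for i in candidates}   # w_i = 1 for best, 0 otherwise
```

Whether this choice, based on the bound rather than on $(5)$ itself, can be improved is exactly the question above.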


Get this bounty!!!

#StackBounty: #sampling #optimization #monte-carlo #minimum #numerical-integration Compute which of a finite number of integrals is min…

Bounty: 50

Let

  • $(E,\mathcal E,\lambda)$ be a $\sigma$-finite measure space;
  • $f:E\to[0,\infty)^3$ be a bounded Bochner integrable function on $(E,\mathcal E,\lambda)$ and $p:=\alpha_1f_1+\alpha_2f_2+\alpha_3f_3$ for some $\alpha_1,\alpha_2,\alpha_3\ge0$ with $\alpha_1+\alpha_2+\alpha_3=1$ and $$\int p\:{\rm d}\lambda\in(0,\infty)\tag1$$
  • $I$ be a finite nonempty set
  • $r:(I\times E)\times E\to[0,\infty)$ be the density of a Markov kernel with source $(I\times E,2^I\otimes\mathcal E)$ and target $(E,\mathcal E)$ with $$E_1:=\{p>0\}\subseteq\{r((i,x),\;\cdot\;)>0\}\;\;\;\text{for all }i\in I\tag2$$

Fix $x\in E$. I want to find the index $i\in I$ minimizing $$\sigma_i:=\lambda_i\left|f-\frac{\lambda_if}{\lambda_i(E_1)}\right|^2=\lambda_i(E_1)\operatorname E_i\left[\left|f-\operatorname E_i[f]\right|^2\right],$$ where $$\lambda_i:=\frac1{r((i,x),\;\cdot\;)}\left.\lambda\right|_{E_1}$$ and $\operatorname E_i$ is the expectation with respect to $$\operatorname P_i:=\frac{\lambda_i}{\lambda_i(E_1)}$$ for $i\in I$.

I’m not interested in the value of $\sigma_i$ itself.

Currently I’m estimating each $\sigma_i$ using Monte Carlo integration and then computing the minimum. However, this is extremely slow.

I’m not familiar with this kind of problem and hence this might be nonsensical, but note that if $Y_i$ is an $(E,\mathcal E)$-valued random variable with $Y_i\sim r((i,x),\;\cdot\;)\lambda$, then \begin{equation}\begin{split}&\sigma_i=\operatorname E\left[1_{E_1}(Y_i)\frac{|f(Y_i)|^2}{\left|r((i,x),Y_i)\right|^2}\right]\\&\;\;\;\;\;\;\;\;\;\;\;\;-\left(\operatorname E\left[1_{E_1}(Y_i)\frac1{\left|r((i,x),Y_i)\right|^2}\right]\right)^{-1}\left|\operatorname E\left[1_{E_1}(Y_i)\frac{f(Y_i)}{\left|r((i,x),Y_i)\right|^2}\right]\right|^2\end{split}\tag3\end{equation} for all $i\in I$. So, we might approximate $\sigma_i$ by an independent and identically distributed process $\left(Y_i^{(k)}\right)_{k\in\mathbb N}$ with $Y_i^{(1)}\sim r((i,x),\;\cdot\;)\lambda$ via $$A_i^{(n)}-\frac1{B_i^{(n)}}\left|C_i^{(n)}\right|^2\xrightarrow{n\to\infty}\sigma_i\;\;\;\text{almost surely},\tag4$$ where \begin{align}A_i^{(n)}:=\frac1n\sum_{k=1}^n\left|\frac{1_{E_1}f}{r((i,x),\;\cdot\;)}\right|^2\left(Y_i^{(k)}\right),\\B_i^{(n)}:=\frac1n\sum_{k=1}^n\frac{1_{E_1}}{\left|r((i,x),\;\cdot\;)\right|^2}\left(Y_i^{(k)}\right),\\C_i^{(n)}:=\frac1n\sum_{k=1}^n\frac{1_{E_1}f}{\left|r((i,x),\;\cdot\;)\right|^2}\left(Y_i^{(k)}\right)\end{align} for $n\in\mathbb N$ and all $i\in I$.

Currently I’m using $(4)$ to estimate each $\sigma_i$ and then computing the minimum among the estimates. But this is extremely slow and might even be erroneous.
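
A minimal sketch of computing the estimator $A_i^{(n)}-\left|C_i^{(n)}\right|^2/B_i^{(n)}$ from $(4)$ for a single index, under illustrative assumptions: `sample_r(i, n, rng)` draws from $r((i,x),\;\cdot\;)\lambda$, `dens_r(i, Y)` evaluates that density, `f` maps into $[0,\infty)^3$ and `in_E1` is the indicator of $E_1$; all of these names are hypothetical placeholders rather than an existing API.

```python
import numpy as np

def estimate_sigma(i, n, sample_r, dens_r, f, in_E1, rng):
    """Monte Carlo estimate of sigma_i via A - |C|^2 / B as in (4)."""
    Y = sample_r(i, n, rng)                    # i.i.d. draws Y ~ r((i, x), .) * lambda
    w = in_E1(Y) / dens_r(i, Y) ** 2           # 1_{E_1}(Y) / r((i, x), Y)^2
    fY = f(Y)                                  # shape (n, 3)
    A = np.mean(w * np.sum(fY ** 2, axis=1))   # A_i^{(n)}
    B = np.mean(w)                             # B_i^{(n)}
    C = np.mean(w[:, None] * fY, axis=0)       # C_i^{(n)}, a vector in R^3
    return A - np.dot(C, C) / B

# the index of interest is then the minimizer, e.g.
# best = min(I, key=lambda i: estimate_sigma(i, 10_000, sample_r, dens_r, f, in_E1,
#                                            np.random.default_rng(0)))
```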


Get this bounty!!!

#StackBounty: #variance #monte-carlo #asymptotics #central-limit-theorem How are the variance of an estimate to $\int_Bf\:{\rm d}\mu$ a…

Bounty: 50

Let $(E,\mathcal E,\mu)$ be a probability space, $(X_n)_{n\in\mathbb N_0}$ be an $(E,\mathcal E)$-valued ergodic time-homogeneous Markov chain with stationary distribution $\mu$ and $$A_nf:=\frac1n\sum_{i=0}^{n-1}f(X_i)\;\;\;\text{for }f\in\mathcal L^1(\mu)\text{ and }n\in\mathbb N.$$ Suppose we’re estimating an integral $$\mu f=\int f\:{\rm d}\mu$$ for some $f\in\mathcal L^1(\mu)$ and that $$\sqrt n(A_nf-\mu f)\xrightarrow{n\to\infty}\mathcal N(0,\sigma^2(f))\tag1$$ with $$\sigma^2(f)=\lim_{n\to\infty}n\operatorname{Var}[A_nf].$$

Assume that the support $B:=\{f\ne0\}$ is “small”, but $\mu(B)>0$. Now let $$f_B:=1_B\left(f-\frac{\mu(1_Bf)}{\mu(B)}\right).$$ Instead of $\sigma^2(f)$ we could consider $\sigma^2(f_B)$ which, by definition, should tell us something about the deviation of $f=1_Bf$ from its mean.

If our concern is the minimization of the asymptotic variance of our estimation of $\mu f$, does it make sense to consider $\sigma^2(f_B)$ instead of $\sigma^2(f)$? How are $\sigma^2(f_B)$ and $\sigma^2(f)$ related?

Background:

My actual concern is that I need to estimate $\lambda(hf)$ for a variety of $\mathcal E$-measurable $h:E\to[0,\infty)$, where $\lambda$ is a reference measure on $(E,\mathcal E)$ such that $\mu$ has a density $p$ with respect to $\lambda$ and $\{p=0\}\subseteq\{f=0\}$. Now the support $E_0:=\{p>0\}\cap\{h>0\}$ is very “small” and you may assume that $h=1_{\{h>0\}}$. I want to estimate $\lambda(hf)$ using the importance sampling estimator.

Say $(X_n)_{n\in\mathbb N_0}$ is the chain generated by the Metropolis-Hastings algorithm with target distribution $\mu$, $(Y_n)_{n\in\mathbb N}$ is the corresponding proposal sequence and $Z_n:=(X_{n-1},Y_n)$ for $n\in\mathbb N$. If the proposal kernel is denoted by $Q$, we know that $(Z_n)_{n\in\mathbb N}$ has stationary distribution $\nu:=\mu\otimes Q$.

Now, in light of the special form of the integral $\lambda(hf)$ to be estimated, I thought it might be tempting to consider the Markov chain corresponding to the successive times of its returns to the set $E\times E_0$. To be precise, let $\tau_0:=0$ and $$\tau_k:=\inf\left\{n>\tau_{k-1}:Y_n\in E_0\right\}\;\;\;\text{for }k\in\mathbb N.$$ Assuming that $E\times E_0$ is a recurrent set for $(Z_n)_{n\in\mathbb N}$, we know that $\left(Z_{\tau_k}\right)_{k\in\mathbb N}$ is again an ergodic time-homogeneous Markov chain with stationary distribution $$\nu_0:=\frac{\left.\nu\right|_{E\times E_0}}{\nu(E\times E_0)}.$$
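
To make the return-time construction concrete, here is a minimal sketch that records $\left(Z_{\tau_k}\right)_{k\in\mathbb N}$ along a random-walk Metropolis run on $\mathbb R$; the target density `p`, the set $E_0$ and the proposal scale are illustrative assumptions, not part of the question.

```python
import numpy as np

rng = np.random.default_rng(1)

def p(x):
    # illustrative unnormalized target density on R
    return np.exp(-0.5 * x ** 2)

def in_E0(y):
    # illustrative "small" set E_0 = {h > 0}
    return abs(y - 3.0) < 0.1

def mh_return_time_subchain(n_steps, prop_scale=1.0):
    """Random-walk Metropolis; keeps Z_n = (X_{n-1}, Y_n) only at times when Y_n lands in E_0."""
    x = 0.0
    subchain = []                                  # the recorded Z_{tau_k}
    for _ in range(n_steps):
        y = x + prop_scale * rng.normal()          # proposal Y_n ~ Q(X_{n-1}, .)
        if in_E0(y):
            subchain.append((x, y))                # a return time tau_k
        if rng.uniform() < min(1.0, p(y) / p(x)):  # Metropolis acceptance
            x = y
    return subchain

print(len(mh_return_time_subchain(100_000)), "proposals fell into E_0")
```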

I wondered whether it makes sense to build an estimator using $\left(Z_{\tau_k}\right)_{k\in\mathbb N}$ instead of $(Z_n)_{n\in\mathbb N}$ or if this would gain nothing.


Get this bounty!!!

