## #StackBounty: #normal-distribution #bootstrap #central-limit-theorem Distribution of bootstrap and central limit theorem

### Bounty: 50

Let’s take a simple example: we have 100000 observations, and we want to estimate the mean.

In theory, the distribution of the estimator is approximately normal, by the central limit theorem.

We can also use bootstrapping to estimate the distribution of the mean estimator: we resample many times and obtain an empirical distribution.

Now, my question is: is the normal distribution a good approximation for the bootstrap distribution?
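A quick numerical check (my own sketch, not part of the original question) illustrates the comparison: draw a skewed sample, bootstrap the mean, and compare the bootstrap spread with the CLT standard error $s/\sqrt n$. The population, sample size, and resample count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: 100,000 draws from a skewed (exponential) population.
sample = rng.exponential(scale=2.0, size=100_000)

# Bootstrap distribution of the sample mean: resample with replacement many times.
n_boot = 1_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

# CLT approximation: N(sample mean, s^2 / n).
clt_mean = sample.mean()
clt_sd = sample.std(ddof=1) / np.sqrt(sample.size)

# Compare the bootstrap spread with the CLT standard error.
print(f"bootstrap sd: {boot_means.std(ddof=1):.5f}, CLT se: {clt_sd:.5f}")
```

With this many observations the two spreads agree closely; for small samples from skewed populations the bootstrap distribution can remain visibly skewed where the normal approximation is symmetric.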

Get this bounty!!!

## #StackBounty: #variance #monte-carlo #asymptotics #central-limit-theorem How are the variance of an estimate to $\int_Bf\,{\rm d}\mu$ a…

### Bounty: 50

Let $(E,\mathcal E,\mu)$ be a probability space, $(X_n)_{n\in\mathbb N_0}$ be an $(E,\mathcal E)$-valued ergodic time-homogeneous Markov chain with stationary distribution $\mu$ and $$A_nf:=\frac1n\sum_{i=0}^{n-1}f(X_i)\;\;\;\text{for }f\in\mathcal L^1(\mu)\text{ and }n\in\mathbb N.$$ Suppose we're estimating an integral $\mu f=\int f\,{\rm d}\mu$ for some $f\in\mathcal L^1(\mu)$ and that $$\sqrt n(A_nf-\mu f)\xrightarrow{n\to\infty}\mathcal N(0,\sigma^2(f))\tag1$$ with $$\sigma^2(f)=\lim_{n\to\infty}n\operatorname{Var}[A_nf].$$

Assume that the support $B:=\{f\ne0\}$ is "small", but $\mu(B)>0$. Now let $$f_B:=1_B\left(f-\frac{\mu(1_Bf)}{\mu(B)}\right).$$ Instead of $\sigma^2(f)$ we could consider $\sigma^2(f_B)$, which, by definition, should tell us something about the deviation of $f=1_Bf$ from its mean.

If our concern is minimizing the asymptotic variance of our estimate of $\mu f$, does it make sense to consider $\sigma^2(f_B)$ instead of $\sigma^2(f)$? How are $\sigma^2(f_B)$ and $\sigma^2(f)$ related?
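For intuition, here is a small simulation sketch (my own toy setup, not from the question): an AR(1) chain with standard normal stationary law plays the role of $(X_n)$, $f(x)=x\,1_{\{x>1.5\}}$ has a "small" support $B$, and the asymptotic variances $\sigma^2(f)$ and $\sigma^2(f_B)$ are approximated by the batch-means method. All concrete choices (chain, $f$, threshold, batch count) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ergodic chain: AR(1) with N(0,1) stationary law (hypothetical stand-in).
rho = 0.5
n = 200_000
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

# f supported on the "small" set B = (1.5, inf).
B = x > 1.5
f = np.where(B, x, 0.0)

# f_B := 1_B (f - mu(1_B f)/mu(B)), with the means replaced by empirical ones.
f_B = np.where(B, f - f[B].mean(), 0.0)

def batch_means_var(y, n_batches=200):
    """Batch-means estimate of the asymptotic variance sigma^2 = lim n Var[A_n f]."""
    m = len(y) // n_batches
    means = y[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    return m * means.var(ddof=1)

print("sigma^2(f)   ~", batch_means_var(f))
print("sigma^2(f_B) ~", batch_means_var(f_B))
```

In this toy case $\sigma^2(f_B)$ comes out much smaller than $\sigma^2(f)$, because $f_B$ removes the contribution of the (conditional) mean level on $B$; the two quantities answer different questions, which is the crux of the bounty.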

## Background:

My actual concern is that I need to estimate $\lambda(hf)$ for a variety of $\mathcal E$-measurable $h:E\to[0,\infty)$, where $\lambda$ is a reference measure on $(E,\mathcal E)$ such that $\mu$ has a density $p$ with respect to $\lambda$ and $\{p=0\}\subseteq\{f=0\}$. Now the support $E_0:=\{p>0\}\cap\{h>0\}$ is very "small", and you may assume that $h=1_{\{h>0\}}$. I want to estimate $\lambda(hf)$ using the importance sampling estimator.

Say $(X_n)_{n\in\mathbb N_0}$ is the chain generated by the Metropolis-Hastings algorithm with target distribution $\mu$, $(Y_n)_{n\in\mathbb N}$ is the corresponding proposal sequence, and $Z_n:=(X_{n-1},Y_n)$ for $n\in\mathbb N$. If the proposal kernel is denoted by $Q$, we know that $(Z_n)_{n\in\mathbb N}$ has stationary distribution $\nu:=\mu\otimes Q$.

Now, in light of the special form of the integral $\lambda(hf)$ to be estimated, I thought it might be tempting to consider the Markov chain corresponding to the successive times of its returns to the set $E\times E_0$. To be precise, let $\tau_0:=0$ and $$\tau_k:=\inf\left\{n>\tau_{k-1}:Y_n\in E_0\right\}\;\;\;\text{for }k\in\mathbb N.$$ Assuming that $E\times E_0$ is a recurrent set for $(Z_n)_{n\in\mathbb N}$, we know that $\left(Z_{\tau_k}\right)_{k\in\mathbb N}$ is again an ergodic time-homogeneous Markov chain with stationary distribution $$\nu_0:=\frac{\left.\nu\right|_{E\times E_0}}{\nu(E\times E_0)}.$$
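Mechanically, the return times $\tau_k$ are just the indices at which the proposal lands in $E_0$. A minimal sketch (with a hypothetical i.i.d. stand-in for the proposal sequence and $E_0=[2,\infty)$, both of my own choosing) shows how the subchain indices would be extracted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical proposal sequence (Y_n) on E = R; the "small" set E_0 = [2, inf).
y = rng.normal(size=10_000)
in_E0 = y >= 2.0

# tau_k = inf{n > tau_{k-1} : Y_n in E_0}: the indices where the proposal
# lands in E_0, in increasing order (tau_0 = 0 itself is not included).
tau = np.flatnonzero(in_E0)

# The subchain (Z_{tau_k}) would be read off at these indices.
print(len(tau), "returns to E_0 out of", len(y), "steps")
```

The sketch only illustrates the indexing; for an actual Metropolis-Hastings run one would record $(X_{n-1},Y_n)$ at each step and subsample those pairs at the indices in `tau`.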

I wondered whether it makes sense to build an estimator using $\left(Z_{\tau_k}\right)_{k\in\mathbb N}$ instead of $(Z_n)_{n\in\mathbb N}$, or whether this would gain nothing.

Get this bounty!!!
