#StackBounty: #self-study #variance #sampling #mean Relationship between variance of mean and mean variance

Bounty: 50

In ranked set sampling, we select $n$ random sets, each of size $n$. Then we choose the largest unit from the 1st set, the 2nd largest from the 2nd set, and so on, down to the $n$th largest from the $n$th set. This sampling procedure was first introduced by McIntyre (1952). The reference is "A method for unbiased selective sampling, using ranked sets," Australian Journal of Agricultural Research, 3(4), 385-390. In the Method section (page 2) of this paper, it is written that

The variance of the mean of five quadrats one from each subdistribution is one-fifth of the mean variance of these distributions. This may be contrasted with the variance of the mean of five random samples, that is, one-fifth of the variance of the parent population.

Can anyone please illustrate how the variance of the mean of five quadrats, one from each subdistribution, equals one-fifth of the mean variance of these distributions?

Also, what does the sentence "This may be contrasted with the variance of the mean of five random samples, that is, one-fifth of the variance of the parent population" mean?
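
For orientation, here is the variance algebra behind the first quoted claim, assuming the five selected quadrats are independent draws, one from each subdistribution with variances $\sigma_1^2, \ldots, \sigma_5^2$:

$\displaystyle \mathrm{Var}\!\left(\frac{1}{5}\sum_{i=1}^{5}X_i\right)=\frac{1}{25}\sum_{i=1}^{5}\sigma_i^2=\frac{1}{5}\cdot\frac{1}{5}\sum_{i=1}^{5}\sigma_i^2,$

i.e. one-fifth of the mean of the five subdistribution variances. For five simple random samples from the parent population, each $\sigma_i^2$ is replaced by the parent variance $\sigma^2$, giving the contrasting value $\sigma^2/5$ mentioned in the second sentence.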


Get this bounty!!!

#StackBounty: #bayesian #variance #random-forest Using MCMC to generate a synthetic training set

Bounty: 50

I have a specific question about an important point made in [http://arxiv.org/pdf/1507.06173.pdf, Sec. 4]. To summarize, let’s consider a signal model

$\overrightarrow{R} \sim p(\overrightarrow{R} \vert t)$,

where $\overrightarrow{R} \in \mathbb{R}^n$ collects some noisy sensor measurements and is modeled as a multivariate Gaussian random variable with mean

$\mathbb{E}[\overrightarrow{R} \vert t] = \overrightarrow{\mu}(t)$

and variance

$\mathbb{V}[\overrightarrow{R} \vert t] = \Sigma(\overrightarrow{\mu}(t))$.

The elements of $\overrightarrow{\mu}(t)$ and $\Sigma(\overrightarrow{\mu})$ are (known) nonlinear functions of $t$. The goal is to find an estimator for the unknown $t$. The authors propose to use MCMC to estimate the posterior $p(t \vert \overrightarrow{R})$ given some prior $p(t)$ and then compute the Bayesian mean

$\displaystyle\hat{t}=\int t\, p(t \vert \overrightarrow{R})\, dt$

Since running MCMC would be too slow in a real-time image-processing application, they build a training set and then use a random forest to make predictions, as follows.
First, a value $t_i$ is sampled from the prior, then $overrightarrow{R}_i$ is sampled from the conditional distribution (likelihood):

$t_i \sim p(t)$,

$\overrightarrow{R}_i \sim p(\overrightarrow{R} \vert t_i)$.

Now the Bayesian mean is computed from the posterior:

$\displaystyle\hat{t}_i=\int t\, p(t \vert \overrightarrow{R}_i)\, dt$

This process is repeated to build the training set

$(\overrightarrow{R}_i, \hat{t}_i)\qquad i=0,\ldots,N$.
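
For concreteness, here is a minimal sketch of this construction, assuming a scalar $t$ with a standard normal prior and a toy scalar measurement with a nonlinear mean; the model and all names below are illustrative, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.5                                   # assumed known measurement noise

def mu(t):                                    # toy nonlinear mean function
    return t + 0.3 * t**3

def log_post(t, R):                           # log p(t | R) up to a constant
    return -0.5 * t**2 - 0.5 * ((R - mu(t)) / SIGMA) ** 2

def posterior_mean_mcmc(R, n_steps=5000, step=0.5):
    # Random-walk Metropolis estimate of the Bayesian mean E[t | R]
    t, lp, samples = 0.0, log_post(0.0, R), []
    for _ in range(n_steps):
        prop = t + step * rng.normal()
        lp_prop = log_post(prop, R)
        if np.log(rng.uniform()) < lp_prop - lp:
            t, lp = prop, lp_prop
        samples.append(t)
    return np.mean(samples[n_steps // 2:])    # discard the first half as burn-in

N = 200
t_prior = rng.normal(size=N)                  # t_i ~ p(t)
R = mu(t_prior) + SIGMA * rng.normal(size=N)  # R_i ~ p(R | t_i)
t_hat = np.array([posterior_mean_mcmc(r) for r in R])
training_set = list(zip(R, t_hat))            # (R_i, t_hat_i) pairs for the regressor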

Now to my question: one could as well use the training set

$(\overrightarrow{R}_i, t_i)$

where $t_i$ is the value sampled from the prior, thereby avoiding the MCMC run altogether. According to the authors this would increase the variance of the output of the random forest regression algorithm. Is there a formal way to prove that? In other words, how can I estimate the variance of the output produced by the regression algorithm under the two different training sets?


Get this bounty!!!

#StackBounty: #regression #machine-learning #variance #cross-validation #predictive-models Does $K$-fold CV with $K=N$ (LOO) provide th…

Bounty: 50

TL;DR: It appears that, contrary to oft-repeated advice, leave-one-out cross-validation (LOO-CV) — that is, $K$-fold CV with $K$ (the number of folds) equal to $N$ (the number of training observations) — yields estimates of the generalization error that are the least variable for any $K$, not the most variable, assuming a certain stability condition on either the model/algorithm, the dataset, or both (I'm not sure which, as I don't really understand this stability condition).

  • Can someone clearly explain what exactly this stability condition is?
  • Is it true that linear regression is one such “stable” algorithm, implying that in that context, LOO-CV is strictly the best choice of CV as far as bias and variance of the estimates of generalization error are concerned?

The conventional wisdom is that the choice of $K$ in $K$-fold CV follows a bias-variance tradeoff, such that lower values of $K$ (approaching 2) lead to estimates of the generalization error that have a more pessimistic bias but lower variance, while higher values of $K$ (approaching $N$) lead to estimates that are less biased but have greater variance. The conventional explanation for this phenomenon of variance increasing with $K$ is given perhaps most prominently in The Elements of Statistical Learning (Section 7.10.1):

With K=N, the cross-validation estimator is approximately unbiased for the true (expected) prediction error, but can have high variance because the N “training sets” are so similar to one another.

The implication is that the $N$ validation errors are more highly correlated, so that their sum is more variable. This line of reasoning has been repeated in many answers on this site (e.g., here, here, here, here, here, here, and here) as well as on various blogs, etc. But a detailed analysis is virtually never given; instead there is only an intuition or a brief sketch of what an analysis might look like.

One can, however, find contradictory statements, usually citing a certain “stability” condition that I don’t really understand. For example, this contradictory answer quotes a couple of paragraphs from a 2015 paper which says, among other things, “For models/modeling procedures with low instability, LOO often has the smallest variability” (emphasis added). This paper (section 5.2) seems to agree that LOO represents the least variable choice of $K$ as long as the model/algorithm is “stable.” Taking yet another stance on the issue, there is also this paper (Corollary 2), which says “The variance of $k$ fold cross validation […] does not depend on $k$,” again citing a certain “stability” condition.

The explanation about why LOO might be the most variable $K$-fold CV is intuitive enough, but there is a counter-intuition. The final CV estimate of the mean squared error (MSE) is the mean of the MSE estimates in each fold. So as $K$ increases up to $N$, the CV estimate is the mean of an increasing number of random variables. And we know that the variance of a mean decreases with the number of variables being averaged over. So in order for LOO to be the most variable $K$-fold CV, it would have to be true that the increase in variance due to the increased correlation among the MSE estimates outweighs the decrease in variance due to the greater number of folds being averaged over. And it is not at all obvious that this is true.
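
To put the counter-intuition in symbols: treating the $K$ fold-level MSE estimates as exchangeable, each with variance $\sigma^2_{\text{fold}}$ and common pairwise correlation $\rho$, the variance of their mean is

$\displaystyle \mathrm{Var}\!\left(\frac{1}{K}\sum_{k=1}^{K}\widehat{\mathrm{MSE}}_k\right)=\frac{\sigma^2_{\text{fold}}}{K}+\frac{K-1}{K}\,\rho\,\sigma^2_{\text{fold}},$

so whether the overall variance rises or falls as $K$ grows depends on how $\rho$ and $\sigma^2_{\text{fold}}$ (both of which change with the fold size) move relative to the $1/K$ averaging gain.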

Having become thoroughly confused thinking about all this, I decided to run a little simulation for the linear regression case. I simulated 10,000 datasets with $N$=50 and 3 uncorrelated predictors, each time estimating the generalization error using $K$-fold CV with $K$=2, 5, 10, or 50=$N$. The R code is here. Here are the resulting means and variances of the CV estimates across all 10,000 datasets (in MSE units):

         k = 2 k = 5 k = 10 k = n = 50
mean     1.187 1.108  1.094      1.087
variance 0.094 0.058  0.053      0.051

These results show the expected pattern that higher values of $K$ lead to a less pessimistic bias, but also appear to confirm that the variance of the CV estimates is lowest, not highest, in the LOO case.
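
For readers who want to reproduce the pattern without the linked R code, here is a rough Python re-creation of the simulation just described (scikit-learn, a smaller number of replications to keep the runtime modest, and arbitrary true coefficients):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
N, P, N_SIM = 50, 3, 2000                     # 50 observations, 3 predictors

results = {k: [] for k in (2, 5, 10, N)}
for _ in range(N_SIM):
    X = rng.normal(size=(N, P))               # uncorrelated predictors
    y = X.sum(axis=1) + rng.normal(size=N)    # irreducible error variance 1 (MSE units)
    for k in results:
        cv = KFold(n_splits=k, shuffle=True, random_state=0)
        fold_mse = -cross_val_score(LinearRegression(), X, y,
                                    scoring="neg_mean_squared_error", cv=cv)
        results[k].append(fold_mse.mean())    # CV estimate = mean of the fold MSEs

for k, v in results.items():
    print(f"k = {k:2d}: mean = {np.mean(v):.3f}, variance = {np.var(v):.3f}")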

So it appears that linear regression is one of the “stable” cases mentioned in the papers above, where increasing $K$ is associated with decreasing rather than increasing variance in the CV estimates. But what I still don’t understand is:

  • What precisely is this “stability” condition? Does it apply to models/algorithms, datasets, or both to some extent?
  • Is there an intuitive way to think about this stability?
  • What are other examples of stable and unstable models/algorithms or datasets?
  • Is it relatively safe to assume that most models/algorithms or datasets are “stable” and therefore that $K$ should generally be chosen as high as is computationally feasible?


Get this bounty!!!

#StackBounty: #variance #average "Averaging" variances

Bounty: 50

I need to obtain some sort of “average” among a list of variances, but have trouble coming up with a reasonable solution. There is an interesting discussion about the differences among the three Pythagorean means (arithmetic, geometric, and harmonic) in this thread; however, I still don’t feel any of them would be a good candidate. Any suggestions?

P.S. Some context: these variances are sample variances from $n$ subjects, each of whom went through the same experimental design with roughly the same sample size $k$. In other words, there are $n$ sampling variances $\sigma_1^2$, $\sigma_2^2$, …, $\sigma_n^2$, corresponding to those $n$ subjects. A meta-analysis has already been performed at the population level. The reason I need to obtain some kind of “average” or “summarized” sample variance is that I want to use it to calculate an index such as the ICC after the meta-analysis.
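
For reference, one commonly used summary in this situation, assuming each $\sigma_i^2$ comes from $k_i$ observations, is the degrees-of-freedom-weighted (pooled) variance

$\displaystyle \bar{\sigma}^2=\frac{\sum_{i=1}^{n}(k_i-1)\,\sigma_i^2}{\sum_{i=1}^{n}(k_i-1)},$

which reduces to the plain arithmetic mean of the $\sigma_i^2$ when all $k_i$ are equal; whether it is the right summary for an ICC after the meta-analysis depends on the model being used.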


Get this bounty!!!

#HackerRank: Correlation and Regression Lines solutions

import numpy as np
import scipy as sp
from scipy.stats import norm

Correlation and Regression Lines – A Quick Recap #1

Here are the test scores of 10 students in physics and history:

Physics Scores 15 12 8 8 7 7 7 6 5 3

History Scores 10 25 17 11 13 17 20 13 9 15

Compute Karl Pearson’s coefficient of correlation between these scores. Compute the answer correct to three decimal places.

Output Format

In the text box, enter the floating point/decimal value required. Do not leave any leading or trailing spaces. Your answer may look like: 0.255

This is NOT the actual answer – just the format in which you should provide your answer.

physicsScores = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]
historyScores = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]
# Pearson correlation = off-diagonal entry of the 2x2 correlation matrix
print(np.corrcoef(historyScores, physicsScores)[0][1])
0.144998154581
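
As a cross-check, the same coefficient can be computed directly from the definition r = cov(x, y) / (sd(x) · sd(y)); this is just a small sketch reusing the lists above:

x, y = np.array(physicsScores), np.array(historyScores)
r = ((x - x.mean()) * (y - y.mean())).sum() / np.sqrt(
    ((x - x.mean()) ** 2).sum() * ((y - y.mean()) ** 2).sum())
print(round(r, 3))   # 0.145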

Correlation and Regression Lines – A Quick Recap #2

Here are the test scores of 10 students in physics and history:

Physics Scores 15 12 8 8 7 7 7 6 5 3

History Scores 10 25 17 11 13 17 20 13 9 15

Compute the slope of the line of regression obtained while treating Physics as the independent variable. Compute the answer correct to three decimal places.

Output Format

In the text box, enter the floating point/decimal value required. Do not leave any leading or trailing spaces. Your answer may look like: 0.255

This is NOT the actual answer – just the format in which you should provide your answer.

sp.stats.linregress(physicsScores,historyScores).slope
0.20833333333333331
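
Equivalently, since the slope of the least-squares line of History on Physics is cov(physics, history) / var(physics), a quick manual check:

x, y = np.array(physicsScores), np.array(historyScores)
slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
print(round(slope, 3))   # 0.208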

Correlation and Regression Lines – A Quick Recap #3

Here are the test scores of 10 students in physics and history:

Physics Scores 15 12 8 8 7 7 7 6 5 3

History Scores 10 25 17 11 13 17 20 13 9 15

When a student scores 10 in Physics, what is his probable score in History? Compute the answer correct to one decimal place.

Output Format

In the text box, enter the floating point/decimal value required. Do not leave any leading or trailing spaces. Your answer may look like: 0.255

This is NOT the actual answer – just the format in which you should provide your answer.

def predict(pi, x, y):
    # Least-squares line of y on x, evaluated at x = pi
    slope, intercept, rvalue, pvalue, stderr = sp.stats.linregress(x, y)
    return slope * pi + intercept

predict(10,physicsScores,historyScores)
15.458333333333332

Correlation and Regression Lines – A Quick Recap #4

The two regression lines of a bivariate distribution are:

4x – 5y + 33 = 0 (line of y on x)

20x – 9y – 107 = 0 (line of x on y).

Estimate the value of x when y = 7. Compute the correct answer to one decimal place.

Output Format

In the text box, enter the floating point/decimal value required. Do not leave any leading or trailing spaces. Your answer may look like: 7.2

This is NOT the actual answer – just the format in which you should provide your answer.

'''
    4x - 5y + 33 = 0
    x = ( 5y - 33 ) / 4
    y = ( 4x + 33 ) / 5
    
    20x - 9y - 107 = 0
    x = (9y + 107)/20
    y = (20x - 107)/9
'''
t=7
print( ( 9 * t + 107 ) / 20 )
8.5

Correlation and Regression Lines – A Quick Recap #5

The two regression lines of a bivariate distribution are:

4x – 5y + 33 = 0 (line of y on x)

20x – 9y – 107 = 0 (line of x on y).

Find the variance of y when σx = 3.

Compute the correct answer to one decimal place.

Output Format

In the text box, enter the floating point/decimal value required. Do not leave any leading or trailing spaces. Your answer may look like: 7.2

This is NOT the actual answer – just the format in which you should provide your answer.

http://www.mpkeshari.com/2011/01/19/lines-of-regression/

Q.3. If the two regression lines of a bivariate distribution are 4x – 5y + 33 = 0 and 20x – 9y – 107 = 0,

  • calculate the arithmetic means of x and y respectively;
  • estimate the value of x when y = 7;
  • find the variance of y when σx = 3.

Solution:

We have,

4x - 5y + 33 = 0 => y = 4x/5 + 33/5 ... (i)

And

20x - 9y - 107 = 0 => x = 9y/20 + 107/20 ... (ii)

(i) Solving (i) and (ii), we get mean of x = 13 and mean of y = 17. [Ans.]

(ii) The second line is the line of x on y, so

x = (9/20) × 7 + (107/20) = 170/20 = 8.5 [Ans.]

(iii) byx = r(σy/σx). Here byx = 4/5 and bxy = 9/20, so r = √(byx · bxy) = √{(4/5)(9/20)} = 0.6. Then 4/5 = 0.6 × σy/3, which gives σy = (4/5)(3/0.6) = 4. [Ans.]

Variance of y = σy² = 16.
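
A quick numerical check of the identities used above (r² = byx · bxy and byx = r · σy/σx), in plain Python:

b_yx = 4 / 5                      # slope of the y-on-x line: y = 4x/5 + 33/5
b_xy = 9 / 20                     # slope of the x-on-y line: x = 9y/20 + 107/20
r = (b_yx * b_xy) ** 0.5          # r = 0.6
sigma_x = 3
sigma_y = b_yx * sigma_x / r      # from byx = r * σy/σx  ->  4.0
print(r, sigma_y, sigma_y ** 2)   # approximately 0.6, 4.0, 16.0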

#StackBounty: #hypothesis-testing #variance #heteroscedasticity #breusch-pagan Test of heteroscedasticity for a categorical/ordinal pre…

Bounty: 100

I have different numbers of measurements from various classes. I used a one-way ANOVA to see if the means of the observations in each class differ from the others; this uses the ratio of the between-class variance to the total variance.

Now, I want to test whether some classes (basically those with more observations) have a larger variance than expected by chance. What statistical test should I do? I can calculate the sample variance for each class, and then find the $R^2$ and p-value for the correlation of the sample variance vs. class size. Or in R, I could do

summary(lm(sampleVar ~ classSize))

But the variance of the estimator of the variance (the sample variance) depends on the sample size, even for random data.
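
(Indeed, for normally distributed data a class of size $m$ has $\mathrm{Var}(s^2)=2\sigma^4/(m-1)$, so smaller classes give noisier sample variances even when the true variance is identical across classes.)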

For example, I generate some random data:

library(data.table)
dt <- as.data.table(data.frame(obs = rnorm(4000),
                               clabel = as.factor(sample(x = c(1:200), size = 4000,
                                                         replace = T, prob = 5 + c(1:200)))))

I compute the sample variance and class sizes

dt[,classSize := length(obs),by=clabel]; dt[,sampleVar := var(obs),by=clabel]

and then test to see if variance depends on the class size

summary(lm(data=unique(dt[,.(sampleVar, classSize),by=clabel]),formula = sampleVar ~ classSize))
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 0.858047   0.056605  15.159   <2e-16 ***
classSize   0.006035   0.002393   2.521   0.0125 *  

There seems to be a dependence of the variance on the class size, but this is simply because the variance of the estimator depends on the sample size. How do I construct a statistical test to see whether the variances in the different classes actually depend on the class sizes?

If the variable I was regressing against were continuous instead of the ordinal variable classSize, I could have used the Breusch-Pagan test.

For example, I could do

fit <- lm(data=dt, formula= obs ~ clabel)


Get this bounty!!!

#StackBounty: #variance #ancova Find variables most responsible for variance between groups

Bounty: 100

I have a set of data with continuous features $x_1, x_2, \ldots, x_n$, as well as a continuous $y$ which is some complicated, unknown function of the $x_i$. Each data point, furthermore, has a discrete label (category). I want to somehow quantify which variables $x_i$ are most responsible for the variance of $y$ between the groups.

Below is a simple example. The blue and red dots are in different categories. Clearly most of the variation in $y$ between the two categories is due to $x_2$.

[scatter plot of the example data]

Are there any statistical methods that I can use for this?


Get this bounty!!!