#StackBounty: #regression #hypothesis-testing #statistical-significance #cross-validation How can I use k-fold cross-validation to dete…

Bounty: 100

I have an experiment in which I present a subject with $n$ inputs, $\pmb{x} \in \mathbb{R}^n$. For each input, a response is produced in ~25,000 separate output variables – so for a given output variable $Y_i$, $Y_i \in \mathbb{R}^n$.

For each $Y_i$, $i \in \{1, \dots, 25000\}$, and a function $f$ that maps inputs to features, I need to determine whether a linear regression model can be used to predict $Y_i$ given $f(\pmb{x})$, and if so, calculate the accuracy of this prediction. Prediction accuracy is defined as Pearson's $r$ between the predicted output $\hat{Y}_i$ and the true output $Y_i$.
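For concreteness, the accuracy metric for a single output variable might be computed as in the sketch below (using `scipy.stats.pearsonr`; the function name is mine, not the paper's):

```python
from scipy.stats import pearsonr

def prediction_accuracy(y_true, y_pred):
    """Prediction accuracy for one output variable: Pearson's r
    between the true responses and the model's predictions."""
    r, _ = pearsonr(y_true, y_pred)
    return r
```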

The method given in a paper for this is as follows (a code sketch of the full pipeline appears after the list):

For each $Y_i$:

  1. Split $\pmb{x}$ into $\pmb{x}_{train}$ and $\pmb{x}_{test}$.
  2. Use k-fold cross-validation on $\pmb{x}_{train}$ to determine whether the linear regression model predicts $Y_i$ from $f(\pmb{x}_{train})$ significantly better than chance, using a p threshold of $0.01 / 25000 = 4 \times 10^{-7}$ (to correct for the number of output variables).
  3. If the linear regression model was found to predict better than chance, then calculate the prediction accuracy by training on the entire training set and evaluating on the test set.
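
Putting the three steps together for a single output variable might look like the following sketch. It assumes ordinary least squares via scikit-learn and takes the step 2 significance test as a callable, since that test is exactly what I am unsure about; the names (`evaluate_output`, `significance_test`), the 80/20 split, and $k = 10$ are my own choices, not the paper's:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, train_test_split

ALPHA = 0.01 / 25000  # Bonferroni-corrected threshold, 4e-7

def evaluate_output(features, y, significance_test, k=10, seed=0):
    """Steps 1-3 for one output variable Y_i.

    features: f(x), shape (n_inputs, n_features); y: shape (n_inputs,).
    significance_test: callable taking the array of per-fold Pearson r
    values and returning a p-value (step 2, the open question below).
    """
    # Step 1: hold out a test set.
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, y, test_size=0.2, random_state=seed)

    # Step 2: k-fold CV on the training set, collecting one Pearson r
    # per held-out fold, then test the fold results for significance.
    fold_rs = []
    for tr_idx, va_idx in KFold(n_splits=k, shuffle=True,
                                random_state=seed).split(X_tr):
        model = LinearRegression().fit(X_tr[tr_idx], y_tr[tr_idx])
        fold_rs.append(pearsonr(y_tr[va_idx], model.predict(X_tr[va_idx]))[0])
    if significance_test(np.array(fold_rs)) >= ALPHA:
        return None  # not significantly better than chance

    # Step 3: refit on the whole training set, report accuracy on the test set.
    model = LinearRegression().fit(X_tr, y_tr)
    return pearsonr(y_te, model.predict(X_te))[0]
```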

My issue is with the details of step 2. I understand k-fold cross-validation, but I don't know what test I should use on the results of the cross-validation to determine whether the prediction is better than chance. The paper's exact wording is: “Student’s t test across cross-validated [input]”, but I don’t know exactly what that means here.
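
My best guess (an assumption on my part, not something the paper states) is a one-sample Student's t test across the $k$ per-fold correlations, testing whether their mean is greater than zero:

```python
from scipy.stats import ttest_1samp

def fold_ttest(fold_rs):
    """One-sample Student's t test of the per-fold Pearson r values
    against zero, made one-sided in favour of r > 0 (above-chance
    prediction). A guess at the paper's test, not a quote from it."""
    t, p_two_sided = ttest_1samp(fold_rs, popmean=0.0)
    return p_two_sided / 2.0 if t > 0 else 1.0 - p_two_sided / 2.0
```

If that reading is right, `fold_ttest` could be passed as the `significance_test` callable in the pipeline sketch above. An alternative reading would be a t test on per-fold prediction errors against some chance baseline; I can't tell from the wording which is intended.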

