I would like to know how to implement a t-test for lag composite anomalies.
Suppose I have the following 20 points:
dput(dat)
c(-0.560475646552213, -0.23017748948328, 1.55870831414912, 0.070508391424576, 0.129287735160946, 1.71506498688328, 0.460916205989202, -1.26506123460653, -0.686852851893526, -0.445661970099958, 1.22408179743946, 0.359813827057364, 0.400771450594052, 0.11068271594512, -0.555841134754075, 1.78691313680308, 0.497850478229239, -1.96661715662964, 0.701355901563686, -0.472791407727934)
These 20 data points are anomalies of winds (relative to the 1981-2010 climatologies) from DIFFERENT DATES that exceeded a certain threshold. For example, the first point is for January 2, while the second point is for March 5.
I will use the mean of these anomalies as Lag 0.
I would like to ask how I should implement a t-test to assess the significance of this composite mean.
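For reference, the simplest parametric option would be a one-sample, two-sided t-test of the composite mean against zero (this assumes the 20 anomalies are approximately normal and independent; I am not asserting this is the right test for your data, just sketching it):

```r
# The 20 composite anomalies from the question
dat <- c(-0.560475646552213, -0.23017748948328, 1.55870831414912,
         0.070508391424576, 0.129287735160946, 1.71506498688328,
         0.460916205989202, -1.26506123460653, -0.686852851893526,
         -0.445661970099958, 1.22408179743946, 0.359813827057364,
         0.400771450594052, 0.11068271594512, -0.555841134754075,
         1.78691313680308, 0.497850478229239, -1.96661715662964,
         0.701355901563686, -0.472791407727934)

# One-sample two-sided t-test of H0: mean anomaly = 0
res <- t.test(dat, mu = 0, alternative = "two.sided")
res$p.value
```

If the p-value is below 0.05, the composite mean differs significantly from zero at the 95% level under the t-test's assumptions.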
I was thinking of a resampling approach: build 1000 random composites by selecting 20 dates with replacement, then compare the mean of the original 20 points with the distribution of the 1000 resampled composite means. For a two-tailed test at the 95% confidence level, the original composite mean would be significant if it falls outside the central 95% of the resampled means, i.e. below the 2.5th percentile or above the 97.5th percentile (my original phrasing, "exceeds at least 975 out of 1000", describes only the upper tail).
I am not sure whether this is statistically correct, and would appreciate an opinion.
If the above procedure is correct, any suggestion on how I can implement it in R?
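A minimal sketch of the random-composite test described above. Note that it needs the full anomaly series the event dates were drawn from, which is not in the question; `all_anoms` below is a simulated placeholder for illustration only, so substitute your real daily anomaly series:

```r
# Placeholder for the full anomaly record (assumption: not provided in the
# question); replace with your actual daily wind anomaly series.
set.seed(42)
all_anoms <- rnorm(365 * 30)

# The 20 composite anomalies (Lag 0) from the question
dat <- c(-0.560475646552213, -0.23017748948328, 1.55870831414912,
         0.070508391424576, 0.129287735160946, 1.71506498688328,
         0.460916205989202, -1.26506123460653, -0.686852851893526,
         -0.445661970099958, 1.22408179743946, 0.359813827057364,
         0.400771450594052, 0.11068271594512, -0.555841134754075,
         1.78691313680308, 0.497850478229239, -1.96661715662964,
         0.701355901563686, -0.472791407727934)
obs_mean <- mean(dat)

# Build 1000 random composites of the same size (dates drawn with replacement)
n_boot <- 1000
boot_means <- replicate(n_boot,
                        mean(sample(all_anoms, length(dat), replace = TRUE)))

# Two-tailed test at the 95% level: the observed composite mean is significant
# if it lies outside the central 95% of the random-composite means
ci <- quantile(boot_means, c(0.025, 0.975))
significant <- obs_mean < ci[[1]] || obs_mean > ci[[2]]
significant
```

This is the percentile version of the test; an equivalent formulation computes an empirical two-sided p-value as the fraction of random composite means at least as far from the climatological mean as the observed one.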
I'd appreciate any help.