#StackBounty: #r #confidence-interval Confidence interval for the change of a GAM over a period

Bounty: 50

Similar to Confidence interval for the slope of a GAM, I am fitting a number of GAM models to time series data and want to estimate the change (and its uncertainty) over an interval of time (e.g. from z = 15 to z = 20). I can estimate the instantaneous slope using the derivatives() function of the gratia package. However, I was wondering about estimating the difference in the predicted model between two points in time (e.g. five years apart). Can I just use a z-test? And what about handling the correlation between the two predictions?

library(gratia) # https://github.com/gavinsimpson/gratia/
library(mgcv)
library(ggplot2)
library(dplyr)

set.seed(123)

temp3 <- data.frame(z = seq(0, 20, 1/12)) %>%
  mutate(
    x = rlnorm(n(), 1, 1),
    y = 1 + sin(z*2*pi/20) + x/10 + rnorm(n(), 0, 1)
    )

# ggplot() + geom_point(data = temp3, mapping = aes(x = z, y = y))

mod1 <- gam(y ~ s(z), data = temp3, method = "REML") # on time only
summary(mod1)
# appraise(mod1)
# draw(mod1)
pred1 <- predict(mod1, newdata = temp3, se.fit = TRUE)
resid1 <- mod1$residuals
shap1 <- format(shapiro.test(resid1)$p.value, digits = 2) %>% print()

# mod2 <- gam(y ~ te(z, x), data = temp3, method = "REML") # on time and flow
# mod2 <- gam(y ~ ti(z) + ti(x) + ti(z,x), data = temp3, method = "REML") # on time and flow
mod2 <- gam(y ~ s(z) + s(x), data = temp3, method = "REML") # on time and flow
summary(mod2)
# appraise(mod2)
# draw(mod2)
pred2 <- predict(mod2, newdata = temp3 %>% mutate(x = median(x, na.rm = TRUE)), se.fit = TRUE) # hold flow at its median to remove the flow effect
resid2 <- mod2$residuals
shap2 <- format(shapiro.test(resid2)$p.value, digits = 2) %>% print()

AIC(mod1, mod2)

temp4 <- temp3 %>%
  mutate(
    model1 = pred1$fit,
    lower1 = pred1$fit - 1.96 * pred1$se.fit, # 95% C.I.
    upper1 = pred1$fit + 1.96 * pred1$se.fit,
    model2 = pred2$fit,
    lower2 = pred2$fit - 1.96 * pred2$se.fit, # 95% C.I.
    upper2 = pred2$fit + 1.96 * pred2$se.fit
  )

ggplot(temp4) +
  theme_bw() +
  labs(x = "Year", y = "Concentration", colour = "Legend", fill = "") +
  geom_point(mapping = aes(x = z, y = y, colour = "data")) +
  geom_path(mapping = aes(x = z, y = model1, colour = "trend"), size = 2, alpha = 1) +
  geom_ribbon(mapping = aes(x = z, ymin = lower1, ymax = upper1, fill = "trend"), alpha = 0.3) +
  geom_path(mapping = aes(x = z, y = model2, colour = "trendadj"), size = 2, alpha = 1) +
  geom_ribbon(mapping = aes(x = z, ymin = lower2, ymax = upper2, fill = "trendadj"), alpha = 0.3) +
  scale_colour_brewer(palette = "Set1", aesthetics = c("colour", "fill"))

Created on 2021-03-19 by the reprex package (v1.0.0)
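
One standard way to get the change over an interval (and its uncertainty) from an mgcv fit is the "lpmatrix" approach; the sketch below is my illustration, not part of the original question, and uses mod1 with z = 15 and z = 20 as the two time points. The rows of the prediction matrix map the model coefficients to fitted values, so their difference maps the coefficients to the change over the interval:

newd <- data.frame(z = c(15, 20))
Xp   <- predict(mod1, newdata = newd, type = "lpmatrix")
dvec <- Xp[2, ] - Xp[1, ]                             # contrast: f(20) - f(15)
diff_est <- drop(dvec %*% coef(mod1))                 # estimated change
diff_se  <- sqrt(drop(dvec %*% vcov(mod1) %*% dvec))  # SE of the change
diff_est + c(-1.96, 1.96) * diff_se                   # approximate 95% CI

Because the full coefficient covariance matrix enters the calculation, the correlation between the fitted values at the two time points is handled automatically, which is exactly what a naive z-test on the two separate predictions would miss.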



#StackBounty: #r #anova #confidence-interval #post-hoc #tukey-hsd-test One-way ANOVA followed by Tukey test – confidence intervals

Bounty: 50

I want to test 3 types of treatments (T1, T2, T3) along with a control/placebo treatment (T4). To do so, I'm performing a one-way ANOVA on my data and then proceeding to the Tukey test.

     # generating artificial data for this example
     y <- c(rnorm(10,21,1),rnorm(10,17,2),rnorm(10,15,2),rnorm(10,18,3)) 
     Type <- rep(c("T1","T2","T3","T4"),each=10)  # explanatory variable (categorical)
     data <- data.frame(y,Type)
     data 

     > data
                  y    Type
           1  22.26349   T1
           2  18.95373   T1
           3  20.53918   T1
           4  19.12539   T1
           5  20.21258   T1
           6  19.66778   T1
           7  20.51515   T1
           8  19.66407   T1
           9  21.76689   T1
           10 21.62950   T1
           11 18.59258   T2
           12 21.16842   T2
           13 20.14588   T2
           14 18.36684   T2
           15 16.49939   T2
           16 14.71304   T2
           17 16.53829   T2
           18 15.56384   T2
           19 17.95939   T2
           20 16.62988   T2
           21 16.14444   T3
           22 14.31957   T3
           23 16.42375   T3
           24 14.37554   T3
           25 14.07891   T3
           26 14.48411   T3
           27 13.86916   T3
           28 11.20102   T3
           29 15.65992   T3
           30 14.85290   T3
           31 21.96175   T4
           32 24.26759   T4
           33 16.52458   T4
           34 17.19564   T4
           35 17.63090   T4
           36 17.81682   T4
           37 19.63599   T4
           38 21.06390   T4
           39 12.96356   T4
           40 20.75986   T4

From the summary of my ANOVA model I can see that the p-value is 3.059e-06 < 0.05, which indicates that we can reject the null hypothesis that all group means are equal, i.e., there is a significant difference among the group means:

    > anova.model <- lm(y ~ Type, data = data)
    > anova(anova.model)
    Analysis of Variance Table

    Response: y
              Df Sum Sq Mean Sq F value    Pr(>F)    
    Type       3 189.54  63.179  14.117 3.059e-06 ***
    Residuals 36 161.12   4.475                      
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Another way to get this p-value would be:

        > summary(anova.model)

        Call:
        lm(formula = y ~ Type, data = data)

        Residuals:
        Min      1Q  Median      3Q     Max 
       -6.0185 -1.1301 -0.1111  1.2301  5.2855 

        Coefficients:
                    Estimate Std. Error t value Pr(>|t|)    
       (Intercept)  20.4338     0.6690  30.544  < 2e-16 ***
       TypeT2       -2.8160     0.9461  -2.976  0.00519 ** 
       TypeT3       -5.8928     0.9461  -6.229 3.44e-07 ***
       TypeT4       -1.4517     0.9461  -1.534  0.13367    
       ---
       Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

       Residual standard error: 2.116 on 36 degrees of freedom
       Multiple R-squared:  0.5405, Adjusted R-squared:  0.5022 
       F-statistic: 14.12 on 3 and 36 DF,  p-value: 3.059e-06

Since this only indicates that there is a difference among the groups without telling us which ones are significant, I decided to run a Tukey test:

       > TukeyHSD(aov(y ~ Type, data= data))
       Tukey multiple comparisons of means
       95% family-wise confidence level

       Fit: aov(formula = y ~ Type, data = data)

       $Type
                diff       lwr        upr     p adj
       T2-T1 -2.816021 -5.364065 -0.2679777 0.0255089
       T3-T1 -5.892842 -8.440886 -3.3447987 0.0000020
       T4-T1 -1.451716 -3.999759  1.0963275 0.4281661
       T3-T2 -3.076821 -5.624865 -0.5287776 0.0127180
       T4-T2  1.364305 -1.183738  3.9123487 0.4822404
       T4-T3  4.441126  1.893083  6.9891697 0.0002152

From the adjusted p-values in the Tukey test, one concludes that there is a statistically significant difference between T1 and T2, T1 and T3, T3 and T2, and between T4 and T3.

However, I got some questions regarding this procedure.

  1. What do the Pr(>|t|) values mean when using the summary command?

  2. Are the confidence intervals obtained in the Tukey test referring to the sample or the population? For example, for the confidence interval between T3 and T4, I got $(1.893083,\ 6.9891697)$. What does it literally mean? Does it mean that $95\%$ of the mean differences between T3 and T4 in the sample are in that range of values, or does it say that for the population?

  3. How can I get a confidence interval for each one of the treatments? For example, a $95\%$ CI for T1, i.e., a CI that says that people under treatment 1 will have their values in that interval (see the sketch below).
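
Regarding question 3, a minimal sketch of one common approach (my illustration, not part of the question): refit the model without an intercept so that each coefficient is a treatment mean, then ask confint() for the intervals. Note that these are confidence intervals for the group means; an interval for the values of individual people is a prediction interval, which predict() can produce instead.

     # 95% CIs for each treatment mean (cell-means coding: no intercept)
     means.model <- lm(y ~ Type + 0, data = data)
     confint(means.model)

     # 95% prediction intervals for a new individual under each treatment
     new <- data.frame(Type = c("T1", "T2", "T3", "T4"))
     predict(means.model, newdata = new, interval = "prediction")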



#StackBounty: #confidence-interval #variance #relative-risk Confidence interval for relative risk with uncertain incidences

Bounty: 50

The usual suggestion for computing a confidence interval for a relative risk estimate is to start from the variance:

$$
\text{CI}_R = R \pm z \cdot \exp\sqrt{\sigma^2_{\log R}}
$$

where $R$ is the risk estimate, $z$ is the z-score for the desired confidence level, and
$$
\sigma^2_{\log R} = \frac{|I| - A_I}{|I| \cdot A_I} + \frac{|C| - A_C}{|C| \cdot A_C}
$$

where $|I|$ and $|C|$ are the sizes of the intervention and control groups, and $A_I$ and $A_C$ are the numbers of affected individuals in each group.

But suppose that $A_I$ and $A_C$ themselves are uncertain (e.g. because they are predictions of a model). We have values for $\sigma^2_{A_I}$ and $\sigma^2_{A_C}$ and we want to incorporate those uncertainties into the confidence interval. Assuming incidence rates in each group are independent, we can compute the variance of the variance as

$$
\sigma^2\!\left[\sigma^2_{\log R}\right] = \frac{\partial^2 \sigma^2_{\log R}}{\partial A_I^{\,2}}\, \sigma^2_{A_I} + \frac{\partial^2 \sigma^2_{\log R}}{\partial A_C^{\,2}}\, \sigma^2_{A_C} = \frac{2\,\sigma^2_{A_I}}{A_I^{\,3}} + \frac{2\,\sigma^2_{A_C}}{A_C^{\,3}}
$$

And this is where I get stuck: I don’t know how to combine $\sigma^2_{\log R}$ with $\sigma^2\!\left[\sigma^2_{\log R}\right]$ to get something to put into the original equation.

Any advice?
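
One pragmatic route, sketched below, is to sidestep the analytical combination entirely and propagate the uncertainty by simulation: draw values of $A_I$ and $A_C$ from their predictive distributions, add the sampling noise implied by $\sigma^2_{\log R}$ for each draw, and read the interval off the simulated distribution of $\log R$. All numbers and the normality assumptions below are illustrative, not from the question:

# Sketch: Monte Carlo propagation of the uncertainty in A_I and A_C
nI <- 1000; nC <- 1000   # group sizes |I| and |C| (made up)
AI <- 50;  AC <- 80      # predicted numbers of affected individuals (made up)
sdAI <- 5; sdAC <- 8     # sd of the model's predictions (made up)

set.seed(1)
B  <- 10000
ai <- rnorm(B, AI, sdAI)                            # draws for A_I
ac <- rnorm(B, AC, sdAC)                            # draws for A_C
logR <- log((ai / nI) / (ac / nC))                  # log relative risk per draw
v <- (nI - ai) / (nI * ai) + (nC - ac) / (nC * ac)  # sampling variance per draw
logR_total <- logR + rnorm(B, 0, sqrt(v))           # add sampling noise on top
exp(quantile(logR_total, c(0.025, 0.975)))          # 95% interval for R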



#StackBounty: #confidence-interval #standard-deviation #experiment-design #coefficient-of-variation #harmonic-mean Sample of rates: mea…

Bounty: 50

A performance test of a software application measures the maximum rate (operations per second) which the application can handle. The test is repeated multiple times, each iteration yielding the determined maximum rate, so we have a sample of rates to analyze statistically. The analysis should check whether the result is usable (maximum dispersion as per the CoV, minimum confidence as per the size of a bootstrapped confidence interval), and the results should be summarized in a single number. The sample size is rather small (<= 10).

  1. AFAIK the harmonic mean should be used to summarize such rates instead of the arithmetic mean, or am I wrong?
  2. Do I also have to use variants of other statistical measures, given that the sample consists of rates? That is, are specialized formulas for the standard deviation and the coefficient of variation needed, different from the commonly used ones? The common formulas for the SD and the CoV use the arithmetic mean (expected value); would I have to replace the arithmetic mean in these cases? I found this question, which is why I think specialized approaches are needed, but I have much too little knowledge of statistics to judge my specific case. (A sketch of the bootstrap mechanics follows this list.)
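
On the mechanics of point 1 and the bootstrapped interval, a minimal sketch, with made-up sample values and the harmonic mean assumed as the chosen summary:

# Sketch: harmonic mean of a small sample of rates + percentile bootstrap CI
rates <- c(1210, 1185, 1232, 1198, 1221, 1189, 1204)  # ops/sec (made up)

hmean <- function(x) length(x) / sum(1 / x)

set.seed(1)
boot_hm <- replicate(10000, hmean(sample(rates, replace = TRUE)))
hmean(rates)                        # point estimate
quantile(boot_hm, c(0.025, 0.975))  # bootstrapped 95% CI
sd(rates) / mean(rates)             # plain CoV, for comparison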



#StackBounty: #confidence-interval #proportion #error-propagation How do I calculate error margins for difference in proportions?

Bounty: 50

I have data that captures responses to stimuli and categorises both the cue stimuli and response stimuli for both male and female, as follows:

| Cue Stimulus | Response Stimulus | Gender | Cue Count | Cue Proportion | Response Count | Response Proportion |
|---|---|---|---|---|---|---|
| A | A | F | 5,365 | 6.74% | 7,565 | 9.5% |
| A | A | M | 3,588 | 4.32% | 6,102 | 10.40% |
| B | B | F | 3,443 | 6.74% | 4,669 | 5.86% |
| B | B | M | 2,598 | 4.43% | 3,177 | 5.42% |

This means that Females have been given cue Stimulus category A 5,365 times (6.74% of female cues), and Females have responded with Stimulus category A 7,565 times (9.5% of female responses). The overall number of cue-response pairs is $79{,}625$ (F) and $58{,}645$ (M). The responses were given by $37{,}188$ female and $24{,}732$ male participants.

I want to understand if Females or Males have tendencies to respond more frequently in any of the categories.

I have attempted to calculate an error margin for each row based on another question:

$$
se = \sqrt{\frac{pq}{n}}
$$

Would the margin of error for a 95% confidence interval ($z = 1.96$) be as follows, for the first row of data?

$$
E = \pm 1.96 \times \sqrt{0.095 \times (1 - 0.095) / 79{,}625} = \pm 0.2\%
$$

I then calculated the difference between the cue proportions and the response proportions. E.g. the first line difference would be $9.5\% - 6.74\% = 2.76\%$. Is it valid to use the error margin obtained above for this result? E.g. $2.76\% \pm 0.2\%$
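
For reference, here is a sketch of the usual large-sample margin for the difference of two proportions treated as independent; whether independence actually holds here (the cue and response proportions come from the same pairs) is precisely what determines if this margin is valid:

# Sketch: margin of error for a difference of two proportions (first row)
n  <- 79625   # female cue-response pairs
p1 <- 0.095   # response proportion for category A
p2 <- 0.0674  # cue proportion for category A

se_diff <- sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
(p1 - p2) + c(-1.96, 1.96) * se_diff  # approximate 95% CI for the difference

Note that this combined margin is wider than the single-proportion margin computed above, because both proportions contribute variance.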



#StackBounty: #hypothesis-testing #statistical-significance #confidence-interval #confidence How to Find Confidence Level of Results

Bounty: 100

I am curious to find out the confidence level of results. It’s best to give an example of what I mean because I struggle to speak in technical terms.

Example:
Jack runs an advertisement that 1000 people see. 10 of those people convert and actually purchase something. Jack is sure his conversion rate is 1%.

Sally disagrees. She thinks Jack doesn’t have enough statistical confidence in this result. What is the confidence level of 1% being the result?

A formula to get that would be great.

EDIT:
I’m trying to add a statistical element to my marketing for my business. I plan to run a lot of advertisements so am trying to find a way to understand how many results I need to collect before I can prove or disprove a hypothesis I am testing.

Let’s say for example my hypothesis is that running a certain ad will result in a conversion rate of 3% of people clicking the ad overtime.

For now, let’s say I have just started the experiment and, out of 1000 people, 30 did click it. So it seems that the conversion rate (or the probability of a person clicking the ad) is 3% (0.03 as a probability).

However, I shouldn’t trust these results because the sample size is too small. Instead I want to know: what is the probability that 3% (plus or minus a margin of error) is indeed the conversion rate?

I thought I’d have to use confidence intervals, but now I’m thinking I should probably use something that involves variance. I’ll do some more research and update this when I find more. If you have any thoughts in the meantime on how to solve this problem, please let me know. Much obliged.
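
For the frequentist version of this question (an interval for the conversion rate given the data, rather than "the probability that 3% is the rate"), a minimal sketch in R:

# Sketch: 95% CIs for a conversion rate of 30 out of 1000
binom.test(30, 1000)$conf.int  # exact (Clopper-Pearson) interval
prop.test(30, 1000)$conf.int   # normal approximation, continuity-corrected

Both intervals come out at roughly 2% to 4.2% here; as the number of impressions grows, the interval tightens around the observed rate, which gives a concrete rule for deciding how much data to collect before trusting an estimate.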



#StackBounty: #confidence-interval #weighted-mean Better confidence intervals for weighted average

Bounty: 200

Suppose I have a large sequence of size $M$ which contains $K$ unique items, where item $k$ occurs with unknown probability $\pi_k$. I can choose to measure its quality, $x_k$, which is constant for a given item $k$.

My goal is to estimate the average quality (i.e., the true weighted average, as well as a CI around it):

$$\sum_{k=1}^K \pi_k x_k$$

One plan is to get a uniform sample of items $J$ from this sequence and compute the average over the sampled items (since item $k$ is sampled with probability $\pi_k$):

$$\frac{1}{|J|} \sum_{j \in J} x_j$$

and estimate the variance of the estimator using the usual CLT-based approach.

Suppose, however, it’s also easy to compute the total number of times each item occurs, $(n_1, \dots, n_K)$. Can I use this information to produce estimates with smaller confidence intervals?

Not to bias the potential answers, but I feel like it should be possible, since I will have more information about $\pi$ and should therefore be able to apply some sort of variance-reduction technique.

Also, to work through a specific example, I’ve been using the following distribution, which mimics my actual use case.

import numpy as np

# Suppose we have K unique items
K=10000
freq = np.array([K/(i+100) for i in range(K)])
true_pi = freq / sum(freq)
true_x = np.array([.8 - .4*i/K for i in range(K)])
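
For the baseline plan, a sketch of the uniform-sampling estimator with its CLT interval, written in R to stay in one language with the other examples in this digest (the setup mirrors the numpy snippet above):

# Sketch: baseline estimator with a CLT confidence interval
K <- 10000
freq    <- K / (99 + seq_len(K))        # same as K/(i+100) for i = 0..K-1
true_pi <- freq / sum(freq)
true_x  <- 0.8 - 0.4 * (seq_len(K) - 1) / K

set.seed(1)
n  <- 1000
j  <- sample.int(K, n, replace = TRUE, prob = true_pi)  # uniform draw from the sequence
xs <- true_x[j]
est <- mean(xs)
est + c(-1.96, 1.96) * sd(xs) / sqrt(n)  # approximate 95% CI
sum(true_pi * true_x)                    # true value, for checking coverage

Any scheme that exploits the counts $(n_1, \dots, n_K)$ would need to beat the width of this interval to be worthwhile.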



#StackBounty: #confidence-interval #maximum-likelihood #quantiles #uncertainty #error-propagation Uncertainty propagation for the solut…

Bounty: 50

I have a dataset and I use maximum likelihood estimation to estimate the parameters of a Weibull distribution. MLE theory provides theoretical confidence intervals for these parameters (asymptotic, or for $n$ samples).

Then, I use the fitted Weibull distribution in an expression which is currently optimised numerically:

$Y(t_0) = h(t_0) \cdot \int_{0}^{t_0} S(t)\, dt + S(t_0)$

where $t_0$ is unknown and $h$ and $S$ are the hazard function and the survival function of the distribution, and are therefore functions of the parameters.

I would like to propagate the uncertainty in the fitted Weibull parameters to estimate confidence intervals or quantiles for $Y(t_0)$. How could I do that (numerically or analytically)? Thanks!
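
One common numerical route, sketched below under explicit assumptions (the data are made up, the fit uses MASS::fitdistr, and $t_0$ is treated as a fixed value returned by the numerical optimisation): draw parameter vectors from the asymptotic normal distribution of the MLE, evaluate $Y(t_0)$ for each draw, and take quantiles of the result.

# Sketch: propagate Weibull MLE uncertainty to Y(t0) by simulation
library(MASS)

set.seed(1)
x   <- rweibull(200, shape = 1.5, scale = 10)  # placeholder data
fit <- fitdistr(x, "weibull")                  # MLE: $estimate and $vcov

# Y(t0) = h(t0) * integral_0^t0 S(t) dt + S(t0), as in the question
Yfun <- function(t0, shape, scale) {
  S <- function(t) pweibull(t, shape, scale, lower.tail = FALSE)
  h <- function(t) dweibull(t, shape, scale) / S(t)
  h(t0) * integrate(S, 0, t0)$value + S(t0)
}

t0     <- 12  # placeholder: whatever the numerical optimisation returned
draws  <- mvrnorm(5000, mu = fit$estimate, Sigma = fit$vcov)
Y_star <- apply(draws, 1, function(p) Yfun(t0, shape = p[1], scale = p[2]))
quantile(Y_star, c(0.025, 0.975))  # approximate 95% interval for Y(t0)

A fully nonparametric alternative is to bootstrap the original sample and refit for each resample, which drops the asymptotic-normality assumption at a higher computational cost.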



#StackBounty: #confidence-interval #poisson-distribution #bernoulli-process confidence intervals for the Poisson process (lambda) samp…

Bounty: 50

Say I have a Poisson process which was measured $N$ times, with each measurement producing a value $k_i$.
Also, the $k_i$ are events that I have to detect, and my detection probability is $p$. In fact, I detect $\widetilde{k}_i$, which relates to $k_i$ via $p$.

Knowing the number of detected events $\widetilde{k}_i$, the constant probability $p$, and the constant measurement time $T$, how do I find the confidence interval for the estimate of the Poisson $\lambda$?
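
A sketch of one standard route, under the assumption that detection acts as independent thinning (so each detected count is itself Poisson with rate $p \lambda T$): pool the detected events, form the exact chi-square (Garwood) interval for the Poisson total, and rescale by $N p T$. The counts and $p$ below are made up:

# Sketch: exact CI for lambda from thinned Poisson counts
k_tilde <- c(12, 9, 15, 11, 8)  # detected counts per measurement (made up)
N  <- length(k_tilde)
p  <- 0.7                       # detection probability (made up)
Tm <- 1                         # measurement time T (made up)

tot        <- sum(k_tilde)      # total ~ Poisson(N * p * lambda * Tm)
lambda_hat <- tot / (N * p * Tm)

# exact 95% interval for the Poisson mean of the total, rescaled to lambda
ci <- c(qchisq(0.025, 2 * tot) / 2,
        qchisq(0.975, 2 * (tot + 1)) / 2) / (N * p * Tm)
lambda_hat
ci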

