Interpreting R Bootstrapping Output: Confidence Intervals

I am trying to understand bootstrapping in R using the boot package by running a simple Spearman rank correlation. I have some code based on a tutorial I found online, but I am having trouble interpreting the output. The code is below:
*Note: these data are just random numbers I used to help me learn how to run the boot function. They do not represent anything.
library(boot)
test_a = data.frame(v1 = c(1,5,8,3,2,9,5,10,3,5), v2 = c(3,4,7,2,1,10,3,8,8,2))
attach(test_a)
cor.test(v1, v2, method = "spearman")
function_2 = function(test_a, i) {
  d2 = test_a[i, ]
  return(cor(d2$v1, d2$v2, method = "spearman"))
}
set.seed(1)
test_boot = boot(test_a, function_2, R = 1000)
test_boot
I get the following output:
boot(data = test_a, statistic = function_2, R = 1000)

Bootstrap Statistics :
     original        bias    std. error
t1*  0.6397639  -0.04253283   0.2547429
Which all makes sense to me. But my confusion is with the boot.ci function:
ci = boot.ci(test_boot, conf=0.95)
I get the following output:
> ci
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 1000 bootstrap replicates

CALL :
boot.ci(boot.out = test_boot, conf = 0.95)

Intervals :
Level      Normal              Basic
95%   ( 0.1830,  1.1816 )   ( 0.3173,  1.2987 )

Level     Percentile            BCa
95%   (-0.0192,  0.9622 )   (-0.1440,  0.9497 )
Calculations and Intervals on Original Scale
And this is where I am a bit lost. I can't really find a source that explains these intervals in layman's terms in the context of a correlation coefficient. Obviously a correlation cannot exceed 1.0, yet at least two of these methods produce confidence intervals that go above 1. The sources that discuss these different confidence intervals have frankly been a bit confusing. Is any one of them better suited to certain parameters than the others? It is also possible that I am completely misinterpreting what I am doing.
I also include the results of plot(test_boot) for your reference.
The eventual goal (with actual data), once I am more confident running and interpreting bootstrap results, is to run tests for trends with time (the Mann-Kendall trend test and the Theil-Sen slope estimator; I cannot use parametric statistics with my data :/) and to compare my observed dataset with bootstrapped samples.
Any help would really be appreciated. Thank you in advance!

The two top intervals are not based on quantiles of the bootstrap replicates. The normal interval uses the bootstrap only to estimate bias and standard error, then builds a symmetric interval around the bias-corrected estimate; the basic interval reflects the percentile limits around the observed statistic. Neither construction is constrained to the theoretical bounds of the statistic, which is why they can extend past 1. The bottom two intervals are percentile-type intervals (the first is a raw percentile interval and the second is a bias-corrected, accelerated interval). These identify particular values of the bootstrap replicates as the CI endpoints, so they will always respect the theoretical bounds of the statistic being bootstrapped.
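You can see the mechanics by rebuilding each interval from the replicates stored in the boot object (a sketch; boot.ci itself uses interpolated quantiles, so the numbers will differ slightly):

t0 <- test_boot$t0   # original statistic
tb <- test_boot$t    # the 1000 bootstrap replicates

# Normal: symmetric around the bias-corrected estimate, ignores the bounds
(t0 - (mean(tb) - t0)) + c(-1, 1) * 1.96 * sd(tb)

# Basic: percentile limits reflected around t0, can also leave the bounds
2 * t0 - quantile(tb, c(0.975, 0.025))

# Percentile: actual quantiles of the replicates, so always within [-1, 1]
quantile(tb, c(0.025, 0.975))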

Related

How does the R Survey package compute bootstrap confidence intervals?

I'm wondering how to use replication weights and the confint implementation of the survey package to construct bootstrap confidence intervals/standard errors.
Looking at the survey package's implementation of confint, it seems it simply takes the standard error of the theta list generated after replication and multiplies it by the quantile corresponding to a given alpha level.
But that doesn't really correspond to any bootstrap implementation I'm aware of. Typically you would instead use percentiles of the distribution of theta to get a confidence interval for the sample mean. The t and BCa intervals are another matter.
Here is my R code. I have not used the provided weights, instead letting the repweights be generated as "sample with replacement" weights with equal probability.
library(survey)
library(dplyr)
data(api)
d <- apiclus1 %>% select(fpc, dnum, api99)
dclus1 <- svydesign(id = ~dnum, data = d, fpc = ~fpc)
rclus1 <- as.svrepdesign(dclus1, type = "bootstrap", replicates = 100)
To test the confidence intervals, we can use:
test_mean <- svymean(~api99, rclus1)
confint(test_mean, df=degf(rclus1))
confint(test_mean, df=degf(rclus1)) - mean(d$api99)
Which gives:
          2.5 %    97.5 %
api99  554.2971  659.6592

          2.5 %    97.5 %
api99 -52.68107  52.68107
So clearly the interval is symmetric, which defeats some of the purpose of using the bootstrap.
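If that reading is right, the symmetric interval can be reproduced by hand from the replicate-based standard error (a sketch using the standard survey accessors coef(), SE(), and degf()):

# Estimate +/- t-quantile * SE, matching the confint() output above
coef(test_mean) + c(-1, 1) * qt(0.975, df = degf(rclus1)) * SE(test_mean)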
So let's try this:
test_bs <- withReplicates(rclus1, function(w, data) weighted.mean(data$api99, w), return.replicates = TRUE)
This will compute the statistic on each set of replicate weights (which I assume are with-replacement weights). Here are the BCa intervals on the replicates:
bca(test_bs$replicates) - mean(d$api99)
-43.2878  49.1148
Clearly not symmetric.
Using percentile intervals:
c(quantile(test_bs$replicates, 0.025), quantile(test_bs$replicates, 0.975)) - mean(d$api99)
     2.5%     97.5%
-45.50944  48.06085
Valliant implements percentile intervals this way, which should be equivalent to my percentile intervals:
smho.boot.a <- as.svrepdesign(design = smho.dsgn,
                              type = "subbootstrap",
                              replicates = 500)
# total & CI for EOYCNT based on RW bootstrap
a1 <- svytotal(~EOYCNT, design = smho.boot.a,
               na.rm = TRUE,
               return.replicates = TRUE)
# Compute CI based on bootstrap percentile method.
ta1 <- quantile(a1$replicates, c(0.025, 0.975))
I'm looking for clarification on:
a. how to construct bootstrap CIs with the survey package for statistics of interest, and
b. whether my withReplicates implementation of the bootstrap is correct.
The percentile stuff doesn't actually work with multistage survey data (or at least it isn't known to work, at least not to me). Survey bootstraps just estimate the variance. You don't get the higher-order accuracy for smooth functions that way, but you do get asymptotically the right variance. To get asymmetric intervals you need to bootstrap some suitable function of the parameter, as svyciprop and svyquantile (and in a sense svyglm) do.
If you assume simple random sampling with replacement of clusters, then you could make the percentile bootstrap and its extensions work, but that's not a common structure for real-world surveys (and it doesn't really require the survey package).
I'm sure the maintainer of the package would be happy to implement a bootstrap that gave better asymmetric intervals and worked for multistage samples, if someone pointed out suitable references to him.
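For example, svyciprop gives an asymmetric interval that respects the [0, 1] bounds by working on a transformed scale (a sketch against the rclus1 design from the question; the proportion I(api99 > 600) is just an illustrative choice):

# CI computed on the logit scale and back-transformed, hence asymmetric
svyciprop(~I(api99 > 600), rclus1, method = "logit")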

99% confidence interval, proportion

Maybe a dumb question, but I just started, so thanks for any help.
I created a 99% confidence interval for a proportion, but I'm not sure it is correct. How can I check it? (When calculating a confidence interval for a mean, we use the t-score, and we can verify the result using the t.test function and the degrees of freedom.)
Is there a similar function for z-based intervals for proportions, or can I do the same thing using t.test?
There are a number of functions in R for computing confidence intervals for proportions. Personally, I like to use the binomCI function from the MKinfer package:
install.packages("MKinfer")
library(MKinfer)
x <- 50 # The number of "successes" in your data
n <- 100 # The number of observations in your data
binomCI(x, n, conf.level = 0.99, method = "wald")
Note, however, that the so-called Wald interval (the one presented in most introductory statistics texts, and probably the one you computed) is usually a poor choice. The binomCI help page lists several better alternatives.
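For instance (a sketch; the method names below are among those documented for binomCI, and base R's prop.test is shown for comparison):

binomCI(x, n, conf.level = 0.99, method = "wilson")
binomCI(x, n, conf.level = 0.99, method = "agresti-coull")
# Base R alternative: a Wilson-type interval with continuity correction
prop.test(x, n, conf.level = 0.99)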

Generate beta-binomial distribution from existing vector

Is it possible to/how can I generate a beta-binomial distribution from an existing vector?
My ultimate goal is to generate a beta-binomial distribution from the below data and then obtain the 95% confidence interval for this distribution.
My data are body condition scores recorded by a veterinarian. The scores range from 0 to 5 in increments of 0.5. It has been suggested to me that my data follow a beta-binomial distribution: discrete values with a restricted range.
set1 <- as.data.frame(c(3,3,2.5,2.5,4.5,3,2,4,3,3.5,3.5,2.5,3,3,3.5,3,3,4,3.5,3.5,4,3.5,3.5,4,3.5))
colnames(set1) <- "numbers"
I see that there are multiple functions that appear to be able to do this, betabinomial() in VGAM and rbetabinom() in emdbook, but my stats and coding knowledge is not yet sufficient to understand and implement the instructions on the function help pages, at least not in a way that has served my intended purpose yet.
We can look at the distribution of your variable; the y-axis is the probability:
x1 = set1$numbers * 2
h = hist(x1, breaks = seq(0, 10))
bp = barplot(h$counts / length(x1), names.arg = (h$mids + 0.5) / 2, ylim = c(0, 0.35))
You can try to fit it, but you have too few data points to estimate the 3 parameters needed for a beta-binomial. Hence I fix the probability so that the mean is the mean of your scores, and looking at the distribution above that seems ok:
library(bbmle)
library(emdbook)
library(MASS)
mtmp <- function(prob, size, theta) {
  -sum(dbetabinom(x1, prob, size, theta, log = TRUE))
}
m0 <- mle2(mtmp, start = list(theta = 100),
           data = list(size = 10, prob = mean(x1)/10),
           control = list(maxit = 1000))
THETA = coef(m0)[1]
We can also use a normal distribution:
normal_fit = fitdistr(x1,"normal")
MEAN=normal_fit$estimate[1]
SD=normal_fit$estimate[2]
Plot both of them:
lines(bp[,1], dbetabinom(1:10, size = 10, prob = mean(x1)/10, theta = THETA),
      col = "blue", lwd = 2)
lines(bp[,1], dnorm(1:10, MEAN, SD), col = "orange", lwd = 2)
legend("topleft", c("normal", "betabinomial"), fill = c("orange", "blue"))
I think you are actually fine with using a normal approximation, in which case the estimates are:
normal_fit$estimate
    mean        sd
6.560000  1.134196
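A rough 95% interval can then be read off the fitted normal and converted back to the original 0-5 scale (a sketch; note this is a coverage interval for individual scores, not a confidence interval for the mean):

# 95% normal interval on the doubled scale, then halved back to 0-5
(MEAN + c(-1, 1) * 1.96 * SD) / 2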

Bootstrap failed using mixed model in lme4 package

I want to use the bootMer() function of the lme4 package with a linear mixed model, and boot.ci to get 95% CIs by parametric bootstrapping, and I have been getting warnings of the type "In bootMer(object, bootFun, nsim = nsim, ...) : some bootstrap runs failed (30/100)".
My code is:
> lmer(LLA ~ 1 +(1|PopID/FamID), data=fp1) -> LLA
> LLA.boot <- bootMer(LLA, qst, nsim=999, use.u=F, type="parametric")
Warning message:
In bootMer(LLA, qst, nsim = 999, use.u = F, type = "parametric") :
some bootstrap runs failed (3/999)
> boot.ci(LLA.boot, type=c("norm", "basic", "perc"))
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 996 bootstrap replicates

CALL :
boot.ci(boot.out = LLA.boot, type = c("norm", "basic", "perc"))

Intervals :
Level      Normal              Basic             Percentile
95%   (-0.2424,  1.0637 )  (-0.1861,  0.8139 )  ( 0.0000,  1.0000 )
Calculations and Intervals on Original Scale
My questions are: why does the bootstrap fail for a few runs? And why does the 95% confidence interval estimated by boot.ci include negative values, even though there are no negative values among the bootstrap replicates?
The result of plot(LLA.boot):
It's not surprising, for a slightly difficult or unstable model, that a few parametric bootstrap runs fail to converge for numerical reasons. You should be able to retrieve the specific error messages via attr(LLA.boot, "boot.fail.msgs") (this really should be documented, but isn't ...). In general I wouldn't worry about it too much if the failure fraction is very small, as it is here; if it were large (say >5-10%) I would revisit the data and model to see whether something else was wrong and manifesting itself this way.
As for the confidence intervals: the "norm" method uses a bias-corrected Normal approximation, and the "basic" method reflects the bootstrap percentiles around the observed statistic, so it's not surprising that those intervals go beyond the range of the computed values. Since your function is
Qst <- function(x) {
  uu <- unlist(VarCorr(x))
  uu[2] / (uu[3] + uu[2])
}
its possible range is from 0 to 1, and your percentile bootstrap CI shows this range is attained. If your model were perfectly uninformative, the distribution of Qst would be uniform (mean=0.5, sd=sqrt(1/12)=0.288) and the Normal approximation to the CI would be
> 0.5+c(-1,1)*1.96*sqrt(1/12)
[1] -0.06580326 1.06580326
The upper end is about in the same place as your Normal CI, but your lower limit is even smaller, suggesting that there may even be some bimodality in the sampling distribution of your estimate (this is confirmed by the distribution plot you posted). In any case, I suspect the bottom line is that your confidence intervals (however computed) are so wide that they tell you your data provide almost no practical information about the value of Qst. In particular, it looks like the majority of your bootstrap replicates are finding singular fits, in which one or the other of the variances is estimated as zero. I'm guessing your data set is just not large enough to estimate these variances very precisely.
For more information on how the Normal and bias-corrected Normal approximations are computed, see boot:::basic.ci and boot:::norm.ci or chapter 5 of Davison and Hinkley as cited in ?boot.ci.
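Both points can be checked directly from the boot object (a sketch; it assumes failed runs appear as NA among the replicates in LLA.boot$t):

# The "norm" interval is (t0 - bias) +/- 1.96 * SE of the replicates,
# so it can dip below 0 even though every replicate lies in [0, 1]
bias <- mean(LLA.boot$t, na.rm = TRUE) - LLA.boot$t0
(LLA.boot$t0 - bias) + c(-1, 1) * 1.96 * sd(LLA.boot$t, na.rm = TRUE)

# Fraction of replicates on the boundary, i.e. singular fits where one
# variance component collapsed to zero
mean(LLA.boot$t %in% c(0, 1))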

adjusted bootstrap confidence intervals (BCa) with parametric bootstrap in boot package

I am attempting to use boot.ci from R's boot package to calculate bias- and skew-corrected bootstrap confidence intervals from a parametric bootstrap. From my reading of the man pages and experimentation, I've concluded that I have to compute the jackknife estimates myself and feed them into boot.ci, but this isn't stated explicitly anywhere. I haven't been able to find other documentation, although to be fair I haven't looked at the original Davison and Hinkley book on which the code is based ...
If I naively run b1 <- boot(..., sim = "parametric") and then boot.ci(b1), I get the error "influence values cannot be found from a parametric bootstrap". This error occurs if and only if I specify type="all" or type="bca"; empinf(b1) fails the same way. The only way I can get things to work is to explicitly compute jackknife estimates (using empinf() with the data argument) and feed these into boot.ci.
Construct data:
set.seed(101)
d <- data.frame(x=1:20,y=runif(20))
m1 <- lm(y~x,data=d)
Bootstrap:
b1 <- boot(d$y,
           statistic = function(yb, ...) {
             coef(update(m1, data = transform(d, y = yb)))
           },
           R = 1000,
           ran.gen = function(d, m) {
             unlist(simulate(m))
           },
           mle = m1,
           sim = "parametric")
Fine so far.
boot.ci(b1)
boot.ci(b1,type="bca")
empinf(b1)
all give the error described above.
This works:
L <- empinf(data = d$y, type = "jack",
            stype = "i",
            statistic = function(y, f) {
              coef(update(m1, data = d[f, ]))
            })
boot.ci(b1,type="bca",L=L)
Does anyone know if this is the way I'm supposed to be doing it?
Update: the original author of the boot package responded to an e-mail:

... you are correct that the issue is that you are doing a parametric bootstrap. The bca intervals implemented in boot are non-parametric intervals and this should have been stated explicitly somewhere. The formulae for parametric bca intervals are not the same and depend on derivatives of the least favourable family likelihood when there are nuisance parameters, as in your case. (See pp. 206-207 in Davison & Hinkley.) empinf assumes that the statistic is in one of the forms used for non-parametric bootstrapping (which you did in your example call to empinf), but your original call to boot (correctly) had the statistic in a different form appropriate for parametric resampling.

You can certainly do what you're doing, but I am not sure of the theoretical properties of mixing parametric resampling with non-parametric interval estimation.
After looking at the boot.ci help page, I decided to use a boot object constructed along the lines of an example in Ch. 6 of Davison and Hinkley to see whether it generated the errors you observed. I get a warning but no errors:
require(boot)
lmcoef <- function(data, i) {
  d <- data[i, ]
  d.reg <- lm(y ~ x, d)
  coef(d.reg)
}
lmboot <- boot(d, lmcoef, R = 999)
m1
boot.ci(lmboot, index = 2)  # presuming the interest is in the x-coefficient
#----------------------------------
BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
Based on 999 bootstrap replicates

CALL :
boot.ci(boot.out = lmboot, index = 2)

Intervals :
Level      Normal              Basic
95%   (-0.0210,  0.0261 )   (-0.0236,  0.0245 )

Level     Percentile            BCa
95%   (-0.0171,  0.0309 )   (-0.0189,  0.0278 )
Calculations and Intervals on Original Scale
Warning message:
In boot.ci(lmboot, index = 2) :
bootstrap variances needed for studentized intervals
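Incidentally, that warning can be addressed by having the statistic return a variance estimate alongside each coefficient and then pointing boot.ci at the (estimate, variance) pair with a length-2 index. A sketch (lmcoef_v is a hypothetical variant, not from the original post):

# Return each coefficient plus its estimated variance; boot.ci's
# studentized intervals expect the variance in a second indexed column
lmcoef_v <- function(data, i) {
  d <- data[i, ]
  d.reg <- lm(y ~ x, d)
  c(coef(d.reg), diag(vcov(d.reg)))  # estimates in 1:2, variances in 3:4
}
lmboot_v <- boot(d, lmcoef_v, R = 999)
boot.ci(lmboot_v, type = "stud", index = c(2, 4))  # x-coefficient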
