Equation for 95% CI on a regression in R?

I have calculated and plotted a linear model and a 95% confidence interval as follows:
lm <- lm(cars$speed~cars$dist)
conf <- predict(lm, interval='confidence')
conf <- cbind(cars,conf)
CI <- as.data.frame(confint(lm))
library(ggplot2)
plot<-ggplot(conf,aes(dist,speed)) +
geom_line(aes(y=fit),color='black') +
geom_line(aes(y=lwr),color='red',linetype='dashed') +
geom_line(aes(y=upr),color='red',linetype='dashed')
plot
I am wondering what the equation is to calculate the lower and upper limits (the red lines) on the plot. I assumed these could be calculated using the values from the confint() function?
I tried calculating the lwr and upr values like so, but I did not get the same result:
lower <- CI[1,1] + CI[2,1]*cars$dist
upper <- CI[1,2] + CI[2,2]*cars$dist

Here is how the confidence interval is calculated in predict.lm. For a simple linear regression with n observations, the limits of the pointwise confidence band for the fitted line at a given predictor value x are

fit(x) +/- t(alpha/2, n-2) * sqrt( MSres * (1/n + (x - mean(x))^2 / Sxx) )

where Sxx = sum((x - mean(x))^2) and MSres is the residual mean square. (Note that this is not the same as plugging the confidence limits of the intercept and slope from confint() into the regression equation, which is what your lower/upper attempt does.) This can be implemented as follows:
my.lm <- lm(cars$speed~cars$dist)
X <- model.matrix(delete.response(terms(my.lm)), cars)  # design matrix (intercept column and dist)
fit.values <- c(X %*% coef(my.lm))
data.fit <- data.frame(x=cars$dist, fit=fit.values)
# compute t-value
tval <- qt((1-0.95)/2, df=nrow(data.fit)-2)
# compute Sxx
Sxx <- sum((data.fit$x - mean(data.fit$x))^2)
# compute MSres
MSres <- sum(my.lm$residuals^2)/(nrow(data.fit)-2)
# calculate confidence interval
CI <- data.frame(t(apply(data.fit, 1, FUN = function(row){
  sqrt(MSres * (1/nrow(data.fit) + (as.numeric(row[1]) - mean(data.fit$x))^2/Sxx)) *
    tval * c(1, -1) + as.numeric(row[2])
})))
names(CI) <- c("lwr","upr")
head(CI)
# lwr upr
#1 6.917090 10.31299
#2 8.472965 11.40620
#3 7.307526 10.58483
#4 10.764584 13.08820
#5 9.626909 12.23906
#6 8.472965 11.40620
You may compare the results with the ones you obtained from predict.
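For example, a quick check (assuming the objects from the question and from the code above are still in the workspace):
all.equal(CI$lwr, conf$lwr)  # should be TRUE
all.equal(CI$upr, conf$upr)  # should be TRUE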
Hope it helps.

Related

How to perform bootstrapping for estimation and inference of quantile regression using multiply imputed data in R?

I am trying to manually pool results from quantile regression models run on multiply imputed data in R using mice. I make use of a bootstrapping procedure to get 95% CIs and P values of the model terms, in which model parameters and their standard errors are obtained after sampling a number of rows equal to the number of unique participants in my data set. This procedure is repeated 500 times for each of the m imputed data sets. Then, as a last step, I pool the estimated coefficients and their standard errors of the resulting 500 * m regression models according to Rubin's rules (1987) (see e.g. https://bookdown.org/mwheymans/bookmi/rubins-rules.html). To speed things up, I use foreach to split the analyses over multiple processor cores and a for loop to iterate over the m imputed data sets.
However, there seems to be a flaw in the part wherein the results are pooled. When I look at the pooled results, I observe that the P values are not in accordance with the 95% CIs (e.g. P < 0.05 when 0 is included in the 95% CI).
To illustrate this issue, I have made a reproducible example, using these publicly available data: https://archive.ics.uci.edu/ml/machine-learning-databases/00519/heart_failure_clinical_records_dataset.csv
Because there are no missing data in this data set, I introduce them myself and impute the data (m = 10 multiply imputed data sets with 20 iterations). I use set.seed for reproducibility.
Note that I use lm instead of quantreg::rq in this example.
# load data
projdir <- "my_directory"
d <- read.csv(file = file.path(projdir, 'heart_failure_clinical_records_dataset.csv'))
#### introduce missing values
set.seed(1)
# age
age_miss_tag <- rbinom(nrow(d), 1, 0.3)
d$age[age_miss_tag == 1] <- NA # MCAR
# serum creatinine
creat_miss_tag <- rbinom(nrow(d), 1, 0.3)
d$serum_creatinine[creat_miss_tag == 1 & d$anaemia == 0] <- NA # MAR
# CK
CK_miss_tag <- rbinom(nrow(d), 1, 0.3)
d$creatinine_phosphokinase[CK_miss_tag & d$platelets > median(d$platelets)] <- NA # MAR
# platelets
platelets_miss_tag <- rbinom(nrow(d), 1, 0.3)
d$platelets[platelets_miss_tag == 1] <- NA # MCAR
library(mice); library(mitml); library(miceadds); library(splines); library(foreach); library(doParallel)
# impute data
imp <- mice(d, maxit = 20, m = 10, seed = 2)
# log creatinine
implong <- complete(imp, 'long', include = FALSE)
implong$log_creat <- log(implong$serum_creatinine)
imp <- miceadds::datlist2mids(split(implong, implong$.imp))
# compute values for Boundary.knots
temp <- complete(imp, 'long', include = FALSE)
B_knots <- rowMeans(sapply(split(temp, temp$.imp), function(x) {
quantile(x$age, c(0.1, 0.9))
}))
# Convert mids object into a datlist
longlist <- miceadds::mids2datlist(imp)
# fit model based on the original data and use the terms in the foreach loop below
# in order to fix the position of the inner knots
fit_orig <- lm(log_creat ~
# Main effects
ns(age, df = 2, B = c(B_knots[1], B_knots[2])) * sex,
data = longlist[[1]])
To further speed things up, I use OLS instead of quantile regression here and parallelize the process.
# make cluster used in foreach
cores_2_use <- detectCores() - 1
cl <- makeCluster(cores_2_use)
clusterSetRNGStream(cl, iseed = 9956)
registerDoParallel(cl)
# No. of bootstrap samples to be taken
n_iter <- 500
boot.1 <- c()
for(k in seq_along(longlist)){
boot.1[[k]] <- foreach(i = seq_len(n_iter),
.combine = rbind,
.packages = c('mice', 'mitml', 'splines')) %dopar% {
# store data from which rows can be sampled
longlist0 <- longlist[[k]]
# set seed for reproducibility
set.seed(i)
# sample rows
boot_dat <- longlist0[sample(1:nrow(longlist0), replace = TRUE), ]
# linear regression model based on sampled rows
fit1 <- lm(terms(fit_orig), data = boot_dat)
# save coefficients
fit1$coef
}
}
stopCluster(cl)
As a last step, I pool the results according to Rubin's rules.
n_cols <- dim(boot.1[[1]])[2]
list <- c()
for(i in seq_len(n_cols)) {
# extract coefficients
parameter <- lapply(boot.1, function(x){
x[,i]
})
m <- length(parameter)
for(k in seq_len(m)) {
names(parameter[[k]]) <- NULL
}
Q <- sapply(parameter, mean)
U <- sapply(parameter, var) # (standard error of estimate)^2
#### Pooling
# Pooled univariate estimate
qbar <- mean(Q)
# Mean of the variances (i.e. the pooled within-imputation variance)
ubar <- mean(U)
# Between-imputation variance
btw_var <- var(Q)
# Total variance of the pooled estimate
tot_var <- ubar + btw_var + (btw_var / m)
# Relative increase in variance due to non-response
r_var <- (btw_var + (btw_var / m)) / ubar
# Fraction of missing information
lambda <- (btw_var + (btw_var / m)) / tot_var
# degrees of freedom for the t-distribution according to Rubin (1987)
df_old <- (m - 1) / lambda^2
# sample size in the imputed data sets
n_sample <- nrow(longlist[[1]])
# observed degrees of freedom
df_observed <- (((n_sample - n_cols) + 1) / ((n_sample - n_cols) + 3)) *
(n_sample - n_cols) * (1 - lambda)
# adjusted degrees of freedom according to Barnard & Rubin (1999)
df_adjusted <- (df_old * df_observed) / (df_old + df_observed)
# 95% confidence interval of qbar
lwr <- qbar - qt(0.975, df_adjusted) * sqrt(tot_var)
upr <- qbar + qt(0.975, df_adjusted) * sqrt(tot_var)
# F statistic
q <- ((0 - qbar)^2 / tot_var)^2
# Significance level associated with the null value Q[0]
p_value <- pf(q, df1 = 1, df2 = df_adjusted, lower.tail = FALSE)
list[[i]] <- cbind(qbar, lwr, upr, p_value)
}
names(list) <- colnames(boot.1[[1]])
list
Obviously, the P value shown below is not in accordance with the 95% CI (as 0 is included in the CI, so the P value should be ≥0.05).
> list
$`(Intercept)`
qbar lwr upr p_value
[1,] 0.06984595 -0.02210231 0.1617942 0.008828337
EDIT (29 Dec 2021)
As @Gerko Vink notes in his answer, multiple imputation and bootstrapping both induce variance. The variance induced by imputation is taken care of by Rubin's rules; the bootstrap variance is not. Unfortunately, mice::pool will not work with the output returned by quantreg::rq.
I am aware that bootstrap CIs can be constructed with a naive percentile-based approach, as shown in this post, but I am inclined to think this is not the correct way to proceed.
Does anyone know how to appropriately take care of the extra variance induced by bootstrapping when using rq?
EDIT (30 Dec 2021)
Inspired by this recent post, I decided not to hit the road of bootstrapping anymore, but instead manually extract the point estimates and variances from each of the imputed data sets and pool them using Rubin's rules. I have posted this approach as answer below. Any input on how to appropriately take care of the extra variance induced by bootstrapping when using rq is still very welcome though!
Bootstrapping and multiple imputation both induce variance. The imputation variance is taken care of by Rubin's rules for parameters with normal sampling distributions. The bootstrap variance is not.
Two remarks:
First, there is a small error in your code. You're calculating the bootstrap variance about Q in U <- sapply(parameter, var). No need for U <- U/n_iter. U is already the variance and sapply(parameter, sd) would yield the bootstrapped standard error.
Second, you're using bootstrap parameters to calculate a parametric interval and p-value. That seems needlessly complicated and, as you can see, potentially problematic. Why not calculate the bootstrap CI?
See also this link for some inspiration with respect to different means of calculating the CIs and their respective validity.
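For illustration only, a minimal sketch of the naive percentile approach applied to the boot.1 object from the question; it simply stacks all bootstrap draws across the m imputed data sets and bypasses Rubin's rules, so treat it as a baseline rather than a recommendation:
all_draws <- do.call(rbind, boot.1)                      # (n_iter * m) x n_cols matrix
apply(all_draws, 2, quantile, probs = c(0.025, 0.975))   # naive percentile intervals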
Here is a small sim that demonstrates that you cannot expect the two to be identical for a finite number of bootstrap replications.
library(purrr)
library(magrittr)
#fix seed
set.seed(123)
#some data
n = 1000
d <- rnorm(n, 0, 1)
# ci function
fun <- function(x){
se <- var(x)/length(x)
lwr <- mean(x) - 1.96 * se
upr <- mean(x) + 1.96 * se
ci <- c(lwr, upr)
return(ci)
}
# bootstrap
boot <- replicate(500,
d[sample(1:1000, 1000, replace = TRUE)],
simplify = FALSE)
# bootstrapped ci's based on parameters
boot.param.ci <- boot %>%
map(~.x %>% fun) %>%
do.call("rbind", args = .)
# bootstrap CI
boot.ci <- boot %>%
map(~.x %>% mean) %>%
unlist %>%
quantile(c(.025, .975))
# Overview
data.frame(param = fun(d),
boot.param = boot.param.ci %>% colMeans,
boot.ci = boot.ci)
#> param boot.param boot.ci
#> 2.5% 0.01420029 0.01517527 -0.05035913
#> 97.5% 0.01805545 0.01904181 0.07245449
Created on 2021-12-22 by the reprex package (v2.0.1)
The following reprex also demonstrates that the bootstrap applied to the imputed data yields different variance estimates under the same pooling rules.
library(purrr)
library(magrittr)
library(mice)
#fix seed
set.seed(123)
imp <- mice(boys,
m = 10,
printFlag = FALSE)
imp %>%
complete("all") %>%
map(~.x %$%
lm(age ~ hgt + hc)) %>%
pool %>%
summary(conf.int = TRUE)
#> term estimate std.error statistic df p.value 2.5 %
#> 1 (Intercept) -1.9601179 0.809167659 -2.422388 682.5182 0.01567825 -3.5488747
#> 2 hgt 0.1690468 0.002784939 60.700342 572.1861 0.00000000 0.1635768
#> 3 hc -0.2138941 0.021843724 -9.792018 639.0432 0.00000000 -0.2567883
#> 97.5 %
#> 1 -0.3713610
#> 2 0.1745167
#> 3 -0.1710000
imp %>%
complete("all") %>%
map(~.x %>%
.[sample(1:748, 748, replace = TRUE), ] %$%
lm(age ~ hgt + hc)) %>%
pool %>%
summary(conf.int = TRUE)
#> term estimate std.error statistic df p.value 2.5 %
#> 1 (Intercept) -1.9810146 1.253312293 -1.580623 22.57546 1.278746e-01 -4.5763892
#> 2 hgt 0.1689181 0.004124538 40.954423 24.47123 0.000000e+00 0.1604141
#> 3 hc -0.2133606 0.033793045 -6.313743 22.29686 2.217445e-06 -0.2833890
#> 97.5 %
#> 1 0.6143599
#> 2 0.1774221
#> 3 -0.1433322
Created on 2021-12-22 by the reprex package (v2.0.1)
For quantile regression, mice::pool will not work with the output returned by quantreg::rq, because (according to this post) there is no agreed upon method to calculate standard errors, which are required to pool results under multiple imputation.
An ad hoc solution would be to manually extract the point estimates and variances from each of the imputed data sets and pool them using Rubin's rules.
First, a reprex using lm to verify whether results from the manual approach and mice::pool are identical.
library(mice)
imp <- mice(nhanes, print = FALSE, seed = 123)
# fit linear model
fit <- with(imp, lm(bmi ~ chl + hyp))
# manually pool univariate estimates using Rubin's rules
pool_manual <- \(model_object) {
  m <- length(model_object$analyses)
  Q <- sapply(model_object$analyses, \(x) summary(x)$coefficients[, 'Estimate'])
  U <- sapply(model_object$analyses, \(x) (summary(x)$coefficients[, 'Std. Error'])^2)
  qbar <- rowMeans(Q)
  ubar <- rowMeans(U)
  btw_var <- apply(Q, 1, var)
  tot_var <- ubar + btw_var + (btw_var / m)
  lambda <- (btw_var + (btw_var / m)) / tot_var
  df_old <- (m - 1) / lambda^2
  n_sample <- length(residuals(model_object$analyses[[1]]))
  n_cols <- dim(Q)[1]
  df_com <- n_sample - n_cols
  df_observed <- ((df_com + 1) / (df_com + 3)) * df_com * (1 - lambda)
  df_adjusted <- (df_old * df_observed) / (df_old + df_observed)
  lwr <- qbar - qt(0.975, df_adjusted) * sqrt(tot_var)
  upr <- qbar + qt(0.975, df_adjusted) * sqrt(tot_var)
  q <- (0 - qbar)^2 / tot_var
  p_value <- pf(q, df1 = 1, df2 = df_adjusted, lower.tail = FALSE)
  df <- data.frame(noquote(rownames(Q)), qbar, lwr, upr, p_value)
  rownames(df) <- NULL
  names(df) <- c('term', 'estimate', '2.5 %', '97.5 %', 'p.value')
  return(df)
}
Verify.
> pool_manual(fit)
term estimate 2.5 % 97.5 % p.value
1 (Intercept) 21.78583831 8.99373786 34.57793875 0.004228746
2 chl 0.03303449 -0.02812005 0.09418903 0.254696358
3 hyp -1.07291395 -5.57406829 3.42824039 0.624035769
> extract <- c('term', 'estimate', '2.5 %', '97.5 %', 'p.value')
> summary(pool(fit), conf.int = TRUE)[, extract]
term estimate 2.5 % 97.5 % p.value
1 (Intercept) 21.78583831 8.99373786 34.57793875 0.004228746
2 chl 0.03303449 -0.02812005 0.09418903 0.254696358
3 hyp -1.07291395 -5.57406829 3.42824039 0.624035769
Quantile regression
Now, let's pool results from rq for the expected median of the outcome.
library(quantreg)
# fit quantile regression model
fit <- with(imp, rq(bmi ~ chl + hyp, tau = 0.5))
To be able to pool results from rq, only the summary method used to extract point estimates and variances from each of the imputed data sets needs to be adjusted in pool_manual.
Q <- sapply(model_object$analyses, \(x) summary.rq(x, covariance = TRUE)$coefficients[, 'Value'])
U <- sapply(model_object$analyses, \(x) (summary.rq(x, covariance = TRUE)$coefficients[, 'Std. Error'])^2)
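Putting it together, a sketch of the adjusted function (identical to the lm version above apart from the two extraction lines):
pool_manual <- \(model_object) {
  m <- length(model_object$analyses)
  Q <- sapply(model_object$analyses,
              \(x) summary.rq(x, covariance = TRUE)$coefficients[, 'Value'])
  U <- sapply(model_object$analyses,
              \(x) (summary.rq(x, covariance = TRUE)$coefficients[, 'Std. Error'])^2)
  qbar <- rowMeans(Q)
  ubar <- rowMeans(U)
  btw_var <- apply(Q, 1, var)
  tot_var <- ubar + btw_var + (btw_var / m)
  lambda <- (btw_var + (btw_var / m)) / tot_var
  df_old <- (m - 1) / lambda^2
  n_sample <- length(residuals(model_object$analyses[[1]]))
  n_cols <- dim(Q)[1]
  df_com <- n_sample - n_cols
  df_observed <- ((df_com + 1) / (df_com + 3)) * df_com * (1 - lambda)
  df_adjusted <- (df_old * df_observed) / (df_old + df_observed)
  lwr <- qbar - qt(0.975, df_adjusted) * sqrt(tot_var)
  upr <- qbar + qt(0.975, df_adjusted) * sqrt(tot_var)
  q <- (0 - qbar)^2 / tot_var
  p_value <- pf(q, df1 = 1, df2 = df_adjusted, lower.tail = FALSE)
  df <- data.frame(noquote(rownames(Q)), qbar, lwr, upr, p_value)
  rownames(df) <- NULL
  names(df) <- c('term', 'estimate', '2.5 %', '97.5 %', 'p.value')
  return(df)
}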
Result
> pool_manual(fit)
term estimate 2.5 % 97.5 % p.value
1 (Intercept) 22.23452856 0.8551626 43.6138945 0.04461337
2 chl 0.03487894 -0.0857199 0.1554778 0.47022312
3 hyp -1.43636147 -6.0666990 3.1939761 0.52455041
> summary(pool(fit), conf.int = TRUE)[, extract]
Error in rq.fit.br(x, y, tau = tau, ci = TRUE, ...) :
unused arguments (effects = "fixed", parametric = TRUE, exponentiate = FALSE)

Stata vs. R: Delta method provides different results for relative risk SEs from a logit model

I've been trying to estimate the conditional mean treatment effect of covariates in a logit regression (expressed as a relative risk), along with its standard errors, for inference purposes. The delta method is necessary to calculate the standard errors for these treatment effects. I've been trying to recreate in R the results of the user-written Stata command adjrr, which calculates the relative risk with standard errors and confidence intervals.
The adjrr command in Stata calculates the adjusted relative risk (the conditional mean treatment effect of interest for my project) and its SEs using the delta method. The deltamethod function in R (from the msm package, loaded below) should produce the same results; however, this is not the case.
How can I replicate the results from Stata in R?
I used the following self-generated data: (https://migariane.github.io/DeltaMethodEpiTutorial.nb.html).
R code below:
generateData <- function(n, seed){
set.seed(seed)
age <- rnorm(n, 65, 5)
age65p <- ifelse(age>=65, T, F)
cmbd <- rbinom(n, size=1, prob = plogis(1 - 0.05 * age))
Y <- rbinom(n, size=1, prob = plogis(1 - 0.02* age - 0.02 * cmbd))
data.frame(Y, cmbd, age, age65p)
}
# Describing the data
data <- generateData(n = 1000, seed = 777)
str(data)
logfit <- glm(Y ~ age65p + cmbd, data = data, family = binomial)
summary(logfit)
p1 <- predict(logfit, newdata = data.frame(age65p = T, cmbd = 0), type="response")
p0 <- predict(logfit, newdata = data.frame(age65p = F, cmbd = 0), type="response")
rr <- p1 / p0
rr
0.8123348 #result
library(msm)
se_rr_delta <- deltamethod( ~(1 + exp(-x1)) / (1 + exp(-x1 -x2)), coef(logfit), vcov(logfit))
se_rr_delta
0.6314798 #result
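As a sanity check (a sketch) that the formula passed to deltamethod matches the predicted probabilities above (x1 is the intercept, x2 the age65pTRUE coefficient), the same relative risk can be computed directly from the coefficients; this only checks the point estimate, not the SE:
b <- coef(logfit)
(1 + exp(-b[1])) / (1 + exp(-b[1] - b[2]))  # should reproduce rr = p1/p0 above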
Stata Code (using same data):
logit Y i.age65p i.cmbd
adjrr age65p
//results below
R1 = 0.3685 (0.0218) 95% CI (0.3259, 0.4112)
R0 = 0.4524 (0.0222) 95% CI (0.4090, 0.4958)
ARR = 0.8146 (0.0626) 95% CI (0.7006, 0.9471)
ARD = -0.0839 (0.0311) 95% CI (-0.1449, -0.0229)
p-value (R0 = R1): 0.0071
p-value (ln(R1/R0) = 0): 0.0077

Predict using felm output with standard errors

Is there a way to get predict behavior with standard errors from lfe::felm if the fixed effects are swept out using the projection method in felm? This question is very similar to the question here, but none of the answers to that question can be used to estimate standard errors or confidence/prediction intervals. I know that there's currently no predict.felm, but I am wondering if there are workarounds similar to those linked above that might also work for estimating the prediction interval.
library(DAAG)
library(lfe)
model1 <- lm(data = cps1, re74 ~ age + nodeg + marr)
predict(model1, newdata = data.frame(age=40, nodeg = 0, marr=1), se.fit = T, interval="prediction")$fit
# Result: fit lwr upr
# 1 18436.18 2339.335 34533.03
model2 <- felm(data = cps1, re74 ~ age | nodeg + marr)
predict(model2, newdata = data.frame(age=40, nodeg = 0, marr=1), se.fit = T, interval="prediction")$fit
# Does not work
The goal is to estimate a prediction interval for yhat, for which I think I'd need to compute the full variance-covariance matrix (including the fixed effects). I haven't been able to figure out how to do this, and I'm wondering if it's even computationally feasible.
After conversations with several people, I don't believe it is possible to obtain an estimate of the distribution of yhat = Xb (where X includes both the covariates and the fixed effects) directly from felm, which is what this question boils down to. It is possible to bootstrap it, however. The following code does so in parallel. There is scope for performance improvements, but this gives the general idea.
Note: here I do not compute the full prediction interval, just the SEs on Xb. Obtaining the prediction interval is straightforward, though: add sigma^2 to the squared SE and take the square root (a short sketch follows the code below).
library(DAAG)
library(lfe)
library(parallel)
model1 <- lm(data = cps1, re74 ~ age + nodeg + marr)
yhat_lm <- predict(model1, newdata = data.frame(age=40, nodeg = 0, marr=1), se.fit = T)
set.seed(42)
boot_yhat <- function(b) {
print(b)
n <- nrow(cps1)
boot <- cps1[sample(1:n, n, replace=T),]
lm.model <- lm(data=demeanlist(boot[, c("re74", "age")], list(factor(boot$nodeg), factor(boot$marr))),
formula = re74 ~ age)
fe <- getfe(felm(data = boot, re74 ~ age | nodeg + marr))
bootResult <- predict(lm.model, newdata = data.frame(age = 40)) +
fe$effect[fe$fe == "nodeg" & fe$idx==0] +
fe$effect[fe$fe == "marr" & fe$idx==1]
return(bootResult)
}
B = 1000
yhats_boot <- mclapply(1:B, boot_yhat)
plot(density(rnorm(10000, mean=yhat_lm$fit, sd=yhat_lm$se.fit)))
lines(density(unlist(yhats_boot)), col="red")
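To go from the bootstrapped Xb to an approximate prediction interval as described above (a sketch, reusing yhats_boot and model2 from the code above):
yhats <- unlist(yhats_boot)
se_xb <- sd(yhats)                                        # bootstrap SE of Xb
sigma2 <- sum(model2$residuals^2) / model2$df.residual    # residual variance
mean(yhats) + c(-1, 1) * qnorm(0.975) * sqrt(se_xb^2 + sigma2)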
From your first model predict(.) yields this:
# fit lwr upr
# 1 18436.18 2339.335 34533.03
Following 李哲源 we can achieve these results manually, too.
beta.hat.1 <- coef(model1) # save coefficients
# model matrix: age=40, nodeg = 0, marr=1:
X.1 <- cbind(1, matrix(c(40, 0, 1), ncol=3))
pred.1 <- as.numeric(X.1 %*% beta.hat.1) # prediction
V.1 <- vcov(model1) # save var-cov matrix
se2.1 <- unname(rowSums((X.1 %*% V.1) * X.1)) # prediction var
alpha.1 <- qt((1-0.95)/2, df = model1$df.residual) # 5 % level
pred.1 + c(alpha.1, -alpha.1) * sqrt(se2.1) # 95%-CI
# [1] 18258.18 18614.18
sigma2.1 <- sum(model1$residuals ^ 2) / model1$df.residual # sigma.sq
PI.1 <- pred.1 + c(alpha.1, -alpha.1) * sqrt(se2.1 + sigma2.1) # prediction interval
matrix(c(pred.1, PI.1), nrow = 1, dimnames = list(1, c("fit", "lwr", "upr")))
# fit lwr upr
# 1 18436.18 2339.335 34533.03
Now, applying your linked example to multiple FEs, we get these results:
lm.model <- lm(data=demeanlist(cps1[, c(8, 2)],
list(as.factor(cps1$nodeg),
as.factor(cps1$marr))), re74 ~ age)
fe <- getfe(model2)
predict(lm.model, newdata = data.frame(age = 40)) + fe$effect[fe$idx=="1"]
# [1] 15091.75 10115.21
The first value is with and the second without added FE (try fe$effect[fe$idx=="1"]).
Now we're following the manual approach above.
beta.hat <- coef(model2) # coefficient
x <- 40 # age = 40
pred <- as.numeric(x %*% beta.hat) # prediction
V <- model2$vcv # var/cov
se2 <- unname(rowSums((x %*% V) * x)) # prediction var
alpha <- qt((1-0.95)/2, df = model2$df.residual) # 5% level
pred + c(alpha, -alpha) * sqrt(se2) # CI
# [1] 9599.733 10630.697
sigma2 <- sum(model2$residuals ^ 2) / model2$df.residual # sigma^2
PI <- pred + c(alpha, -alpha) * sqrt(se2 + sigma2) # PI
matrix(c(pred, PI), nrow = 1, dimnames = list(1, c("fit", "lwr", "upr"))) # output
# fit lwr upr
# 1 10115.21 -5988.898 26219.33
As we see, the fit is the same as with the linked example's approach, but now with a prediction interval. (Disclaimer: the logic of the approach should be straightforward; the values of the PI should still be validated, e.g. in Stata with reghdfe.)
Edit: In case you want to achieve exactly the same output from felm() that predict.lm() yields with the linear model1, you simply need to "include" the fixed effects in your model again (see model3 below) and then follow the same approach. For convenience, you could easily wrap it into a function.
library(DAAG)
library(lfe)
model3 <- felm(data = cps1, re74 ~ age + nodeg + marr)
pv <- c(40, 0, 1) # prediction x-values
predict0.felm <- function(mod, pv. = pv) {
  beta.hat <- coef(mod)                                       # coefficients
  x <- cbind(1, matrix(pv., ncol = 3))                        # prediction vector
  pred <- as.numeric(x %*% beta.hat)                          # prediction
  V <- mod[['vcv']]                                           # var/cov
  se2 <- unname(rowSums((x %*% V) * x))                       # prediction var
  alpha <- qt((1-0.95)/2, df = mod[['df.residual']])          # 5% level
  CI <- structure(pred + c(alpha, -alpha) * sqrt(se2),
                  names = c("CI lwr", "CI upr"))              # CI
  sigma2 <- sum(mod[['residuals']]^2) / mod[['df.residual']]  # sigma^2
  PI <- pred + c(alpha, -alpha) * sqrt(se2 + sigma2)          # PI
  mx <- matrix(c(pred, PI), nrow = 1,
               dimnames = list(1, c("PI fit", "PI lwr", "PI upr")))  # output
  list(CI, mx)
}
predict0.felm(model3)[[2]]
# PI fit PI lwr PI upr
# 1 18436.18 2339.335 34533.03
This way you can achieve the same prediction interval with felm() as with predict.lm().
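The confidence-interval part can be checked the same way (a quick sketch, reusing model1, model3 and predict0.felm from above; the two intervals should coincide):
predict0.felm(model3)[[1]]
predict(model1, newdata = data.frame(age = 40, nodeg = 0, marr = 1),
        interval = "confidence")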

What formula does prop.test use?

The prop.test function apparently doesn't use the formula given here to create a confidence interval, so what formula is being used? Below is a confidence interval CI computed with prop.test and a confidence interval CI.2 computed using the formula given here.
CI <- prop.test(5,10)$conf.int
se.hat <- 0.5/sqrt(10)
z <- qnorm(0.975)
CI.2 <- 0.5 + c(-1,1)*z*se.hat
CI
CI.2 # not the same
It uses the Wilson score interval with continuity correction, i.e. the interval obtained by inverting the Yates continuity-corrected chi-squared test.
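As a quick illustration (a sketch): with correct = FALSE, prop.test instead returns the plain Wilson score interval, which has a simple closed form.
p <- 0.5; n <- 10; z <- qnorm(0.975)
# plain (uncorrected) Wilson score interval
(p + z^2/(2*n) + c(-1, 1) * z * sqrt(p*(1 - p)/n + z^2/(4*n^2))) / (1 + z^2/n)
prop.test(5, 10, correct = FALSE)$conf.int  # should match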
We can confirm Ryan's answer by comparing the results from IC.wc and prop.test using the example below:
IC.wc <- function(x, n, conf.level = 0.95){
  p <- x/n ; q <- 1 - p
  alpha <- 1 - conf.level
  z <- qnorm(p = alpha/2, lower.tail = FALSE)
  const1 <- z * sqrt(z^2 - 2 - 1/n + 4*p*(n*q + 1))
  const2 <- z * sqrt(z^2 + 2 - 1/n + 4*p*(n*q - 1))
  L <- (2*n*p + z^2 - 1 - const1) / (2*(n + z^2))
  U <- (2*n*p + z^2 + 1 + const2) / (2*(n + z^2))
  c(L, U)
}
IC.wc(x=35, n=50)
prop.test(x=35, n=50, correct=TRUE)$conf.int
I figured out a way to get a CI that is exactly the same as the one from the formula:
library(BSDA)
x=35
n=50
phat=x/n
xvar=c(rep(1,x),rep(0,n-x))## replicate the original variable!
s=sqrt(phat*(1-phat))
z.test(xvar,sigma.x=s)
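For comparison, the interval above is just the Wald formula from the linked page (a quick check, reusing phat and n from above):
phat + c(-1, 1) * qnorm(0.975) * sqrt(phat * (1 - phat) / n)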

Bootstrap parameter estimate of non-linear optimization in R: Why is it different than the regular parameter estimate?

Here's the short version of my question. The code is below.
I calculated the parameters for the non-linear von Bertalanffy growth equation in R using optim(), and now I am trying to add 95% confidence intervals to the von B growth coefficient K by bootstrapping. For at least one of the years of data, when I summarize the bootstrapped output of the growth coefficient K, the mean and median parameter estimates from bootstrapping are quite different from the point estimate:
> summary(temp.store)  # summary of bootstrap values
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
0.002449 0.005777 0.010290 0.011700 0.016970 0.056720
> est.K
[1] 0.01655956  # point estimate from the optimization
I suspect the discrepancy arises because some of the bootstrap resamples cause errors in the optimization that bias the result, even though I have used try() to keep the optimization from crashing when a combination of input values produces an error. So I would like to know what to do to fix that issue. I think I'm doing things correctly otherwise, because the fitted curve looks right.
Also, I have run this code for data from other years, and in at least one other year the bootstrap estimate and the regular estimate are very close.
Long-winded version:
The von Bertalanffy growth curve (VBGC) for length is given by:
L(t) = L.inf * [1 - exp(-K*(t-t0))] (Eq. 3.1.0.1, from FAO)
where L(t) is the fish's length, L.inf is the asymptotic maximum length, K is the growth coefficient, t is the time step and t0 is when growth began. L(t) and t are the observed data. Usually time or age is measured in years, but here I am looking at juvenile fish data, and I have made t the day of the year ("doy"), starting with January 1 = 1.
To estimate the starting parameters for the optimization, I have used a linearization of the VBGC equation (Eq. 3.3.3.1 in the FAO document): -log(1 - L(t)/L.inf) = K*t - K*t0, which is linear in t with slope K and intercept -K*t0.
doy <- c(156,205,228,276,319,380)
len <- c(36,56,60,68,68,71)
data06 <- data.frame(doy,len)
Function to get starting parameters for the optimization:
get.init <- function(dframe){ # linearization of the von B
  l.inf <- 80 # by eyeballing max juvenile fish
  # make a response variable and store it in the data frame:
  # Eqn. 3.3.3.1 in FAO document
  dframe$vonb.y <- -log(1 - (dframe$len)/l.inf)
  lin.vonb <- lm(vonb.y ~ doy, data = dframe)
  icept <- lin.vonb$coef[1]             # 0.01534013; intercept of the linearized fit
  slope <- k.lin <- lin.vonb$coef[2]    # slope is the K parameter
  t0 <- -icept/slope                    # from this relationship: intercept = -K * t0
  pars <- c(l.inf, as.numeric(slope), as.numeric(t0))
}
Sums of squares for von Bertalanffy growth equation
vbl.ssq <- function(theta, data){
  linf = theta[1]; k = theta[2]; t0 = theta[3]
  # name variables for ease of use
  obs.length = data$len
  age = data$doy
  # von B equation
  pred.length = linf*(1 - exp(-k*(age - t0)))
  # sums of squares
  ssq = sum((obs.length - pred.length)^2)
}
Estimate parameters
#Get starting parameter values
theta_init <- get.init(dframe=data06)
# optimize VBGC by minimizing sums of square differences
len.fit <- optim(par=theta_init, fn=vbl.ssq, method="BFGS", data=data06)
est.linf <- len.fit$par[1] # vonB len-infinite
est.K <- len.fit$par[2] # vonB K
est.t0 <- len.fit$par[3] # vonB t0
Bootstrapping
# set up for bootstrap loop
tmp.frame <- data.frame()
temp.store <- vector()
# bootstrap to get 95% conf ints on growth coef K
for (j in 1:1000){
# choose indices at random, with replacement
indices <- sample(1:length(data06[,1]),replace=T)
# values from original data corresponding to those indices
new.len <- data06$len[indices]
new.doy <- data06$doy[indices]
tmp.frame <- data.frame(new.doy,new.len)
colnames(tmp.frame) <- c("doy","len")
init.par <- get.init(tmp.frame)
# now get the vonB params for the randomly selected samples
# using try() to keep optimizing errors from crashing the program
try( len.fit.bs <- optim(par=init.par, fn=vbl.ssq, method="BFGS", data=tmp.frame))
tmp.k <- len.fit.bs$par[2]
temp.store[j] <- tmp.k
}
95% confidence interval for K parameter
k.ci <- quantile(temp.store,c(0.025,0.975))
# 2.5% 97.5%
#0.004437702 0.019784178
Here's the problem:
# > summary(temp.store)
#     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
# 0.002449 0.005777 0.010290 0.011700 0.016970 0.056720
#
# > est.K
# [1] 0.01655956
Example of error:
Error in optim(par = init.par, fn = vbl.ssq, method = "BFGS", data = tmp.frame) :
non-finite finite-difference value [2]
I don't believe I am making any errors with the optimization because the VBGC fit looks reasonable. Here are the plots:
plot(x=data06$doy,y=data06$len,xlim=c(0,550),ylim=c(0,100))
legend(x="topleft",legend=paste("Length curve 2006"), bty="n")
curve(est.linf*(1-exp(-est.K*(x-est.t0))), add=T,type="l")
plot(x=2006,y=est.K, main="von B growth coefficient for length; 95% CIs",
ylim=c(0,0.025))
arrows(x0=2006,y0=k.ci[1],x1=2006,y1=k.ci[2], code=3,
angle=90,length=0.1)
First of all, you have a very small number of values, possibly too few to trust the bootstrap method. Moreover, a high proportion of fits fails for the classic bootstrap because, due to the resampling, you often do not have enough distinct x values.
Here is an implementation using nls with a self-starting model and the boot package.
doy <- c(156,205,228,276,319,380)
len <- c(36,56,60,68,68,71)
data06 <- data.frame(doy,len)
plot(len ~ doy, data = data06)
fit <- nls(len ~ SSasympOff(doy, Asym, lrc, c0), data = data06)
summary(fit)
#profiling CI
proCI <- confint(fit)
# 2.5% 97.5%
#Asym 68.290477 75.922174
#lrc -4.453895 -3.779994
#c0 94.777335 126.112523
curve(predict(fit, newdata = data.frame(doy = x)), add = TRUE)
#classic bootstrap
library(boot)
set.seed(42)
boot1 <- boot(data06, function(DF, i) {
tryCatch(coef(nls(len ~ SSasympOff(doy, Asym, lrc, c0), data = DF[i,])),
error = function(e) c(Asym = NA, lrc = NA, c0 = NA))
}, R = 1e3)
#proportion of unsuccessful fits
mean(is.na(boot1$t[, 1]))
#[1] 0.256
#bootstrap CI
boot1CI <- apply(boot1$t, 2, quantile, probs = c(0.025, 0.5, 0.975), na.rm = TRUE)
# [,1] [,2] [,3]
#2.5% 69.70360 -4.562608 67.60152
#50% 71.56527 -4.100148 113.9287
#97.5% 74.79921 -3.697461 151.03541
#bootstrap of the residuals
data06$res <- residuals(fit)
data06$fit <- fitted(fit)
set.seed(42)
boot2 <- boot(data06, function(DF, i) {
DF$lenboot <- DF$fit + DF[i, "res"]
tryCatch(coef(nls(lenboot ~ SSasympOff(doy, Asym, lrc, c0), data = DF)),
error = function(e) c(Asym = NA, lrc = NA, c0 = NA))
}, R = 1e3)
#proportion of unsuccessful fits
mean(is.na(boot2$t[, 1]))
#[1] 0
#(residuals) bootstrap CI
boot2CI <- apply(boot2$t, 2, quantile, probs = c(0.025, 0.5, 0.975), na.rm = TRUE)
# [,1] [,2] [,3]
#2.5% 70.19380 -4.255165 106.3136
#50% 71.56527 -4.100148 113.9287
#97.5% 73.37461 -3.969012 119.2380
proCI[2,1]
CIs_k <- data.frame(lwr = c(exp(proCI[2, 1]),
exp(boot1CI[1, 2]),
exp(boot2CI[1, 2])),
upr = c(exp(proCI[2, 2]),
exp(boot1CI[3, 2]),
exp(boot2CI[3, 2])),
med = c(NA,
exp(boot1CI[2, 2]),
exp(boot2CI[2, 2])),
estimate = exp(coef(fit)[2]),
method = c("profile", "boot", "boot res"))
library(ggplot2)
ggplot(CIs_k, aes(y = estimate, ymin = lwr, ymax = upr, x = method)) +
geom_errorbar() +
geom_point(aes(color = "estimate"), size = 5) +
geom_point(aes(y = med, color = "boot median"), size = 5) +
ylab("k") + xlab("") +
scale_color_brewer(name = "", type = "qual", palette = 2) +
theme_bw(base_size = 22)
As you see, the bootstrap CI is wider than the profile CI, and bootstrapping the residuals results in a narrower estimated CI. All of them are almost symmetric. Furthermore, the medians are close to the point estimates.
As a first step in investigating what goes wrong in your code, you should look at the proportion of failed fits from your procedure.
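For example, here is a sketch of one way to do that with the question's own objects (data06, get.init, vbl.ssq); the names n_boot, temp.store2 and n_fail are just for this sketch, and failed fits are recorded as NA instead of silently reusing the previous result:
n_boot <- 1000
temp.store2 <- rep(NA_real_, n_boot)
n_fail <- 0
for (j in seq_len(n_boot)) {
  indices <- sample(seq_len(nrow(data06)), replace = TRUE)
  tmp.frame <- data.frame(doy = data06$doy[indices], len = data06$len[indices])
  init.par <- get.init(tmp.frame)
  fit.bs <- try(optim(par = init.par, fn = vbl.ssq, method = "BFGS",
                      data = tmp.frame), silent = TRUE)
  if (inherits(fit.bs, "try-error")) {
    n_fail <- n_fail + 1              # count the failure, leave NA in temp.store2
  } else {
    temp.store2[j] <- fit.bs$par[2]   # vonB K from this resample
  }
}
n_fail / n_boot                                       # proportion of failed fits
quantile(temp.store2, c(0.025, 0.975), na.rm = TRUE)  # percentile CI from successful fits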
