R Loop For Extracting Values

hsb2 <- read.csv("https://stats.idre.ucla.edu/stat/data/hsb2.csv")
names(hsb2)
varlist <- names(hsb2)[8:11]
models <- lapply(varlist, function(x) {
lm(substitute(read ~ i, list(i = as.name(x))), data = hsb2)
})
## look at the first element of the list, model 1
models[[1]]
The code above generates a series of simple regression models, one for each independent variable. I then want to extract the coefficient and standard error for each of the variables listed in varlist. My attempt is shown below.
ATTEMPT = lapply(1:length(models), function(x) {
cbind(cov, coef(summary(models[[x]]))[2,1:2])})
The desired output has three columns: variable, coefficient, and standard error.

How about:
ATTEMPT2 = lapply(1:length(models), function(x) {
cf <- coef(summary(models[[x]]))
data.frame(Variable=rownames(cf)[2],
Estimate=cf[2,1],
Std.Error=cf[2,2])})
(df2 <- do.call("rbind", ATTEMPT2))
# Variable Estimate Std.Error
# 1 write 0.6455300 0.06168323
# 2 math 0.7248070 0.05827449
# 3 science 0.6525644 0.05714318
# 4 socst 0.5935322 0.05317162
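If the broom package is available, the same table can be built with tidy(), which returns one row per coefficient; a minimal sketch, assuming the models list created above (the slope is the second row of each tidy output):
library(broom)
ATTEMPT3 <- lapply(models, function(m) tidy(m)[2, c("term", "estimate", "std.error")])
do.call("rbind", ATTEMPT3)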


Function which outputs statistics for each variable combination

I want to write a function combinations_features(y, x) which goes through all combinations of three variables and outputs the R squared, adjusted R squared, AIC and BIC for each combination.
My solution
combinations_features <- function(y, x) {
# Define empty vectors to store statistics
feature_vec_1 <- feature_vec_2 <-
feature_vec_3 <- feature_vec_4 <- c()
# Obtaining all combinations containing three variables
comb_names <- utils::combn(colnames(x), 3)
# For each combination obtain wanted statistics
for (i in 1:ncol(comb_names)) {
feature_vec_1 <- append(
feature_vec_1, summary(lm(y ~ ., data = x[, comb_names[, i]]))$adj.r.squared
)
feature_vec_2 <- append(
feature_vec_2, summary(lm(y ~ ., data = x[, comb_names[, i]]))$r.squared
)
feature_vec_3 <- append(
feature_vec_3, AIC(lm(y ~ ., data = x[, comb_names[, i]]))
)
feature_vec_4 <- append(
feature_vec_4, BIC(lm(y ~ ., data = x[, comb_names[, i]]))
)
}
# Assign everything into data frame
data.frame(
"Adj R2" = feature_vec_1, "R2" = feature_vec_2,
"AIC" = feature_vec_3, "BIC" = feature_vec_4
)
}
Let's see how it works - define some artificial data and give it to the function.
set.seed(42)
predictors <- data.frame(rnorm(100), runif(100), rexp(100), rpois(100, 1))
dependent <- rnorm(100)
> combinations_features(dependent, predictors)
Adj.R2 R2 AIC BIC
1 -0.0283756015 0.002787295 276.2726 289.2985
2 0.0000677269 0.030368705 273.4678 286.4937
3 -0.0011990695 0.029140296 273.5944 286.6203
4 0.0015404392 0.031796789 273.3204 286.3463
However, I find this code inefficient for two reasons:
(1) Loop - I loop over the columns of the matrix comb_names; I wonder if this can be avoided somehow.
(2) Length of the code - this code is huge, because I define a feature_vec for each statistic and append to each of them separately. I wonder if the assignment can be done in one command.
Could you please give me a hand with improving my code by telling me whether (1) or (2) is possible?
How about this, which relies on bind_rows() from tidyverse? I don't think there's a way to avoid looping over the combinations, but lapply makes everything a little neater, IMHO.
combinations_features1 <- function(y, x) {
comb_names <- utils::combn(colnames(x), 3)
bind_rows(
lapply(
1:ncol(comb_names),
function(z) {
m <- lm(y ~ ., data = x[, comb_names[,z]])
s <- summary(m)
tibble(Adj.R2=s$adj.r.squared, R2=s$r.squared, AIC=AIC(m), BIC=BIC(m))
}
)
)
}
combinations_features1(dependent, predictors)
# A tibble: 4 x 4
Adj.R2 R2 AIC BIC
<dbl> <dbl> <dbl> <dbl>
1 -0.0284 0.00279 276. 289.
2 0.0000677 0.0304 273. 286.
3 -0.00120 0.0291 274. 287.
4 0.00154 0.0318 273. 286.
bind_rows(), if given a list, binds the elements of the list into a single data.frame.
Same idea as above; applying lapply directly to the list of combinations also works:
combinations_features <- function(y,x){
do.call(rbind, lapply(utils::combn(colnames(x), 3, simplify=FALSE),
function(i){
f1 <- lm(y ~ ., data=x[, i])
data.frame(Adj.R2=summary(f1)$adj.r.squared,
R2=summary(f1)$r.squared,
AIC=AIC(f1), BIC=BIC(f1))
}))
}

R glm generating different p-values for same categorical variables of different type

I am fitting a model using glm. My data has a mix of integer variables and categorical variables. The categorical variables are stored as codes and hence have integer type in the data. Initially I passed the categorical variables to the model in integer format as they were and got a fit. When I looked at the p-values to see which variables are significant, I noticed a few were significant that I was not expecting.
This is when I realized that maybe the categorical variables in integer form were creating an issue - for example, code 3 might be given more weight than code 1 (I am not sure about this and it would be great if someone could confirm it). After some research I found that a categorical integer variable can be converted to a factor, so I did that and refit the model.
I also saw some posts recommending converting to binary (0/1) variables, so I did that as well. So now I have 3 results -
r1 >> with categorical integer variables
r2 >> with categorical factor variables
r3 >> with categorical variable converted to binary
I feel that output 1, with categorical integer variables, is incorrect (please confirm). But between outputs 2 and 3 I am confused about which one to use, since:
the p-values are different,
which one would be more accurate?
can I relate the p-values of output 3 to those of output 2?
How does glm handle such variables?
I hope glm inside a for loop is not an issue.
My data set is big; can glm be done using data.table?
Below is my code with some sample data so the results can be reproduced.
library("plyr")
library("foreign")
library("data.table")
#####Generating sample data
set.seed(1200)
id <- 1:100
bill <- sample(1:3,100,replace = T)
nos <- sample(1:40,100,replace = T)
stru <- sample(1:4,100,replace = T)
type <- sample(1:7,100,replace = T)
value <- sample(100:1000,100,replace = T)
df1 <- data.frame(id,bill,nos,stru,type,value)
var1 <- c("bill","nos","stru")
options(scipen = 999)
r1 <- data.frame()
for(type1 in unique(df1$type)){
for(var in var1){
# dynamically generate formula
fmla <- as.formula(paste0("value ~ ", var))
# fit glm model
fit <- glm(fmla, data=df1[df1$type == type1,],family='quasipoisson')
p.value <- coef(summary(fit))[8]
cfit <- coef(summary(fit))
# create data frame
df2 <- data.frame(var = var, type = type1, basket="value",p.value = cfit[8],stringsAsFactors = F)
r1 <- rbind(r1, df2)
}
}
##### converting the categorical numeric variables to factor variables
df1$bill_f <- as.factor(bill)
df1$stru_f <- as.factor(stru)
var1 <- c("bill_f","nos","stru_f")
r2 <- data.frame()
for(type1 in unique(df1$type)){
for(var in var1){
# dynamically generate formula
fmla <- as.formula(paste0("value ~ ", var))
# fit glm model
fit <- glm(fmla, data=df1[df1$type == type1,],family='quasipoisson')
p.value <- coef(summary(fit))[8]
cfit <- coef(summary(fit))
# create data frame
df2 <- data.frame(var = var, type = type1, basket="value",p.value = cfit[8],stringsAsFactors = F)
r2 <- rbind(r2, df2)
}
}
#####converting the categorical numeric variables to binary format (1/0)
df1$bill_1 <- ifelse(df1$bill == 1,1,0)
df1$bill_2 <- ifelse(df1$bill == 2,1,0)
df1$bill_3 <- ifelse(df1$bill == 3,1,0)
df1$stru_1 <- ifelse(df1$stru == 1,1,0)
df1$stru_2 <- ifelse(df1$stru == 2,1,0)
df1$stru_3 <- ifelse(df1$stru == 3,1,0)
df1$stru_4 <- ifelse(df1$stru == 4,1,0)
var1 <- c("bill_1","bill_2","bill_3","nos","stru_1","stru_2","stru_3")
r3 <- data.frame()
for(type1 in unique(df1$type)){
for(var in var1){
# dynamically generate formula
fmla <- as.formula(paste0("value ~ ", var))
# fit glm model
fit <- glm(fmla, data=df1[df1$type == type1,],family='quasipoisson')
p.value <- coef(summary(fit))[8]
cfit <- coef(summary(fit))
# create data frame
df2 <- data.frame(var = var, type = type1, basket="value",p.value = cfit[8],stringsAsFactors = F)
r3 <- rbind(r3, df2)
}
}
Your feeling is mostly correct. For a GLM you should make the distinction between continuous variables and discrete (categorical) variables.
Binary variables are variables which contain only 2 levels, for example 0 and 1.
Since your categorical variables have more than 2 levels, you should use the factor() function.
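To illustrate how glm handles a factor, here is a minimal sketch on the sample data above (it assumes the bill_f, bill_2 and bill_3 columns created in the question): as.factor makes R expand the variable into one 0/1 dummy column per level except the first, which becomes the reference level, so a factor model and a model on the manually created dummies with one level dropped give identical coefficients and p-values.
# factor coding: bill_f is expanded into dummies for levels 2 and 3 (level 1 is the reference)
fit_f <- glm(value ~ bill_f, data = df1, family = 'quasipoisson')
# manual dummies: include all levels except one to avoid perfect collinearity
fit_b <- glm(value ~ bill_2 + bill_3, data = df1, family = 'quasipoisson')
head(model.matrix(fit_f)) # same design matrix as fit_b, up to column names
coef(summary(fit_f))      # estimates and p-values match those of fit_b
coef(summary(fit_b))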

Multi-level regression model on multiply imputed data set in R (Amelia, zelig, lme4)

I am trying to run a multi-level model on multiply imputed data (created with Amelia); the data come from a clustered sample with 24 groups and N = 150.
library("ZeligMultilevel")
ML.model.0 <- zelig(dv~1 + tag(1|group), model="ls.mixed",
data=a.out$imputations)
summary(ML.model.0)
This code produces the following error:
Error in object[[1]]$result$call :
$ operator not defined for this S4 class
If I run an OLS regression, it works:
model.0 <- zelig(dv~1, model="ls", data=a.out$imputations)
m.0 <- coef(summary(model.0))
print(m.0, digits = 2)
Value Std. Error t-stat p-value
[1,] 45 0.34 130 2.6e-285
I am happy to provide a working example.
require(Zelig)
require(Amelia)
require(ZeligMultilevel)
data(freetrade)
length(freetrade$country) #grouping variable
#Imputation of missing data
a.out <- amelia(freetrade, m=5, ts="year", cs="country")
# Models: (1) OLS; (2) multi-level
model.0 <- zelig(polity~1, model="ls", data=a.out$imputations)
m.0 <- coef(summary(model.0))
print(m.0, digits = 2)
ML.model.0 <- zelig(polity~1 + tag(1|country), model="ls.mixed", data=a.out$imputations)
summary(ML.model.0)
I think the issue may be with how Zelig interfaces with Amelia's mi class. Therefore, I turned toward an alternative R package: lme4.
require(lme4)
write.amelia(obj=a.out, file.stem="inmi", format="csv", na="NA")
diff <-list(5) # a list to store each model, 5 is the number of the imputed datasets
for (i in 1:5) {
file.name <- paste("inmi", 5 ,".csv",sep="")
data.to.use <- read.csv(file.name)
diff[[5]] <- lmer(polity ~ 1 + (1 | country),
data = data.to.use)}
diff
The result is the following:
[[1]]
[1] 5
[[2]]
NULL
[[3]]
NULL
[[4]]
NULL
[[5]]
Linear mixed model fit by REML
Formula: polity ~ 1 + (1 | country)
Data: data.to.use
AIC BIC logLik deviance REMLdev
1006 1015 -499.9 1002 999.9
Random effects:
Groups Name Variance Std.Dev.
country (Intercept) 14.609 3.8222
Residual 17.839 4.2236
Number of obs: 171, groups: country, 9
Fixed effects:
Estimate Std. Error t value
(Intercept) 2.878 1.314 2.19
The results remain the same when I replace diff[[5]] by diff[[4]], diff[[3]], etc. Still, I am wondering whether these are actually the results for the combined data set or for one single imputed data set. Any thoughts? Thanks!
I modified the summary function for this object (fetched the source and opened the ./R/summary.R file). I added some curly braces to make the code flow and changed a getcoef call to coef. This should work for this particular case, but I'm not sure whether it is general. The getcoef function searches for a slot coef3, which I have never seen. Perhaps @BenBolker can cast an eye over this? I can't guarantee this is what the result should look like, but the output looks legitimate to me. Perhaps you could contact the package authors to correct this in a future version.
summary(ML.model.0)
Model: ls.mixed
Number of multiply imputed data sets: 5
Combined results:
Call:
zelig(formula = polity ~ 1 + tag(1 | country), model = "ls.mixed",
data = a.out$imputations)
Coefficients:
Value Std. Error t-stat p-value
[1,] 2.902863 1.311427 2.213515 0.02686218
For combined results from datasets i to j, use summary(x, subset = i:j).
For separate results, use print(summary(x), subset = i:j).
Modified function:
summary.MI <- function (object, subset = NULL, ...) {
if (length(object) == 0) {
stop('Invalid input for "subset"')
} else {
if (length(object) == 1) {
return(summary(object[[1]]))
}
}
# Roman: This function isn't fetching coefficients robustly. Something goes wrong. Contact package author.
getcoef <- function(obj) {
# S4
if (!isS4(obj)) {
coef(obj)
} else {
if ("coef3" %in% slotNames(obj)) {
obj#coef3
} else {
obj#coef
}
}
}
#
res <- list()
# Get indices
subset <- if (is.null(subset)) {
1:length(object)
} else {
c(subset)
}
# Compute the summary of all objects
for (k in subset) {
res[[k]] <- summary(object[[k]])
}
# Answer
ans <- list(
zelig = object[[1]]$name,
call = object[[1]]$result@call,
all = res
)
#
coef1 <- se1 <- NULL
#
for (k in subset) {
# tmp <- getcoef(res[[k]]) # Roman: I changed this to coef, not 100% sure if the output is the same
tmp <- coef(res[[k]])
coef1 <- cbind(coef1, tmp[, 1])
se1 <- cbind(se1, tmp[, 2])
}
rows <- nrow(coef1)
Q <- apply(coef1, 1, mean)
U <- apply(se1^2, 1, mean)
B <- apply((coef1-Q)^2, 1, sum)/(length(subset)-1)
var <- U+(1+1/length(subset))*B
nu <- (length(subset)-1)*(1+U/((1+1/length(subset))*B))^2
coef.table <- matrix(NA, nrow = rows, ncol = 4)
dimnames(coef.table) <- list(rownames(coef1),
c("Value", "Std. Error", "t-stat", "p-value"))
coef.table[,1] <- Q
coef.table[,2] <- sqrt(var)
coef.table[,3] <- Q/sqrt(var)
coef.table[,4] <- pt(abs(Q/sqrt(var)), df=nu, lower.tail=F)*2
ans$coefficients <- coef.table
ans$cov.scaled <- ans$cov.unscaled <- NULL
for (i in 1:length(ans)) {
if (is.numeric(ans[[i]]) && !names(ans)[i] %in% c("coefficients")) {
tmp <- NULL
for (j in subset) {
r <- res[[j]]
tmp <- cbind(tmp, r[[pmatch(names(ans)[i], names(res[[j]]))]])
}
ans[[i]] <- apply(tmp, 1, mean)
}
}
class(ans) <- "summaryMI"
ans
}
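For comparison, the pooling done in coef.table above follows Rubin's rules (pooled estimate Q, within-imputation variance U, between-imputation variance B, total variance U + (1 + 1/m)B). A minimal sketch of the same pooling done by hand on lme4 fits of each imputed data set (assuming a.out from above; intercept only):
library(lme4)
fits <- lapply(a.out$imputations, function(d) lmer(polity ~ 1 + (1 | country), data = d))
est <- sapply(fits, function(f) fixef(f)["(Intercept)"])
se <- sapply(fits, function(f) summary(f)$coefficients["(Intercept)", "Std. Error"])
m <- length(est)
Q <- mean(est) # pooled estimate
U <- mean(se^2) # average within-imputation variance
B <- sum((est - Q)^2) / (m - 1) # between-imputation variance
c(Value = Q, "Std. Error" = sqrt(U + (1 + 1/m) * B))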

Rolling regression return multiple objects

I am trying to build a rolling regression function based on the example here, but in addition to returning the predicted values, I would like to return some rolling model diagnostics (i.e. coefficients, t-values, and maybe R^2). I would like the results to be returned in discrete objects based on the type of result. The example provided in the link above successfully creates the rolling predictions, but I need some assistance packaging and writing out the rolling model diagnostics:
In the end, I would like the function to return separate objects for:
Predictions
Coefficients
T values
R^2
Below is the code:
require(zoo)
require(dynlm)
## Create Some Dummy Data
set.seed(12345)
x <- rnorm(mean=3,sd=2,100)
y <- rep(NA,100)
y[1] <- x[1]
for(i in 2:100) y[i]=1+x[i-1]+0.5*y[i-1]+rnorm(1,0,0.5)
int <- 1:100
dummydata <- data.frame(int=int,x=x,y=y)
zoodata <- as.zoo(dummydata)
rolling.regression <- function(series) {
mod <- dynlm(formula = y ~ L(y) + L(x), data = as.zoo(series)) # get model
nextOb <- max(series[,'int'])+1 # To get the first row that follows the window
if (nextOb<=nrow(zoodata)) { # You won't predict the last one
# 1) Make Predictions
predicted <- predict(mod,newdata=data.frame(x=zoodata[nextOb,'x'],y=zoodata[nextOb,'y']))
attributes(predicted) <- NULL
c(predicted=predicted,square.res <-(predicted-zoodata[nextOb,'y'])^2)
# 2) Extract coefficients
#coefficients <- coef(mod)
# 3) Extract rolling coefficient t values
#tvalues <- ????(mod)
# 4) Extract rolling R^2
#rsq <-
}
}
rolling.window <- 20
results.z <- rollapply(zoodata, width=rolling.window, FUN=rolling.regression, by.column=F, align='right')
So after figuring out how to extract t-values from the model (i.e. mod), what do I need to do to make the function return three (3) separate objects (i.e. predictions, coefficients, and t-values)?
I am fairly new to R, really new to functions, and extremely new to zoo, and I'm stuck.
Any assistance would be greatly appreciated.
I hope I understood you correctly; here is a small edit of your function:
rolling.regression <- function(series) {
mod <- dynlm(formula = y ~ L(y) + L(x), data = as.zoo(series)) # get model
nextOb <- max(series[,'int'])+1 # To get the first row that follows the window
if (nextOb<=nrow(zoodata)) { # You won't predict the last one
# 1) Make Predictions
predicted=predict(mod,newdata=data.frame(x=zoodata[nextOb,'x'],y=zoodata[nextOb,'y']))
attributes(predicted)<-NULL
#Solution 1; Quicker to write
# c(predicted=predicted,
# square.res=(predicted-zoodata[nextOb,'y'])^2,
# summary(mod)$coef[, 1],
# summary(mod)$coef[, 3],
# AdjR = summary(mod)$adj.r.squared)
#Solution 2; Get column names right
c(predicted=predicted,
square.res=(predicted-zoodata[nextOb,'y'])^2,
coef_intercept = summary(mod)$coef[1, 1],
coef_Ly = summary(mod)$coef[2, 1],
coef_Lx = summary(mod)$coef[3, 1],
tValue_intercept = summary(mod)$coef[1, 3],
tValue_Ly = summary(mod)$coef[2, 3],
tValue_Lx = summary(mod)$coef[3, 3],
AdjR = summary(mod)$adj.r.squared)
}
}
rolling.window <- 20
results.z <- rollapply(zoodata, width=rolling.window, FUN=rolling.regression, by.column=F, align='right')
head(results.z)
predicted square.res coef_intercept coef_Ly coef_Lx tValue_intercept tValue_Ly tValue_Lx AdjR
20 10.849344 0.721452 0.26596465 0.5798046 1.049594 0.38309211 7.977627 13.59831 0.9140886
21 12.978791 2.713053 0.26262820 0.5796883 1.039882 0.37741499 7.993014 13.80632 0.9190757
22 9.814676 11.719999 0.08050796 0.5964808 1.073941 0.12523824 8.888657 15.01353 0.9340732
23 5.616781 15.013297 0.05084124 0.5984748 1.077133 0.08964998 9.881614 16.48967 0.9509550
24 3.763645 6.976454 0.26466039 0.5788949 1.068493 0.51810115 11.558724 17.22875 0.9542983
25 9.433157 31.772658 0.38577698 0.5812665 1.034862 0.70969330 10.728395 16.88175 0.9511061
To see how it works, make a small example with a regression:
x <- rnorm(1000); y <- 2*x + rnorm(1000)
reg <- lm(y ~ x)
summary(reg)$coef
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.02694322 0.03035502 0.8876033 0.374968
x 1.97572544 0.03177346 62.1816310 0.000000
As you can see, calling summary first and then getting the coefficients of it (coef(summary(reg)) works as well) gives you a table with estimates, standard errors, and t-values. So estimates are saved in column 1 of that table, t-values in column 3. And that's how I obtain them in the updated rolling.regression function.
EDIT
I updated my solution; it now also contains the adjusted R2. If you just want the plain R2, use r.squared instead of adj.r.squared.
EDIT 2
A quick and dirty hack for naming the columns:
rolling.regression <- function(series) {
mod <- dynlm(formula = y ~ L(y) + L(x), data = as.zoo(series)) # get model
nextOb <- max(series[,'int'])+1 # To get the first row that follows the window
if (nextOb<=nrow(zoodata)) { # You won't predict the last one
# 1) Make Predictions
predicted=predict(mod,newdata=data.frame(x=zoodata[nextOb,'x'],y=zoodata[nextOb,'y']))
attributes(predicted)<-NULL
#Get variable names
strVar <- c("Intercept", paste0("L", 1:(nrow(summary(mod)$coef)-1)))
vec <- c(predicted=predicted,
square.res=(predicted-zoodata[nextOb,'y'])^2,
AdjR = summary(mod)$adj.r.squared,
summary(mod)$coef[, 1],
summary(mod)$coef[, 3])
names(vec)[4:length(vec)] <- c(paste0("Coef_", strVar), paste0("tValue_", strVar))
vec
}
}
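After re-running rollapply with this version, the prefixed column names make it straightforward to split the result into the separate objects asked for; a minimal sketch, assuming the Coef_/tValue_ names created above:
results.z <- rollapply(zoodata, width=rolling.window, FUN=rolling.regression, by.column=F, align='right')
predictions <- results.z[, c("predicted", "square.res")] # rolling predictions and squared residuals
coefficients <- results.z[, grep("^Coef_", colnames(results.z))] # rolling coefficients
tvalues <- results.z[, grep("^tValue_", colnames(results.z))] # rolling t-values
rsq <- results.z[, "AdjR"] # rolling adjusted R^2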

Block bootstrap from subject list

I'm trying to efficiently implement a block bootstrap technique to get the distribution of regression coefficients. The main outline is as follows.
I have a panel data set, and say firm and year are the indices. For each iteration of the bootstrap, I wish to sample n subjects with replacement. From this sample, I need to construct a new data frame that is an rbind() stack of all the observations for each sampled subject, run the regression, and pull out the coefficients. Repeat for a bunch of iterations, say 100.
Each firm can potentially be selected multiple times, so I need to include its data multiple times in each iteration's data set.
Using a loop and subset approach, like below, seems computationally burdensome.
Note that for my real data frame, n and the number of iterations are much larger than in the example below.
My initial thought is to break the existing data frame into a list by subject using the split() command. From there, use
sample(unique(df1$subject),n,replace=TRUE)
to get the new list, then perhaps implement quickdf from the plyr package to construct a new data frame.
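For concreteness, that split()-based idea might look roughly like this (a sketch only; it reuses the Grunfeld data and n from the example below and stacks the sampled subjects with do.call(rbind, ...) rather than quickdf):
grun.list <- split(Grunfeld, Grunfeld$firm) # one data frame per firm
picked <- sample(names(grun.list), n, replace=TRUE) # sample firm ids with replacement
newdata <- do.call("rbind", grun.list[picked]) # stack the sampled firms' observations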
Example of my current slow code:
require(plm)
data("Grunfeld", package="plm")
firms = unique(Grunfeld$firm)
n = 10
iterations = 100
mybootresults=list()
for(j in 1:iterations){
v = sample(length(firms),n,replace=TRUE)
newdata = NULL
for(i in 1:n){
newdata = rbind(newdata,subset(Grunfeld, firm == v[i]))
}
reg1 = lm(value ~ inv + capital, data = newdata)
mybootresults[[j]] = coefficients(reg1)
}
mybootresults = as.data.frame(t(matrix(unlist(mybootresults),ncol=iterations)))
names(mybootresults) = names(reg1$coefficients)
mybootresults
(Intercept) inv capital
1 373.8591 6.981309 -0.9801547
2 370.6743 6.633642 -1.4526338
3 528.8436 6.960226 -1.1597901
4 331.6979 6.239426 -1.0349230
5 507.7339 8.924227 -2.8661479
...
...
How about something like this:
myfit <- function(x, i) {
mydata <- do.call("rbind", lapply(i, function(n) subset(Grunfeld, firm==x[n])))
coefficients(lm(value ~ inv + capital, data = mydata))
}
firms <- unique(Grunfeld$firm)
library(boot)
b0 <- boot(firms, myfit, 999)
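The bootstrap replicates then sit in b0$t, one row of coefficients per resample, so a data frame like mybootresults above can be recovered along these lines (a sketch):
mybootresults <- as.data.frame(b0$t)
names(mybootresults) <- names(b0$t0) # "(Intercept)", "inv", "capital"
head(mybootresults)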
You can also use the tsboot function in the boot package with a fixed-block resampling scheme.
require(plm)
require(boot)
data(Grunfeld)
### each firm is of length 20
table(Grunfeld$firm)
## 1 2 3 4 5 6 7 8 9 10
## 20 20 20 20 20 20 20 20 20 20
blockboot <- function(data)
{
coefficients(lm(value ~ inv + capital, data = data))
}
### fixed length (every 20 obs, so for each different firm) block bootstrap
set.seed(321)
boot.1 <- tsboot(Grunfeld, blockboot, R = 99, l = 20, sim = "fixed")
boot.1
## Bootstrap Statistics :
## original bias std. error
## t1* 410.81557 -25.785972 174.3766
## t2* 5.75981 0.451810 2.0261
## t3* -0.61527 0.065322 0.6330
dim(boot.1$t)
## [1] 99 3
head(boot.1$t)
## [,1] [,2] [,3]
## [1,] 522.11 7.2342 -1.453204
## [2,] 626.88 4.6283 0.031324
## [3,] 479.74 3.2531 0.637298
## [4,] 557.79 4.5284 0.161462
## [5,] 568.72 5.4613 -0.875126
## [6,] 379.04 7.0707 -1.092860
Here is a method that should typically be faster than the accepted answer, returns the same results and does not rely on additional packages (except boot). The key here is to use which and integer indexing to construct each data.frame replicate rather than split/subset and do.call/rbind.
# get function for boot
myIndex <- function(x, i) {
# select the observations to subset. Likely repeated observations
blockObs <- unlist(lapply(i, function(n) which(x[n] == Grunfeld$firm)))
# run regression for given replicate, return estimated coefficients
coefficients(lm(value~ inv + capital, data=Grunfeld[blockObs,]))
}
now, bootstrap
# get result
library(boot)
set.seed(1234)
b1 <- boot(firms, myIndex, 200)
Run the accepted answer
set.seed(1234)
b0 <- boot(firms, myfit, 200)
Let's eyeball a comparison
using indexing
b1
ORDINARY NONPARAMETRIC BOOTSTRAP
Call:
boot(data = firms, statistic = myIndex, R = 200)
Bootstrap Statistics :
original bias std. error
t1* 410.8155650 -6.64885086 197.3147581
t2* 5.7598070 0.37922066 2.4966872
t3* -0.6152727 -0.04468225 0.8351341
Original version
b0
ORDINARY NONPARAMETRIC BOOTSTRAP
Call:
boot(data = firms, statistic = myfit, R = 200)
Bootstrap Statistics :
original bias std. error
t1* 410.8155650 -6.64885086 197.3147581
t2* 5.7598070 0.37922066 2.4966872
t3* -0.6152727 -0.04468225 0.8351341
These look pretty close. Now, a bit more checking
identical(b0$t, b1$t)
[1] TRUE
and
identical(summary(b0), summary(b1))
[1] TRUE
Finally, we'll do a quick benchmark
library(microbenchmark)
microbenchmark(index={b1 <- boot(firms, myIndex, 200)},
rbind={b0 <- boot(firms, myfit, 200)})
On my computer, this returns
Unit: milliseconds
expr min lq mean median uq max neval
index 292.5770 296.3426 303.5444 298.4836 301.1119 395.1866 100
rbind 712.1616 720.0428 729.6644 724.0777 731.0697 833.5759 100
So, direct indexing is more than 2 times faster at every level of the distribution.
note on missing fixed effects
As with most of the answers, the issue of missing "fixed effects" may emerge. Commonly, fixed effects are used as controls and the researcher is interested in one or a couple of variables that will be included with every selected observation. In this common case, there is no (or very little) harm in restricting the returned result of the myIndex or myfit function to only the variables of interest.
The solution needs to be modified to manage fixed effects.
library(boot) # for boot
library(plm) # for Grunfeld
library(dplyr) # for left_join
## Get the Grunfeld firm data (10 firms, each for 20 years, 1935-1954)
data("Grunfeld", package="plm")
## Create dataframe with unique firm identifier (one line per firm)
firms <- data.frame(firm=unique(Grunfeld$firm),junk=1)
## for boot(), X is the firms dataframe; i index the sampled firms
myfit <- function(X, i) {
## join the sampled firms to their firm-year data
mydata <- left_join(X[i,], Grunfeld, by="firm")
## Distinguish between multiple resamples of the same firm
## Otherwise they have the same id in the fixed effects regression
## And trouble ensues
mydata <- mutate(group_by(mydata,firm,year),
firm_uniq4boot = paste(firm,"+",row_number())
)
## Run regression with and without firm fixed effects
c(coefficients(lm(value ~ inv + capital, data = mydata)),
coefficients(lm(value ~ inv + capital + factor(firm_uniq4boot), data = mydata)))
}
set.seed(1)
system.time(b <- boot(firms, myfit, 1000))
summary(b)
summary(lm(value ~ inv + capital, data=Grunfeld))
summary(lm(value ~ inv + capital + factor(firm), data=Grunfeld))
I found a method using dplyr::left_join that is a bit more concise, only takes about 60% as long, and gives the same results as in the answer by Sean. Here's a complete self-contained example.
library(boot) # for boot
library(plm) # for Grunfeld
library(dplyr) # for left_join
# First get the data
data("Grunfeld", package="plm")
firms <- unique(Grunfeld$firm)
myfit1 <- function(x, i) {
# x is the vector of firms
# i are the indexes into x
mydata <- do.call("rbind", lapply(i, function(n) subset(Grunfeld, firm==x[n])))
coefficients(lm(value ~ inv + capital, data = mydata))
}
myfit2 <- function(x, i) {
# x is the vector of firms
# i are the indexes into x
mydata <- left_join(data.frame(firm=x[i]), Grunfeld, by="firm")
coefficients(lm(value ~ inv + capital, data = mydata))
}
# rbind method
set.seed(1)
system.time(b1 <- boot(firms, myfit1, 5000))
## user system elapsed
## 13.51 0.01 13.62
# left_join method
set.seed(1)
system.time(b2 <- boot(firms, myfit2, 5000))
## user system elapsed
## 8.16 0.02 8.26
b1
## original bias std. error
## t1* 410.8155650 9.2896499 198.6877889
## t2* 5.7598070 0.5748503 2.5725441
## t3* -0.6152727 -0.1200954 0.7829191
b2
## original bias std. error
## t1* 410.8155650 9.2896499 198.6877889
## t2* 5.7598070 0.5748503 2.5725441
## t3* -0.6152727 -0.1200954 0.7829191
