Running a regression model over 30 specific set.seed values automatically using R

I am working with several linear regression models.
I want to run a linear regression model with 30 different set.seed values.
For clarity, I share the code with only two regression models and 10 seeds (in my project I have 12 regression models, and each one should be run with 30 different set.seed values).
I need a solution that runs the 30 seeds for one linear regression model automatically, so I can be away from my laptop while it runs; then I would do the same for the second regression model.
Is there a way to run the code over the 30 different set.seed values automatically, so that I get a result for each seed?
I hope everything is clear; I am happy to clarify further.
NOTE
Bear in mind that I have four related blocks for each regression model, so any change to set.seed or createFolds may affect the other blocks.
EDIT1
The dataset used
wdbc <- read.delim("airfoil_self_noise.dat",header=F)
wdbcc=as.data.frame(scale(wdbc))
#set.seed(1)
#set.seed(2)
#set.seed(3)
#set.seed(4)
...
k = 30
folds <- createFolds(wdbcc$V6, k = k, list = TRUE, returnTrain = TRUE)
## Ordinary Least Square regression ##
#Block A
lm = list()
for (i in 1:k) {
  lm[[i]] = lm(V6 ~ ., data = wdbcc[folds[[i]], ])
}
#Block B
lm_coef = list()
lm_coef_var = list()
for (j in 1:(lm[[1]]$coefficients %>% length())) {
  for (i in 1:k) {
    lm_coef[[i]] = lm[[i]]$coefficients[j]
    lm_coef_var[[j]] = lm_coef %>% unlist() %>% var()
  }
}
#Block C
lm_var = unlist(lm_coef_var)
lm_df = cbind(coefficients = lm[[1]]$coefficients %>% names() %>% as.data.frame(),
              variance = lm_var %>% as.data.frame())
colnames(lm_df) = c("coefficients", "variance_lm")
#Block D
lm_var_sum = sum(lm_var)
## PQSQ-Regression ##
X = list()
Y = list()
for (i in 1:k) {
  n = wdbcc[folds[[i]], -6]
  m = wdbcc[folds[[i]], 6]
  X[[i]] = n
  Y[[i]] = m
}
#Block A
lmPQSQ1 = list()
for (i in 1:k) {
  lmPQSQ1[[i]] = PQSQRegression(X[[i]], Y[[i]], 0.01, data = wdbcc[folds[[i]], ])
}
lmmPQSQ1 = list()
for (i in 1:k) {
  L = list(coefficients = c(lmPQSQ1[[i]][[2]], lmPQSQ1[[i]][[1]]))
  lmmPQSQ1[[i]] = L
}
#Block B
lm_coefPQSQ1 = list()
lm_coef_varPQSQ1 = list()
for (j in 1:(lmmPQSQ1[[1]]$coefficients %>% length())) {
  for (i in 1:k) {
    lm_coefPQSQ1[[i]] = lmmPQSQ1[[i]]$coefficients[j]
    lm_coef_varPQSQ1[[j]] = lm_coefPQSQ1 %>% unlist() %>% var()
  }
}
#Block C
lm_varPQSQ1 = unlist(lm_coef_varPQSQ1)
lm_dfPQSQ1 = variance = lm_varPQSQ1 %>% as.data.frame()
#Block D
PQSQ1_var_sum = sum(lm_varPQSQ1)

If I understand you correctly, you want to regress V6 on all the other variables, using both an OLS and a LAD model. You want to select k=30 random "folds" using createFolds and repeat the whole process n=30 times. As a result you want the variances for each repetition and each coefficient.
I would wrap the fitting part into a function FX, then loop over a vector of n=30 seeds with lapply to repeat FX n=30 times (the seeds can be specific values or drawn with sample).
FX <- function(seed, data, k=30) {
  set.seed(seed)  ## sets seed for each iteration
  folds <- createFolds(data[, "V6"], k=k, list=TRUE, returnTrain=TRUE)  ## folds
  ## OLS
  lm1 <- lapply(folds, function(folds) lm(V6 ~ ., data=data[folds, ]))
  lm.coefs <- t(sapply(lm1, coef))  ## lm coefficients
  ## LAD
  lad1 <- lapply(folds, function(folds) lad(V6 ~ ., data=data[folds, ], method="BR"))
  lad.coefs <- t(sapply(lad1, coef))  ## lad coefficients
  ## calculate column variances for both coef matrices
  ## use `attr<-` to add the seed as an attribute if you want
  return(`attr<-`(cbind(lm=apply(lm.coefs, 2, var), lad=apply(lad.coefs, 2, var)),
                  "seed", seed))
}
seeds <- 1:30 ## specific seeds 1, 2, ... 30
## if you want non-consecutive specific seeds, do:
# set.seed(42) ## set some initial seed
# n <- 30 ## n. o. seeds
# seeds <- sample(1:1e6, n) ## sample seeds for `FX`
res <- lapply(seeds, FX, data=wdbcc) ## lapply loop
Result
This results in a list of length 30 with variance matrices for each repetition, each model, and each coefficient.
res[1:2] ## first two lists
# [[1]]
# lm lad
# (Intercept) 9.104280e-06 1.273920e-05
# V1 2.609623e-05 6.992753e-05
# V2 7.082099e-05 2.075875e-05
# V3 1.352299e-05 1.209651e-05
# V4 7.986000e-06 9.273005e-06
# V5 5.545298e-05 1.535849e-05
# attr(,"seed")
# [1] 1
#
# [[2]]
# lm lad
# (Intercept) 4.558722e-06 2.031707e-05
# V1 2.256583e-05 9.291900e-05
# V2 6.519648e-05 2.768443e-05
# V3 1.800889e-05 9.983524e-06
# V4 1.131813e-05 1.174496e-05
# V5 3.866105e-05 1.022452e-05
# attr(,"seed")
# [1] 2
length(res)
# [1] 30
To calculate the sum of variances for each seed you may use colSums in an sapply.
# sum of variances
sov <- t(sapply(res, colSums))
dim(sov)
# [1] 30 2
head(sov)
# lm lad
# [1,] 1.829835e-04 0.0001401535
# [2,] 1.603091e-04 0.0001728735
# [3,] 1.003093e-04 0.0001972869
# [4,] 1.460591e-04 0.0001508251
# [5,] 9.915082e-05 0.0001262106
# [6,] 1.425996e-04 0.0001478449
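For example, if you want to single out the seed whose folds give the smallest total variance for the lm coefficients (just one possible use of sov; a minimal sketch, assuming res, sov, and seeds from above):
best <- which.min(sov[, "lm"]) ## position of the smallest lm variance sum
seeds[best]                    ## the corresponding seed
res[[best]]                    ## its full variance matrix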
To understand what one iteration of the lapply does, consider this:
## provide the values of first iteration for arguments of function `FX`
seed <- 1
data <- wdbcc
k <- 30
## first iteration of `lapply`
set.seed(seed)
folds <- createFolds(data[, "V6"], k=k, list=TRUE, returnTrain=TRUE) ## folds
## OLS
lm1 <- lapply(folds, function(folds) lm(V6 ~ ., data=data[folds, ]))
lm.coefs <- t(sapply(lm1, coef)) ## lm coefficients
dim(lm.coefs)
# [1] 30 6
head(lm.coefs)
# (Intercept) V1 V2 V3 V4 V5
# Fold01 -0.0039130125 -0.5806272 -0.3564769 -0.4804492 0.2271908 -0.2805472
# Fold02 0.0013260444 -0.5863764 -0.3533327 -0.4759213 0.2253128 -0.2874691
# Fold03 0.0006791787 -0.5890787 -0.3678586 -0.4832066 0.2220979 -0.2739124
# Fold04 -0.0010721593 -0.5868079 -0.3722466 -0.4895328 0.2227811 -0.2749657
# Fold05 0.0021856620 -0.5850165 -0.3495360 -0.4810657 0.2235410 -0.2936287
# Fold06 0.0001486909 -0.5872607 -0.3677774 -0.4848523 0.2275780 -0.2823764
## LAD (same as OLS)
lad1 <- lapply(folds, function(folds) lad(V6 ~ ., data=data[folds, ], method="BR"))
lad.coefs <- t(sapply(lad1, coef)) ## lad coefficients
## return: the variances for each coefficient of each model, in a matrix;
## the seed is added as an attribute, to be able to identify it later
res.1 <- `attr<-`(cbind(var.lm=apply(lm.coefs, 2, var),
                        var.lad=apply(lad.coefs, 2, var)),
                  "seed", seed)
res.1
# var.lm var.lad
# (Intercept) 9.104280e-06 1.273920e-05
# V1 2.609623e-05 6.992753e-05
# V2 7.082099e-05 2.075875e-05
# V3 1.352299e-05 1.209651e-05
# V4 7.986000e-06 9.273005e-06
# V5 5.545298e-05 1.535849e-05
# attr(,"seed")
# [1] 1
Compare res.1 with the first element of list res above.
sov.1 <- colSums(res.1)
sov.1
# var.lm var.lad
# 0.0001829835 0.0001401535
Compare sov.1 with the first row of matrix sov above.
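If a single long table over all seeds is more convenient than a list, the result can also be flattened (a minimal base R sketch, assuming res and seeds from above):
res.df <- do.call(rbind, lapply(seq_along(res), function(i)
  data.frame(seed = seeds[i], coefficient = rownames(res[[i]]),
             res[[i]], row.names = NULL)))
head(res.df) ## one row per seed and coefficient, with lm and lad variance columns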
Edit
For regression functions that use matrix notation, such as lm.fit, we may use model.matrix and do the subsetting beforehand; see the lm2.coefs line in the function, and compare the lm and lm2 columns in res.2 below, they are equal. (lm.fit is also faster than lm because it omits unnecessary calculations, and you only need the coefficients, so you may actually replace the lm line with lm.fit. There might also be a way to do the same for lad using lsfit, but honestly I'm too unfamiliar with lad to provide that solution.)
Also notice that, for the sake of brevity, I merged the two lines per model into one by using sapply directly on the $coefficients. sapply works like lapply but returns a matrix; note that we need to transpose it.
FX2 <- function(seed, data, k=30) {
  set.seed(seed)  ## sets seed for each iteration
  folds <- createFolds(data[, "V6"], k=k, list=TRUE, returnTrain=TRUE)  ## draw folds
  lm.coefs <- t(sapply(folds, function(f) lm(V6 ~ ., data=data[f, ])$coef))
  lm2.coefs <- t(sapply(folds, function(f) {
    data2 <- data[f, ]
    lm.fit(x=model.matrix(V6 ~ ., data2), y=data2[, "V6"])$coef
  }))
  lad.coefs <- t(sapply(folds, function(f) lad(V6 ~ ., data=data[f, ], method="BR")$coef))
  return(`attr<-`(cbind(lm=apply(lm.coefs, 2, var),
                        lm2=apply(lm2.coefs, 2, var),
                        lad=apply(lad.coefs, 2, var)),
                  "seed", seed))
}
seeds <- 1:30
res.2 <- lapply(seeds, FX2, data=wdbcc) ## lapply loop
res.2[1:2]
# [[1]]
# lm lm2 lad
# (Intercept) 9.104280e-06 9.104280e-06 1.273920e-05
# V1 2.609623e-05 2.609623e-05 6.992753e-05
# V2 7.082099e-05 7.082099e-05 2.075875e-05
# V3 1.352299e-05 1.352299e-05 1.209651e-05
# V4 7.986000e-06 7.986000e-06 9.273005e-06
# V5 5.545298e-05 5.545298e-05 1.535849e-05
# attr(,"seed")
# [1] 1
#
# [[2]]
# lm lm2 lad
# (Intercept) 4.558722e-06 4.558722e-06 2.031707e-05
# V1 2.256583e-05 2.256583e-05 9.291900e-05
# V2 6.519648e-05 6.519648e-05 2.768443e-05
# V3 1.800889e-05 1.800889e-05 9.983524e-06
# V4 1.131813e-05 1.131813e-05 1.174496e-05
# V5 3.866105e-05 3.866105e-05 1.022452e-05
# attr(,"seed")
# [1] 2
Data and libraries:
invisible(lapply(c("caret", "L1pack"), library, character.only=TRUE))
wdbcc <- read.delim("airfoil_self_noise.dat", header=F)
wdbcc[] <- lapply(wdbcc, scale)
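Since your project has 12 regression models in total, one way to avoid writing a separate FX per model is to pass the models as a list of fitting functions. This is only a sketch, under the assumption that every model can be wrapped as a function that takes a training subset and returns a named coefficient vector of the same length (the PQSQRegression wrapper indicated in the comment is hypothetical and untested):
models <- list(
  ols = function(d) coef(lm(V6 ~ ., data = d)),
  lad = function(d) coef(lad(V6 ~ ., data = d, method = "BR"))
  ## e.g. pqsq = function(d) ... wrap PQSQRegression() so it returns named coefficients
)
FX_all <- function(seed, data, models, k = 30) {
  set.seed(seed)
  folds <- createFolds(data[, "V6"], k = k, list = TRUE, returnTrain = TRUE)
  vars <- sapply(models, function(fit)  ## one column of coefficient variances per model
    apply(t(sapply(folds, function(f) fit(data[f, ]))), 2, var))
  `attr<-`(vars, "seed", seed)
}
res_all <- lapply(1:30, FX_all, data = wdbcc, models = models) ## 30 seeds, all models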

Related

How to bootstrap correlation using vectorised function applied to large matrix?

I understand how to bootstrap using the "boot" package in R, through the PDF for the package and also from these two examples on Stack, Bootstrapped correlation with more than 2 variables in R and Bootstrapped p-value for a correlation coefficient on R.
However, this is for small datasets (2 variables or a matrix with 5 variables). I have a very large matrix (1000+ columns), and the code I use to compute the correlation between every metabolite pair (removing duplicates and correlations of each metabolite with itself) is:
x <- colnames(dat)
GetCor = function(x,y) cor(dat[,x], dat[,y], method="spearman")
GetCor = Vectorize(GetCor)
out <- data.frame(t(combn(x,2)), stringsAsFactors = F) %>%
mutate(v = GetCor(X1,X2))
I'm not sure how I can then alter this for it to be the function I pass to statistic in boot so
boot_res<- boot(dat, ?, R=1000)
Or would I just need to obtain a matrix of the bootstrapped p-values or estimates, depending on the function code (colMeans(boot_res$t)), and get rid of the upper or lower triangle?
I was curious to know the most efficient way of going about the problem.
Something like this? It follows more or less the same lines as my answer to the 2nd question you link to in your question.
Note that I have simplified the correlation code: cor accepts a data.frame or a matrix, so pass a two-column one and keep one of the off-diagonal elements of the resulting correlation matrix.
library(boot)
bootPairwiseCor <- function(data, i) {
  d <- data[i, ]
  combn(d, 2, \(x) cor(x, method="spearman")[1, 2])
}
dat <- iris[-5]
nms <- combn(colnames(dat), 2, paste, collapse = "_")
R <- 100L
b <- boot(dat, bootPairwiseCor, R)
b
#>
#> ORDINARY NONPARAMETRIC BOOTSTRAP
#>
#>
#> Call:
#> boot(data = dat, statistic = bootPairwiseCor, R = R)
#>
#>
#> Bootstrap Statistics :
#> original bias std. error
#> t1* -0.1667777 0.0037142908 0.070552718
#> t2* 0.8818981 -0.0002851683 0.017783297
#> t3* 0.8342888 0.0006306610 0.021509280
#> t4* -0.3096351 0.0047809612 0.075976067
#> t5* -0.2890317 0.0045689001 0.069929108
#> t6* 0.9376668 -0.0014838117 0.009632318
data.frame(variables = nms, correlations = colMeans(b$t))
#> variables correlations
#> 1 Sepal.Length_Sepal.Width -0.1630634
#> 2 Sepal.Length_Petal.Length 0.8816130
#> 3 Sepal.Length_Petal.Width 0.8349194
#> 4 Sepal.Width_Petal.Length -0.3048541
#> 5 Sepal.Width_Petal.Width -0.2844628
#> 6 Petal.Length_Petal.Width 0.9361830
Created on 2023-01-28 with reprex v2.0.2
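If you also want confidence intervals, boot.ci can be applied to each statistic by its index (a sketch, assuming b and nms from above; the percentile type is just one choice):
cis <- lapply(seq_along(nms), function(i) boot.ci(b, type = "perc", index = i))
names(cis) <- nms
cis[["Sepal.Length_Sepal.Width"]] ## percentile CI for the first pair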
You may want to use cor.test to get theoretical t-values. We will use them for comparison with the B bootstrap t-values. (Recall: The p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.)
Here is a similar function to yours, but applying cor.test and extracting statistics.
corr_cmb <- \(X, boot=FALSE) {
  stts <- c('estimate', 'statistic', 'p.value')
  cmbn <- combn(colnames(X), 2, simplify=FALSE)
  a <- lapply(cmbn, \(x) as.data.frame(cor.test(X[, x[1]], X[, x[2]])[stts])) |>
    do.call(what=rbind) |>
    `rownames<-`(sapply(cmbn, paste, collapse=':'))
  if (boot) {
    a <- a[, 'statistic']
  }
  a
}
We run it once on the data to get the theoretical solution.
rhat <- corr_cmb(dat)
head(rhat, 3)
# estimate statistic p.value
# V1:V2 0.06780426 2.1469547 0.03203729
# V1:V3 0.03471587 1.0973752 0.27274212
# V1:V4 0.05301563 1.6771828 0.09381987
Bootstrap
We can assume from the start that the bootstrap with 1000 columns will run for a while (choose(1000, 2) returns 499500 combinations). That's why we think about a multithreaded solution right away.
To bootstrap, we simply apply corr_cmb repeatedly to samples of the data drawn with replacement.
We also measure the runtime in order to estimate the time needed for 1000 variables.
## setup clusters
library(parallel)
CL <- makeCluster(detectCores() - 1)
clusterExport(CL, c('corr_cmb', 'dat'))
t0 <- Sys.time() ## timestamp before run
B <- 1099L
clusterSetRNGStream(CL, 42)
boot_res <- parSapply(CL, 1:B, \(i) corr_cmb(dat[sample.int(nrow(dat), replace=TRUE), ], boot=TRUE))
t1 <- Sys.time() ## timestamp after run
stopCluster(CL)
After the bootstrap, we calculate the proportion of times the absolute (centered) bootstrap test statistics exceed the absolute theoretical ones (Ref.),
boot_p <- rowMeans(abs(boot_res - rowMeans(boot_res)) > abs(rhat$statistic))
and cbind the bootstrap p-values to the theoretical result.
cbind(rhat, boot_p)
# estimate statistic p.value boot_p
# V1:V2 0.06780426 2.1469547 0.03203729 0.03003003
# V1:V3 0.03471587 1.0973752 0.27274212 0.28028028
# V1:V4 0.05301563 1.6771828 0.09381987 0.08208208
# V1:V5 -0.01018682 -0.3218300 0.74764890 0.73473473
# V2:V3 0.03730133 1.1792122 0.23859474 0.23323323
# V2:V4 0.07203911 2.2817257 0.02271539 0.01201201
# V2:V5 0.03098230 0.9792363 0.32770055 0.30530531
# V3:V4 0.02364486 0.7471768 0.45513283 0.47547548
# V3:V5 -0.02864165 -0.9051937 0.36558126 0.38938939
# V4:V5 0.03415689 1.0796851 0.28054328 0.29329329
Note that the data used here are fairly normally distributed. If your data are not normally distributed, the bootstrap p-values will deviate more from the theoretical ones.
To conclude, an estimate of the time needed for your 1000 variables.
d <- as.numeric(difftime(t1, t0, units='mins'))
n_est <- 1000
t_est <- d/(choose(m, 2))*choose(n_est, 2)
cat(sprintf('est. runtime for %s variables: %s mins\n', n_est, round(t_est, 1)))
# est. runtime for 1000 variables: 1485.8 mins
(Perhaps for sake of completeness, a single-threaded version for smaller problems:)
## singlethreaded version
# set.seed(42)
# B <- 1099L
# boot_res <- replicate(B, corr_cmb(dat[sample.int(nrow(dat), replace=TRUE), ], boot=TRUE))
Data:
library(MASS)
n <- 1e3; m <- 5
Sigma <- matrix(.5, m, m)
diag(Sigma) <- 1
set.seed(42)
M <- mvrnorm(n, runif(m), Sigma)
M <- M + rnorm(length(M), sd=6)
dat <- as.data.frame(M)

Preparing Test/Train sets for Cross Validaton in a loop

I am trying to build test and train groups for doing cross-validation. I have a pool of 95 individual IDs in total and tried to get the task done like this:
# create 95 unique IDs as individuals
set.seed(1)
indv <- stringi::stri_rand_strings(95, 4)
# specify Kfold
n.folds <- 5
folds <- cut(1:length(indv), breaks = n.folds, labels = FALSE)
# randomise the folds
folds <- sample(folds, length(folds))
samples.train <- list()
samples.test <- list()
foldSet <- list()
kfold.df <- data.frame("IID" = indv)
for (f in 1:n.folds) {
  samples.train[[f]] <- indv[folds != f]
  samples.test[[f]] <- indv[folds == f]
  # set to "test" if the corresponding value is TRUE, and to "train" if it is FALSE
  foldSet[[f]] <- ifelse(kfold.df$IID %in% samples.test[[f]], "test", "train")
  # combine foldSet with the data frame
  kfold.df[[f]] <- cbind(kfold.df, foldSet[[f]])
}
The goal is to prepare 5 testing and training sets of samples to do the modeling. But I have encountered this error message:
Error in data.frame(..., check.names = FALSE) :
arguments imply differing number of rows: 95, 2
Besides, the foldSet output is not as expected, although samples.train and samples.test are correct. Could you please help me make this loop work?
UPDATE:
Here is the for-loop without using wildcards when creating foldSet:
for (f in 1:n.folds) {
  samples.train[[f]] <- indv[folds != f]
  samples.test[[f]] <- indv[folds == f]
  foldSet <<- ifelse(kfold.df$IID %in% samples.test[[f]], "test", "train")
  # combine foldSet with the data frame
  kfold.df <<- cbind(kfold.df, foldSet)
}
By executing the loop you will find kfold.df is a data frame listing the random test/train sets of all five folds. I expect each iteration to create the testing and training sets corresponding to that f, so that after five iterations I have access to each fold's training/testing sets for the next operations inside the loop, like kfold.df[foldSet == "train", "IID"]. I need this access because I want to use it for subsetting another, bigger matrix based on the train and test individuals of each fold, preparing it to be fed into the regression model. That's why I used the wildcards for foldSet, to make the loop create everything by itself, but I failed to manage it.
I think you may be overcomplicating things (which is something I do all the time...)
You don't need to go to great lengths to make what you are trying to make. This answer is broken down into three parts.
Building the data frame you're looking for (I think!)
Why you really don't need this data frame to be built
Why not use what's already out there?
Part 1
If I understand correctly, this is about what you're looking for (less the strings). I also included how you might use it with your actual data.
library(tidyverse)
giveMe <- function(rowCt, nfolds){
  # set.seed(235) # seed removed after establishing a working function,
  #               # to allow the expected randomness
  folds <- cut(1:rowCt, breaks = nfolds, labels = F)
  # randomise the folds
  folds <- sample(folds, length(folds))
  # create the folds' sets
  kfold.df <- map_dfc(1:nfolds,
                      ~ifelse(folds != .x, T, F)) %>%
    setNames(., paste0("foldSet_", 1:nfolds)) %>%  # name each field
    add_column(IID = 1:rowCt, .before = 1)         # add indices to the left
  return(kfold.df)  # return a data frame
}
given <- giveMe(95, 5)
giveMore <- giveMe(nrow(iris), 5) # uses the built-in iris data set
Part 2
You could just create your random fold sequence and use it with a model; you don't need to stack the folds in a data frame. You have to loop through the model the same number of times either way, so why not do it all at once?
folds <- sample(cut(1:nrow(iris), 5,  # no seed -- random on purpose
                    labels = F))
tellMe <- map(1:5,  # the folds start in col 2
              ~lm(Sepal.Length ~ .,
                  iris[ifelse(folds != .x, T, F),
                       1:4]))  # dropped 'Species' groups' issue
To check out the model performance:
map_dfr(1:5, .f = function(x){
  y = tellMe[[x]]
  sigma = sigma(y)
  rsq = summary(y)$adj.r.squared
  c(sigma = sigma, rsq = rsq)
})
# # A tibble: 5 × 2
# sigma rsq
# <dbl> <dbl>
# 1 0.334 0.844
# 2 0.309 0.869
# 3 0.302 0.846
# 4 0.330 0.847
# 5 0.295 0.872
Predict and inspect the testing performance
# create a list of the predicted values from the test data
showMe <- map(1:5,
              ~predict(tellMe[[.x]],
                       iris[ifelse(folds == .x, T, F), 1:4]))
# Grab comparable metrics like those from the models
map_dfr(1:5,
        .f = function(x){
          A = iris[ifelse(folds == x, T, F), ]$Sepal.Length
          P = showMe[[x]]
          sigma = sqrt(sum((A - P)^2) / length(A))
          rsq = cor(A, P)^2
          c(sigma = sigma, rsq = rsq)
        })
# # A tibble: 5 × 2
# sigma rsq
# <dbl> <dbl>
# 1 0.232 0.919
# 2 0.342 0.774
# 3 0.366 0.884
# 4 0.250 0.906
# 5 0.384 0.790
Part 3
Here I'm going to use the caret library. However, there are a lot of other options.
library(caret)
set.seed(1)
# split training and testing 70/30%
tr <- createDataPartition(iris$Species, p = .7, list = F)
# set up 5-fold val
trC <- trainControl(method = "cv", number = 5)
# train the model
fit <- train(Sepal.Length~., iris[tr, ],
method = "lm",
trControl = trC)
summary(fit)
# truncated results best model:
# Residual standard error: 0.2754 on 39 degrees of freedom
# Multiple R-squared: 0.9062, Adjusted R-squared: 0.8941
fit.p <- predict(fit, iris[-tr,])
postResample(fit.p, iris[-tr, ]$Sepal.Length)
# RMSE Rsquared MAE
# 0.2795920 0.8925574 0.2302402
If you want to see each of the folds' performance, you can do that, too.
fit$resample
# RMSE Rsquared MAE Resample
# 1 0.3629901 0.7911634 0.2822708 Fold1
# 2 0.3680954 0.8888947 0.2960464 Fold2
# 3 0.3508317 0.8394489 0.2709989 Fold3
# 4 0.2548549 0.8954633 0.1960375 Fold4
# 5 0.3396910 0.8661239 0.3187768 Fold5
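To tie this back to the 95 IDs from the question: once a fold assignment exists, each fold's train/test IDs can be used to subset the bigger matrix by row name. A minimal base R sketch (bigMat is a hypothetical matrix whose rows are named by the IDs):
set.seed(1)
indv  <- stringi::stri_rand_strings(95, 4)                        # the IDs from the question
folds <- sample(cut(seq_along(indv), breaks = 5, labels = FALSE))  # randomised fold labels
for (f in 1:5) {
  train.ids <- indv[folds != f]
  test.ids  <- indv[folds == f]
  ## subset the (hypothetical) bigger matrix by ID, then fit/evaluate your model
  # train.mat <- bigMat[train.ids, ]
  # test.mat  <- bigMat[test.ids, ]
}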

Function which outputs statistics for each variable combination

I want to write a function combinations_features(y, x) which goes through all combinations containing three variables and outputs R squared, adjusted R squared, AIC and BIC for each combination.
My solution
combinations_features <- function(y, x) {
  # Define empty vectors to store statistics
  feature_vec_1 <- feature_vec_2 <-
    feature_vec_3 <- feature_vec_4 <- c()
  # Obtaining all combinations containing three variables
  comb_names <- utils::combn(colnames(x), 3)
  # For each combination obtain wanted statistics
  for (i in 1:ncol(comb_names)) {
    feature_vec_1 <- append(
      feature_vec_1, summary(lm(y ~ ., data = x[, comb_names[, i]]))$adj.r.squared
    )
    feature_vec_2 <- append(
      feature_vec_2, summary(lm(y ~ ., data = x[, comb_names[, i]]))$r.squared
    )
    feature_vec_3 <- append(
      feature_vec_3, AIC(lm(y ~ ., data = x[, comb_names[, i]]))
    )
    feature_vec_4 <- append(
      feature_vec_4, BIC(lm(y ~ ., data = x[, comb_names[, i]]))
    )
  }
  # Assign everything into data frame
  data.frame(
    "Adj R2" = feature_vec_1, "R2" = feature_vec_2,
    "AIC" = feature_vec_3, "BIC" = feature_vec_4
  )
}
Let's see how it works - define some artificial data and give it to the function.
set.seed(42)
predictors <- data.frame(rnorm(100), runif(100), rexp(100), rpois(100, 1))
dependent <- rnorm(100)
> combinations_features(dependent, predictors)
Adj.R2 R2 AIC BIC
1 -0.0283756015 0.002787295 276.2726 289.2985
2 0.0000677269 0.030368705 273.4678 286.4937
3 -0.0011990695 0.029140296 273.5944 286.6203
4 0.0015404392 0.031796789 273.3204 286.3463
However, I find this code very inefficient due to two things:
(1) Loop: I looped over the columns of the matrix comb_names; I wonder if this can be avoided somehow.
(2) Length of the code: this code is huge, because I define a feature_vec for each statistic and append to each separately. I wonder if assigning to them can somehow be done with one command.
Could you please give me a hand with improving my code by telling me whether (1) or (2) is possible?
How about this, which relies on bind_rows() from dplyr (part of the tidyverse)? I don't think there's a way to avoid looping over the combinations, but lapply makes everything a little neater, IMHO.
library(dplyr)  # for bind_rows() and tibble()

combinations_features1 <- function(y, x) {
  comb_names <- utils::combn(colnames(x), 3)
  bind_rows(
    lapply(
      1:ncol(comb_names),
      function(z) {
        m <- lm(y ~ ., data = x[, comb_names[, z]])
        s <- summary(m)
        tibble(Adj.R2=s$adj.r.squared, R2=s$r.squared, AIC=AIC(m), BIC=BIC(m))
      }
    )
  )
}
combinations_features1(dependent, predictors)
# A tibble: 4 x 4
Adj.R2 R2 AIC BIC
<dbl> <dbl> <dbl> <dbl>
1 -0.0284 0.00279 276. 289.
2 0.0000677 0.0304 273. 286.
3 -0.00120 0.0291 274. 287.
4 0.00154 0.0318 273. 286.
bind_rows(), if given a list, binds the elements of the list into a single data.frame.
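For example (a minimal sketch with dplyr loaded):
library(dplyr)
bind_rows(list(tibble(AIC = 270, BIC = 280), tibble(AIC = 275, BIC = 286)))
## a 2-row tibble with columns AIC and BIC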
Same idea as above, just directly applying lapply to the list of combinations would also work:
combinations_features <- function(y, x){
  do.call(rbind, lapply(utils::combn(colnames(x), 3, simplify=FALSE),
                        function(i){
                          f1 <- lm(y ~ ., data=x[, i])
                          data.frame(Adj.R2=summary(f1)$adj.r.squared,
                                     R2=summary(f1)$r.squared,
                                     AIC=AIC(f1), BIC=BIC(f1))
                        }))
}

Rolling regression and prediction with lm() and predict()

I need to apply lm() to an enlarging subset of my data frame dat, while making a prediction for the next observation. For example, I am doing:
fit model predict
---------- -------
dat[1:3, ] dat[4, ]
dat[1:4, ] dat[5, ]
. .
. .
dat[-1, ] dat[nrow(dat), ]
I know what I should do for a particular subset (related to this question: predict() and newdata - How does this work?). For example to predict the last row, I do
dat1 = dat[1:(nrow(dat)-1), ]
dat2 = dat[nrow(dat), ]
fit = lm(log(clicks) ~ log(v1) + log(v12), data=dat1)
predict.fit = predict(fit, newdata=dat2, se.fit=TRUE)
How can I do this automatically for all subsets, and potentially extract what I want into a table?
From fit, I'd need the summary(fit)$adj.r.squared;
From predict.fit I'd need the predict.fit$fit value.
Thanks.
(Efficient) solution
This is what you can do:
p <- 3  ## number of parameters in lm()
n <- nrow(dat) - 1

## a function to return what you desire for subset dat[1:x, ]
bundle <- function(x) {
  fit <- lm(log(clicks) ~ log(v1) + log(v12), data = dat, subset = 1:x, model = FALSE)
  pred <- predict(fit, newdata = dat[x+1, ], se.fit = TRUE)
  c(summary(fit)$adj.r.squared, pred$fit, pred$se.fit)
}

## rolling regression / prediction
result <- t(sapply(p:n, bundle))
colnames(result) <- c("adj.r2", "prediction", "se")
Note I have done several things inside the bundle function:
I have used the subset argument to select a subset for fitting;
I have used model = FALSE so the model frame is not saved, which saves memory;
Overall, there is no obvious loop, but sapply is used.
Fitting starts from p, the minimum number of data points required to fit a model with p coefficients;
Fitting terminates at nrow(dat) - 1, as we need at least the last row for prediction.
Test
Example data (with 30 "observations")
dat <- data.frame(clicks = runif(30, 1, 100), v1 = runif(30, 1, 100),
v12 = runif(30, 1, 100))
Applying the code above gives the result (27 rows in total; output truncated to the first 5 rows):
adj.r2 prediction se
[1,] NaN 3.881068 NaN
[2,] 0.106592619 3.676821 0.7517040
[3,] 0.545993989 3.892931 0.2758347
[4,] 0.622612495 3.766101 0.1508270
[5,] 0.180462206 3.996344 0.2059014
The first column is the adjusted R-squared value for the fitted model, while the second column is the prediction. The first value of adj.r2 is NaN, because the first model we fit uses 3 coefficients for 3 data points, so no sensible statistic is available. The same happens to se: that fit passes exactly through its 3 points (all residuals are 0), so the prediction standard error cannot be estimated.
I just made up some random data to use for this example. I'm calling the object data because that was what it was called in the question at the time that I wrote this solution (call it anything you like).
(Efficient) Solution
data <- data.frame(v1=rnorm(100),v2=rnorm(100),clicks=rnorm(100))
data1 = data[1:(nrow(data)-1), ]
data2 = data[nrow(data), ]
for(i in 3:nrow(data)){
  nam  <- paste("predict", i, sep = "")
  nam1 <- paste("fit", i, sep = "")
  nam2 <- paste("summary_fit", i, sep = "")
  fit  <- lm(clicks ~ v1 + v2, data = data[1:i, ])
  tmp  <- predict(fit, newdata = data2, se.fit = TRUE)
  tmp1 <- fit
  tmp2 <- summary(fit)
  assign(nam, tmp)
  assign(nam1, tmp1)
  assign(nam2, tmp2)
}
All of the results you want will be stored in the data objects this creates.
For example:
> summary_fit10$r.squared
[1] 0.3087432
You mentioned in the comments that you'd like a table of results. You can programmatically create tables of results from the 3 types of output files like this:
rm(data, data1, data2, i, nam, nam1, nam2, fit, tmp, tmp1, tmp2)
frames <- ls()
frames.fit <- frames[1:98]        # change index or use pattern matching as needed
frames.predict <- frames[99:196]
frames.sum <- frames[197:294]

fit.table <- data.frame(intercept=NA, v1=NA, v2=NA, sourcedf=NA)
for(i in 1:length(frames.fit)){
  tmp <- get(frames.fit[i])
  fit.table <- rbind(fit.table,
                     c(tmp$coefficients[[1]], tmp$coefficients[[2]],
                       tmp$coefficients[[3]], frames.fit[i]))
}
fit.table
> fit.table
intercept v1 v2 sourcedf
2 -0.0647017971121678 1.34929652763687 -0.300502017324518 fit10
3 -0.0401617893034109 -0.034750571912636 -0.0843076273486442 fit100
4 0.0132968863522573 1.31283604433593 -0.388846211083564 fit11
5 0.0315113918953643 1.31099122173898 -0.371130010135382 fit12
6 0.149582794027583 0.958692838785998 -0.299479715938493 fit13
7 0.00759688947362175 0.703525856001948 -0.297223988673322 fit14
8 0.219756240025917 0.631961979610744 -0.347851129205841 fit15
9 0.13389223748979 0.560583832333355 -0.276076134872669 fit16
10 0.147258022154645 0.581865844000838 -0.278212722024832 fit17
11 0.0592160359650468 0.469842498721747 -0.163187274356457 fit18
12 0.120640756525163 0.430051839741539 -0.201725012088506 fit19
13 0.101443924785995 0.34966728554219 -0.231560038360121 fit20
14 0.0416637001406594 0.472156988919337 -0.247684504074867 fit21
15 -0.0158319749710781 0.451944113682333 -0.171367482879835 fit22
16 -0.0337969739950376 0.423851304105399 -0.157905431162024 fit23
17 -0.109460218252207 0.32206642419212 -0.055331391802687 fit24
18 -0.100560410735971 0.335862465403716 -0.0609509815266072 fit25
19 -0.138175283219818 0.390418411384468 -0.0873106257144312 fit26
20 -0.106984355317733 0.391270279253722 -0.0560299858019556 fit27
21 -0.0740684978271464 0.385267011513678 -0.0548056844433894 fit28

R: Creating a loop for two models?

I currently have the following code with two functions that calculate the model fit for two distinct models. The difference is in the lm() call, where + log(v2) has been added in model 2.
R code
dat <- data.frame(clicks = runif(30, 1, 100), v1 = runif(30, 1, 100), v2 = runif(30, 1, 100))
p0 <- 1 # number of parameters in lm()
p1 <- 2 # number of parameters in lm()
n <- nrow(dat) - 1
## Model 1 Loop
model1 <- function(x) {
  fit <- lm(log(clicks) ~ log(v1), data = dat, subset = 1:x, model = FALSE)
  pred <- predict(fit, newdata = dat[x+1, ])
  c(summary(fit)$r.squared)
}
## Model 1 Regression
result_m1 <- t(sapply(p0:n, model1))
data.frame(result_m1)
## Model 2 Loop
model2 <- function(x) {
  fit <- lm(log(clicks) ~ log(v1) + log(v2), data = dat, subset = 1:x, model = FALSE)
  pred <- predict(fit, newdata = dat[x+1, ])
  c(summary(fit)$r.squared)
}
## Model 2 Regression
result_m2 <- t(sapply(p1:n, model2))
data.frame(result_m2)
Question: Can I somehow create a function that implements a loop for the different models only, instead of repeating the calculation for every model?
I have something like this in mind but wasn't able to implement it: http://www.ats.ucla.edu/stat/r/pages/looping_strings.htm
I don't see the point in recreating something that can easily be done with the model-selection functions in available packages.
library(leaps)
library(dplyr)
b <- regsubsets(clicks ~ ., data=dat, nbest=10, nvmax=2) # carries out exhaustive model selection (10 best models; 2 maximum predictors)
coef(b, 1:3) # returns the coefficients for the 3 models in this case
[[1]]
(Intercept) v1
60.8067570 -0.2665699
[[2]]
(Intercept) v2
49.96974177 -0.05227489
[[3]]
(Intercept) v1 v2
62.02323816 -0.26422966 -0.02676747
summary(b)$rsq #provide r.squared value for 3 models
[1] 0.067952759 0.002366681 0.068568059
Running predictions is a tad more complicated.
all.mods <- summary(b)$which[,-1] # gives a logical matrix of predictor combinations
all.mods
v1 v2
1 TRUE FALSE
1 FALSE TRUE
2 TRUE TRUE
RHS <- lapply(seq(nrow(all.mods)), function(m) summary(b)$which[m,-1] %>% which %>% names %>% paste(., collapse="+"))
RHS
[[1]]
[1] "v1"
[[2]]
[1] "v2"
[[3]]
[1] "v1+v2"
lm.form <- lapply(RHS, function(m)parse(text=paste("lm(clicks ~", m, ", data=dat)")))
lm.mods <- lapply(lm.form, eval) # return list of all lm.mods generated
The list lm.mods can subsequently be used with predict on new data.
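For example, a minimal sketch of that last step (new.data here is a hypothetical data frame with columns v1 and v2):
new.data <- data.frame(v1 = runif(5, 1, 100), v2 = runif(5, 1, 100))
preds <- lapply(lm.mods, predict, newdata = new.data) ## one prediction vector per model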
