I am new to R and am having difficulties using "if" and a "for" loop; sorry if this is a duplicate.
As you can see in the chunk of code below, I try to create 100 lm models and save a model whenever its R (the Pearson correlation between predicted and observed values) is greater than 0.7.
However, the code saved all 100 lm models.
I suspect the condition (!is.na(lm.cv.r[i]) > 0.70) is wrong, but I cannot figure out why.
# let's use USArrests data as an example
data("USArrests")
head(USArrests)
df.norm <- USArrests
set.seed(100)
lm.cv.mse <- NULL
lm.cv.r <- NULL
k <- 100
for (i in 1:k) {
  # 80/20 train/test split
  index.cv <- sample(1:nrow(df.norm), round(0.8 * nrow(df.norm)))
  df.cv.train <- df.norm[index.cv, ]
  df.cv.test <- df.norm[-index.cv, ]
  # glm() with the default gaussian family, equivalent to lm() here
  lm.cv <- glm(Rape ~ ., data = df.cv.train)
  lm.cv.predicted <- predict(lm.cv, df.cv.test)
  lm.cv.mse[i] <- sum((df.cv.test$Rape - lm.cv.predicted)^2) / nrow(df.cv.test)
  lm.cv.r[i] <- as.numeric(round(cor(lm.cv.predicted, df.cv.test$Rape, method = "pearson"), digits = 3))
  if (!is.na(lm.cv.r[i]) > 0.70) {
    saveRDS(lm.cv, file = paste("lm.cv", lm.cv.r[i], ".rds", sep = ''))
  }
}
I'm not familiar with lm, so I will assume your code is working and that the problem is, as you said, the if statement.
Try this out:
if (lm.cv.r[i] > 0.7 & !is.na(lm.cv.r[i])) {
  saveRDS(lm.cv, file = paste("lm.cv", lm.cv.r[i], ".rds", sep = ''))
}
So, in your code, look at
(!is.na(lm.cv.r[i]) > 0.70)
and in particular at !is.na(lm.cv.r[i]). Assuming lm.cv.r[i] is a value (or a set of values), applying !is.na returns TRUE because lm.cv.r[i] is not an NA value. So the condition you are actually evaluating is "if TRUE > 0.70", and since TRUE is coerced to 1 in the comparison, 1 > 0.70 is TRUE.
In conclusion, you are saving every model because the if condition is TRUE on every iteration.
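A quick demonstration of that coercion (the value 0.5 here is just for illustration):

!is.na(0.5)        # TRUE: 0.5 is not NA
!is.na(0.5) > 0.7  # TRUE: the logical TRUE is coerced to 1, and 1 > 0.7
0.5 > 0.7          # FALSE: the comparison you actually want

If you prefer a one-liner that also handles the NA case, if (isTRUE(lm.cv.r[i] > 0.7)) works as well, since isTRUE(NA) is FALSE.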
I need to open files (matrices) from a directory and apply a function pca to each one. pca uses another function, count_pc, which is meant to zero out the diagonals of the matrix step by step and add the recalculated PC1 to the table pcs from the calling function. At first I didn't think about environments, so count_pc crashed with an "unknown variable" error. Then I tried to do it this way:
library(plyr)  # for ldply()

files <- list.files()

count_pc <- function(x, env = parent.frame()) {
  diag(file[x:nrow(file), ]) <- 0
  diag(file[, x:nrow(file)]) <- 0
  pcn <- prcomp(file, scale = FALSE)
  pcn <- data.frame(pcn$rotation)
  pcs <- cbind(pcs, pcn$PC1)
}

pca <- function(filename) {
  file <- as.matrix(read.table(filename))
  pc <- prcomp(file, scale = FALSE)
  pc <- data.frame(pc$rotation)
  pc1 <- pc$PC1
  pcs <- data.frame(pc1)
  for (k in 1:40) {
    count_pc(k)
  }
  new_filename <- strsplit(filename, "_")[[1]][3]
  print(pcs)
  colnames(pcs) <- paste0(0:40, rep("_bins_deleted", 40))
  write.table(pcs, file = paste(new_filename, "eigenvectors", sep = "_"))
  return(apply(pcs, 2, cor, y = pc1))
}

ldply(files, pca)
And indeed, count_pc no longer crashes with that error but, unfortunately, it crashes with a new one:

Error in `colnames<-`(`*tmp*`, value = c("0_bins_deleted", "1_bins_deleted",  :
  'names' attribute [41] must be the same length as the vector [1]

which means that count_pc does not change the variables it needs to. At first I thought the problem might be connected with using sapply(1:40, count_pc), so I replaced it with a for loop, but that didn't help. I've also tried using environment(count_pc) <- environment() inside pca, but that didn't help either (nor did changing the variable names in count_pc to env$'name'). I don't know what to do, and googling doesn't seem to help.
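(For reference, the usual way around this kind of cross-environment assignment is not to touch the caller's variables at all: pass the matrix and the accumulated table in as arguments and capture the return value. A minimal sketch of that idea, keeping the original zeroing and PCA steps but renaming nothing else; whether the zeroing should accumulate from one iteration to the next is left open:

count_pc <- function(x, file, pcs) {
  # operate on the local copy of 'file': zero the bands starting at row/column x
  diag(file[x:nrow(file), ]) <- 0
  diag(file[, x:nrow(file)]) <- 0
  pcn <- data.frame(prcomp(file, scale. = FALSE)$rotation)
  cbind(pcs, pcn$PC1)   # return the grown table to the caller
}

pca <- function(filename) {
  file <- as.matrix(read.table(filename))
  pc1 <- data.frame(prcomp(file, scale. = FALSE)$rotation)$PC1
  pcs <- data.frame(pc1)
  for (k in 1:40) {
    pcs <- count_pc(k, file, pcs)   # capture the returned table each time
  }
  pcs
}

If the zeroing is meant to accumulate across iterations, have count_pc return list(file = file, pcs = pcs) and reassign both inside the loop.)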
Not a reproducible example, but here is my problem. I have a data frame and am looping through a series of columns and performing forecasting. I've created a function that handles the forecasting part, as it's repetitive, and placed it within the for loop. So on each iteration of the loop, forecast_func is used to perform the forecast and accuracy assessment, and to rbind those results onto the result data frame.
result = data.frame()

forecast_func = function(data){
  data.train = rnorm(100)
  data.test = rnorm(10)
  library(forecast)
  mod = auto.arima(data.train)
  mod_forc = forecast(mod)$mean
  mod_acc = accuracy(mod_forc, data.test)
  result = rbind(result, data.frame(mod_forc))
}

result
for (i in ...) {
  ...
  forecast_func(data)
  ...
}
If I were doing this once, I know how to append the results to result. However, within a for loop, every time I use the function I get an empty data frame; obviously the result isn't making it back to the global environment. I was wondering what some useful strategies for attacking this problem would be.
I believe you can use the superassignment operator here (<<-), which assigns to result in the function's enclosing environment (the global environment in this case):
forecast_func = function(data){
  data.train = rnorm(100)
  data.test = rnorm(10)
  library(forecast)
  mod = auto.arima(data.train)
  mod_forc = forecast(mod)$mean
  mod_acc = accuracy(mod_forc, data.test)
  # assign to the 'result' data frame in the enclosing (here, global) environment
  result <<- rbind(result, data.frame(mod_forc))
}
result = data.frame()

for (i in ...) {
  ...
  forecast_func(data)
  ...
}
That said, using the superassignment operator is usually frowned upon, based on what I have seen here on SO and elsewhere.
One way to avoid using <<- is to refactor your forecast_func to return a data frame, instead of trying to persist the result internally:
forecast_func = function(data){
  data.train = rnorm(100)
  data.test = rnorm(10)
  library(forecast)
  mod = auto.arima(data.train)
  mod_forc = forecast(mod)$mean
  mod_acc = accuracy(mod_forc, data.test)
  return(data.frame(mod_forc))
}

result = data.frame()

for (i in ...) {
  ...
  # just rbind() here in the calling scope
  result = rbind(result, forecast_func(data))
  ...
}
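Growing a data frame with rbind() inside a loop also copies result on every iteration, so if you have many series you may prefer to collect the pieces first and bind them once at the end. A small sketch of that pattern (the list columns_to_forecast is a made-up stand-in for however you iterate over your columns):

library(forecast)

forecast_func = function(data) {
  data.train = rnorm(100)   # placeholders, as in the question
  data.test  = rnorm(10)
  mod      = auto.arima(data.train)
  mod_forc = forecast(mod)$mean
  data.frame(mod_forc)
}

columns_to_forecast = list(a = 1, b = 2)             # hypothetical iteration set
pieces = lapply(columns_to_forecast, forecast_func)  # one data frame per column
result = do.call(rbind, pieces)                      # bind them all at once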
I would like to bootstrap the residuals of nls fits in a loop. I use nlsBoot, and in order to decrease computation time I would like to do it in parallel (on a Windows 7 system at the moment). Here is some code which reproduces my problem:
# function for fitting
Falge2000 <- function(GP2000, alpha, PAR) {
  (GP2000 * alpha * PAR) / (GP2000 + alpha * PAR - GP2000 / 2000 * PAR)
}

# some data
PAR <- 10:1600
GPP <- Falge2000(-450, -0.73, PAR) + rnorm(length(PAR), sd = 0.0001)
df1 <- data.frame(PAR, GPP)

# nls fit
mod <- nls(GPP ~ Falge2000(GP2000, alpha, PAR), start = list(GP2000 = -450, alpha = -0.73),
           data = df1, upper = c(0, 0), algorithm = "port")

# bootstrap of residuals
library(nlstools)
summary(nlsBoot(mod, niter = 5))
# works

# now do it several times, and in parallel
library(foreach)
library(doParallel)
cl <- makeCluster(1)
registerDoParallel(cl)

ttt <- foreach(1:5, .packages = 'nlstools', .export = "df1") %dopar% {
  res <- nlsBoot(mod, niter = 5)
  summary(res)
}
# Error in { :
#   task 1 failed - "Procedure aborted: the fit only converged in 1 % during bootstrapping"

stopCluster(cl)
I suspect this is an issue with environments, and after looking at the code of nlsBoot the problem seems to arise from the use of an anonymous function in an lapply call:
l1 <- lapply(1:niter, function(i) {
data2[, var1] <- fitted1 + sample(scale(resid1, scale = FALSE),
replace = TRUE)
nls2 <- try(update(nls, start = as.list(coef(nls)), data = data2),
silent = TRUE)
if (inherits(nls2, "nls"))
return(list(coef = coef(nls2), rse = summary(nls2)$sigma))
})
if (sum(sapply(l1, is.null)) > niter/2)
stop(paste("Procedure aborted: the fit only converged in",
round(sum(sapply(l1, is.null))/niter), "% during bootstrapping"))
Is there a way to use nlsBoot in a parallel loop? Or do I need to modify the function? (I could try to use a for loop instead of lapply.)
By moving the creation of the mod object into the %dopar% loop, everything looks to work OK. Moving it inside also means the df1 object is exported automatically, so you can remove the .export argument.
ttt <- foreach(1:5, .packages = 'nlstools') %dopar% {
  mod <- nls(GPP ~ Falge2000(GP2000, alpha, PAR), start = list(GP2000 = -450, alpha = -0.73),
             data = df1, upper = c(0, 0), algorithm = "port")
  res <- nlsBoot(mod, niter = 5)
  capture.output(summary(res))
}
However, you might need to work out what you want returned. Using capture.output was just to see if things were working, since summary(res) seemed to only return NULL.
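If what you actually want to keep are the bootstrap estimates themselves rather than printed output, one option is a sketch along these lines (assuming the coefboot component of the nlsBoot object, which holds the matrix of bootstrapped parameter estimates, is what you're after):

ttt <- foreach(i = 1:5, .packages = 'nlstools') %dopar% {
  mod <- nls(GPP ~ Falge2000(GP2000, alpha, PAR), start = list(GP2000 = -450, alpha = -0.73),
             data = df1, upper = c(0, 0), algorithm = "port")
  res <- nlsBoot(mod, niter = 5)
  res$coefboot  # matrix of bootstrapped GP2000 and alpha estimates
}
# stack the runs into one matrix of parameter draws
all_draws <- do.call(rbind, ttt)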
I am trying to apply a function I wrote that uses the 'pls' package to make a model and then uses it to predict several test sets (in this case 9), returning the R2, RMSEP, and prediction bias of each test set, for n subsets selected from the data frame.
The function is:
cpo <- function(data, newdata1, newdata2, newdata3, newdata4, newdata5,
                newdata6, newdata7, newdata8, newdata9) {
  data.pls <- plsr(protein ~ ., 8, data = data, validation = "LOO") # making a pls model
  newdata1.pred <- predict(data.pls, 8, newdata = newdata1) # using the model to predict the test sets
  newdata2.pred <- predict(data.pls, 8, newdata = newdata2)
  newdata3.pred <- predict(data.pls, 8, newdata = newdata3)
  newdata4.pred <- predict(data.pls, 8, newdata = newdata4)
  newdata5.pred <- predict(data.pls, 8, newdata = newdata5)
  newdata6.pred <- predict(data.pls, 8, newdata = newdata6)
  newdata7.pred <- predict(data.pls, 8, newdata = newdata7)
  newdata8.pred <- predict(data.pls, 8, newdata = newdata8)
  newdata9.pred <- predict(data.pls, 8, newdata = newdata9)
  pred.bias1 <- mean(newdata1.pred - newdata1[742]) # calculating the prediction bias
  pred.bias2 <- mean(newdata2.pred - newdata2[742])
  pred.bias3 <- mean(newdata3.pred - newdata3[742]) # [742] reference values in column 742
  pred.bias4 <- mean(newdata4.pred - newdata4[742])
  pred.bias5 <- mean(newdata5.pred - newdata5[742])
  pred.bias6 <- mean(newdata6.pred - newdata6[742])
  pred.bias7 <- mean(newdata7.pred - newdata7[742])
  pred.bias8 <- mean(newdata8.pred - newdata8[742])
  pred.bias9 <- mean(newdata9.pred - newdata9[742])
  r <- c(R2(data.pls, "train"), RMSEP(data.pls, "train"), pred.bias1,
         pred.bias2, pred.bias3, pred.bias4, pred.bias5, pred.bias6,
         pred.bias7, pred.bias8, pred.bias9)
  return(r)
}
Selecting n subsets (based on an answer to my question "Select several subsets by taking different row intervals and apply a function to all subsets") and applying the cpo function to each subset, I tried the following.
Edited based on @Gavin's advice:
FOO3 <- function(data, nSubsets, nSkip){
  outList <- vector("list", 11)
  names(outList) <- c("R2train", "RMSEPtrain", paste("bias", 1:9, sep = ""))
  sub <- vector("list", length = nSubsets) # sub holds the n subsets created by selecting rows
  names(sub) <- paste("sub", 1:nSubsets, sep = "")
  totRow <- nrow(data)
  for (i in seq_len(nSubsets)) {
    rowsToGrab <- seq(i, totRow, nSkip)
    sub[[i]] <- data[rowsToGrab, ]
  }
  for (i in sub) { # for every subset in sub I want to apply cpo
    outList[[i]] <- cpo(data = sub, newdata1 = gag11p, newdata2 = gag12p, newdata3 = gag13p,
                        newdata4 = gag21p, newdata5 = gag22p, newdata6 = gag23p,
                        newdata7 = gag31p, newdata8 = gag32p, newdata9 = gag33p) # the newdata are test sets loaded in the workspace
  }
  return(outList)
}

FOO3(GAGp, 10, 10)
When I try this I keep getting 'Error in eval(expr, envir, enclos) : object 'protein' not found'.
protein is used in the plsr formula of cpo and is a column in the data set.
I then tried to use the plsr function directly, as seen below:
FOO4 <- function(data, nSubsets, nSkip){
  outList <- vector("list", 11)
  names(outList) <- c("R2train", "RMSEPtrain", paste("bias", 1:9, sep = ""))
  sub <- vector("list", length = nSubsets)
  names(sub) <- paste("sub", 1:nSubsets, sep = "")
  totRow <- nrow(data)
  for (i in seq_len(nSubsets)) {
    rowsToGrab <- seq(i, totRow, nSkip)
    sub[[i]] <- data[rowsToGrab, ]
  }
  cal <- vector("list", length = nSubsets) # for each subset in sub make a pls model for protein
  names(cal) <- paste("cal", 1:nSubsets, sep = "")
  for (i in sub) {
    cal[[i]] <- plsr(protein ~ ., 8, data = sub, validation = "LOO")
  }
  return(outList) # return is just used to end the script and check whether the error still occurs
}

FOO4(gagpm, 10, 10)
When I tried this I got the same error: 'Error in eval(expr, envir, enclos) : object 'protein' not found'.
Any advice on how to deal with this and make the function work would be much appreciated.
I suspect the problem is immediately at the start of FOO3():
FOO3 <- function(data, nSubsets, nSkip) {
outList <- vector("list", r <- c(R2(data.pls,"train"), RMSEP(data.pls,"train"),
pred.bias1, pred.bias2, pred.bias3, pred.bias4, pred.bias5,
pred.bias6, pred.bias7, pred.bias8, pred.bias9))
Not sure what you are trying to do when creating outList, but vector() has two arguments and you seem to be assigning to r a vector of numerics that you want R to use as the length argument to vector().
Here you are using the object data.pls and this doesn't exist yet - and never will in the frame of FOO3() - it is only ever created in cpo().
Your second loop looks totally wrong - you are not assigning the output from cpo() to anything. I suspect you wanted:
outList <- vector("list", 11)
names(outList) <- c("R2train","RMSEPtrain", paste("bias", 1:9, sep = ""))
....
for(i in subset) {
outList[[i]] <- cpo(....)
}
return(outList)
But that depends on what subset is etc. You also haven't got the syntax for this loop right. You have
for(i in(subset)) {
when it should be
for(i in subset) {
And subset and data aren't great names as these are common R functions and modelling arguments.
There are lots of problems with your code. Try to start simple and build up from there.
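For completeness, a minimal sketch of what that loop might look like once those issues are addressed, assuming sub is the list of subsets built earlier in the function and the gag*p test sets exist in the workspace (index the list rather than iterating over its elements, and pass one subset at a time):

outList <- vector("list", length(sub))   # one result per subset
for (i in seq_along(sub)) {
  outList[[i]] <- cpo(data = sub[[i]],   # pass a single subset, not the whole list
                      newdata1 = gag11p, newdata2 = gag12p, newdata3 = gag13p,
                      newdata4 = gag21p, newdata5 = gag22p, newdata6 = gag23p,
                      newdata7 = gag31p, newdata8 = gag32p, newdata9 = gag33p)
}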
I have managed to achieve what I wanted using the code below; if there is a better way of doing it (I'm sure there must be), I'm eager to learn. This function performs the following tasks:
1. Select n subsets from a data frame.
2. For each subset created, fit a plsr model.
3. Use each plsr model to predict 9 test sets.
4. For each prediction, calculate the prediction bias.
far5 <- function(data, nSubsets, nSkip){
  sub <- vector("list", length = nSubsets)
  names(sub) <- paste("sub", 1:nSubsets, sep = "")
  totRow <- nrow(data)
  for (i in seq_len(nSubsets)) {
    rowsToGrab <- seq(i, totRow, nSkip)
    sub[[i]] <- data[rowsToGrab, ]   # sub holds the subsets created
  }
  mop <- lapply(sub, cpr2)                            # apply cpr2 to each subset, collect the output in mop
  names(mop) <- paste("mop", seq_len(nSubsets), sep = "")
  return(mop)
}
Call: far5(data, nSubsets, nSkip)
The first part, selecting the subsets, is based on the answer to my question "Select several subsets by taking different row intervals and apply a function to all subsets".
I was then able to apply the function cpr2 to the subsets using lapply instead of the for loop used previously.
cpr2 is a modification of cpo for which only data is supplied; the new data to be predicted are used directly inside the function, as shown below.
cpr2 <- function(data){
  data.pls <- plsr(protein ~ ., 8, data = data, validation = "LOO") # make plsr model
  gag11p.pred <- predict(data.pls, 8, newdata = gag11p) # predict each test set
  gag12p.pred <- predict(data.pls, 8, newdata = gag12p)
  gag13p.pred <- predict(data.pls, 8, newdata = gag13p)
  gag21p.pred <- predict(data.pls, 8, newdata = gag21p)
  gag22p.pred <- predict(data.pls, 8, newdata = gag22p)
  gag23p.pred <- predict(data.pls, 8, newdata = gag23p)
  gag31p.pred <- predict(data.pls, 8, newdata = gag31p)
  gag32p.pred <- predict(data.pls, 8, newdata = gag32p)
  gag33p.pred <- predict(data.pls, 8, newdata = gag33p)
  pred.bias1 <- mean(gag11p.pred - gag11p[742]) # calculate prediction bias
  pred.bias2 <- mean(gag12p.pred - gag12p[742])
  pred.bias3 <- mean(gag13p.pred - gag13p[742])
  pred.bias4 <- mean(gag21p.pred - gag21p[742])
  pred.bias5 <- mean(gag22p.pred - gag22p[742])
  pred.bias6 <- mean(gag23p.pred - gag23p[742])
  pred.bias7 <- mean(gag31p.pred - gag31p[742])
  pred.bias8 <- mean(gag32p.pred - gag32p[742])
  pred.bias9 <- mean(gag33p.pred - gag33p[742])
  r <- signif(c(pred.bias1, pred.bias2, pred.bias3, pred.bias4, pred.bias5,
                pred.bias6, pred.bias7, pred.bias8, pred.bias9), 2) # signif() rounds the biases to 2 significant figures
  out <- c(R2(data.pls, "train", ncomp = 8), RMSEP(data.pls, "train", ncomp = 8), r)
  return(out)
}
Call: cpr2(data)
I was able to use this to solve my problem; however, since the number of new data sets to be predicted was only nine, it was possible to list them out as I did. If there is a more generalized way to do this, I'm interested in learning it.
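One more generalized sketch, under the assumption that the test sets are gathered into a named list first (the names cpr3, test_sets, and ref_col are made up for illustration), so the number of test sets no longer has to be hard-coded:

# assumes library(pls) is loaded, as above
cpr3 <- function(data, test_sets, ref_col = 742, ncomp = 8) {
  data.pls <- plsr(protein ~ ., ncomp, data = data, validation = "LOO")
  # prediction bias for every test set in the list
  bias <- sapply(test_sets, function(nd) mean(predict(data.pls, ncomp, newdata = nd) - nd[[ref_col]]))
  c(R2(data.pls, "train", ncomp = ncomp), RMSEP(data.pls, "train", ncomp = ncomp), signif(bias, 2))
}

# collect the existing test sets into a named list once
test_sets <- list(gag11p = gag11p, gag12p = gag12p, gag13p = gag13p,
                  gag21p = gag21p, gag22p = gag22p, gag23p = gag23p,
                  gag31p = gag31p, gag32p = gag32p, gag33p = gag33p)
# then, with sub being the list of subsets (as built inside far5):
mop <- lapply(sub, cpr3, test_sets = test_sets)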