This is the first time I use mclapply to run a script in parallel on multiple processes. I tried the script on my laptop and it worked very well and filled the data frame correctly, but now when I run it on my office PC, once the printing ends and it is time to collect the data, the script stops with this error:
Error: cannot allocate vector of size 80 Kb
library(parallel) # mclapply() comes from here

fun <- function(testdf) {
  l = 12000
  errordf = data.frame()
  errordf <- mclapply(1:15000, function(i)   # was 1:nrow(15000); nrow() of a plain number is NULL, so 15000 iterations are assumed here
  {
    for (ind in 1:nrow(testdf))
    {
      if (i >= l/2) {
        testdf[ind, ]$X = testdf[ind, ]$pos * 2
      } else {
        testdf[ind, ]$X = testdf[ind, ]$pos / l
      }
    }
    permdf <- testdf
    lapply(1:100, function(j)
    {
      permdf$X <- sample(permdf$X, nrow(permdf), replace = FALSE)
      fit = lm(X ~ gx, permdf)             # linear regression calculation
      regerror = sum(residuals(fit)^2)     # residual sum of squares
      data.frame(pc = i, error = regerror)
    })
  }, mc.cores = 3)
  res <- NULL
  tmp <- lapply(errordf, function(ii) {
    tmp <- lapply(ii, function(ij) {       # rbind the data and return the data frame
      res <<- rbind(res, ij)
    })
  })
  return(res)
}
testdf example:
structure(list(ax = c(-0.0242214, 0.19770304, 0.01587302, -0.0374415,
0.05079826, 0.12209738), gx = c(-0.3913043, -0.0242214, -0.4259067,
-0.725, -0.0374415, 0.01587302), pos = c(11222, 13564, 16532,
12543, 12534, 14354)), .Names = c("ax", "gx", "pos"), row.names = c(NA,
-6L), class = "data.frame")
I'm sure the code itself works (that's why I did not include the full script), because I tried it multiple times on my laptop, but when I try it on my office PC it throws this error.
Any help would be appreciated.
Right now you don't use lapply as intended in your last double-nested lapply loop; you might as well use a for loop instead of lapply combined with a global variable. In addition, you continuously grow res, which is rather memory- and time-intensive. Normally an lapply loop would not suffer from this problem, but your use of a global variable completely negates its advantage. You seem to have a doubly nested list that you want to rbind. I would definitely not loop over the data structure; I would use something along the lines of do.call("rbind", data_structure) to do this, although it is hard to give concrete advice without a reproducible example. That approach does not suffer from the continuous-growing problem you are experiencing.
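To make the idea concrete, here is a toy sketch of the do.call("rbind", ...) approach on a made-up doubly nested list with the same shape as what mclapply returns here (not your actual errordf):
# toy stand-in: outer list over i, inner list over the permutations, each element a one-row data.frame
errordf <- lapply(1:3, function(i) {
  lapply(1:4, function(j) data.frame(pc = i, error = runif(1)))
})
# bind each inner list, then bind those results, without ever growing an object in place
res <- do.call("rbind", lapply(errordf, function(ii) do.call("rbind", ii)))
res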
A few days ago I asked a question about calling a custom-made function within a loop, which was nicely resolved by a combination of
eval(parse(text = Function text))
Here is the link: Automatic creation and use of custom made function in R.
This allowed me to work with a for loop and automatically call the function I need from a data frame storing the body of the function to create.
Now I would like to take the question to the next level. My problem is computation time. I need to evaluate something like 52 indices from a hyperspectral image. This means that in R my hyperspectral image is loaded as a 3D array of 512x512x204 bands.
What I would like to do is run the evaluation of the indices in parallel to reduce the computation time.
Here is a dummy example of what I would like to emulate, but without parallel computing:
library(fields) # image.plot() comes from the fields package
# create a fake array representing my hyperspectral image
HYPR_IMG=array(NA,dim=c(5,3,4))
HYPR_IMG[,,1]=1
HYPR_IMG[,,2]=2
HYPR_IMG[,,3]=3
HYPR_IMG[,,4]=4
image.plot(HYPR_IMG[,,1], zlim=c(0,20))
image.plot(HYPR_IMG[,,2], zlim=c(0,20))
image.plot(HYPR_IMG[,,3], zlim=c(0,20))
image.plot(HYPR_IMG[,,4], zlim=c(0,20))
#create a fake DF for simulating my indices stored in the dataframe
IDXname=c("IDX1","IDX2","IDX3","IDX4")
IDXFunc=c("HYPR_IMG[,,1] + 3*HYPR_IMG[,,2]",
"HYPR_IMG[,,3] + HYPR_IMG[,,2]",
"HYPR_IMG[,,4] + HYPR_IMG[,,2] - HYPR_IMG[,,3]",
"HYPR_IMG[,,1] + HYPR_IMG[,,4] + 4*HYPR_IMG[,,2] + HYPR_IMG[,,3]")
IDX_DF=as.data.frame(cbind(IDXname,IDXFunc))
# that was what I did before
Store_DF=data.frame(NA)
for (i in 1: length(IDX_DF$IDXname)) {
IDX_ID=IDX_DF$IDXname[i]
IDX_Fun_tmp=IDX_DF$IDXFunc[which(IDX_DF$IDXname==IDX_ID)] #extra care to select the right function
IDXFunc_call=paste("IDXfun_tmp=function(HYPR_IMG){",IDX_Fun_tmp,"}",sep="")
eval(parse(text = IDXFunc_call))
IDX_VAL=IDXfun_tmp (HYPR_IMG)
image.plot(IDX_VAL,zlim=c(0,20)); title(main=IDX_ID)
temp_DF=as.vector(IDX_VAL)
Store_DF=cbind(Store_DF,temp_DF)
names(Store_DF)[i+1] <- as.vector(IDX_ID)
}
My final goal is to have the very same Store_DF, storing all the index values. Here I have a for loop, but using a foreach loop things should speed up. If needed, I am working with Windows 8 or later as OS.
Is it really possible?
Will I be able, in the end, to reduce the overall computation time and still get the same Store_DF data frame, or something similar like a matrix with column names?
Thanks a lot!!!
For this specific example, using either the built-in parallelization of a package like data.table or a parallel apply might be more beneficial.
Below is a minimal example of how to achieve the result using parApply from the parallel package. Note the output is a matrix, which actually yields slightly better performance in base R (not necessarily the case in the tidyverse or data.table). In case the data.frame structure is vital, you will have to convert it with as.data.frame.
cl <- parallel::makeCluster( parallel::detectCores() )
result <- parallel::parApply(cl = cl, X = IDX_DF, MARGIN = 1, FUN = function(x, IMAGES){
IDX_ID <- x[["IDXname"]]
eval(parse(text = paste0("IDXfun_tmp <- function(HYPR_IMG){", x[["IDXFunc"]], "}")))
IDX_VAL <- as.vector(IDXfun_tmp(IMAGES))
names(IDX_VAL) <- IDX_ID
IDX_VAL
}, IMAGES = HYPR_IMG)
colnames(result) = IDXname
parallel::stopCluster(cl)
Please note the stopCluster(cl), which is important to shut down any lingering R sessions.
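If you create the cluster inside a function, one possible pattern (just a sketch with a hypothetical wrapper name, not part of the answer above) is to register the shutdown with on.exit, so the cluster is stopped even when an error occurs:
run_indices <- function(IDX_DF, HYPR_IMG) {  # hypothetical wrapper, for illustration only
  cl <- parallel::makeCluster(parallel::detectCores())
  on.exit(parallel::stopCluster(cl), add = TRUE)  # runs on normal exit and on error
  res <- parallel::parApply(cl = cl, X = IDX_DF, MARGIN = 1, FUN = function(x, IMAGES){
    eval(parse(text = paste0("IDXfun_tmp <- function(HYPR_IMG){", x[["IDXFunc"]], "}")))
    as.vector(IDXfun_tmp(IMAGES))
  }, IMAGES = HYPR_IMG)
  colnames(res) <- as.character(IDX_DF$IDXname)
  res
}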
Benchmark results (4 tiny cores):
Unit: milliseconds
expr min lq mean median uq max neval
Loop 8.420432 9.027583 10.426565 9.272444 9.943783 26.58623 100
Parallel 1.382324 1.491634 2.038024 1.554690 1.907728 18.23942 100
For replications of benchmarks the code has been provided below:
cl <- parallel::makeCluster( parallel::detectCores() )
microbenchmark::microbenchmark(
Loop = {
Store_DF=data.frame(NA)
for (i in 1: length(IDX_DF$IDXname)) {
IDX_ID = IDX_DF$IDXname[i]
IDX_Fun_tmp = IDX_DF$IDXFunc[which(IDX_DF$IDXname == IDX_ID)] #use for extra care to select the right function
eval(parse(text = paste0("IDXfun_tmp = function(HYPR_IMG){", IDX_Fun_tmp, "}")))
IDX_VAL = IDXfun_tmp(HYPR_IMG)
      #Plotting in parallel is not a good idea. It will most often not work, and it might make the R session crash or slow down significantly (at best the latter, at worst the former)
#image.plot(IDX_VAL, zlim = c(0,20)); title(main = IDX_ID)
temp_DF = as.vector(IDX_VAL)
Store_DF = cbind(Store_DF,temp_DF)
names(Store_DF)[i+1] <- as.vector(IDX_ID)
}
rm(Store_DF)
},
Parallel = {
result <- parallel::parApply(cl = cl, X = IDX_DF, MARGIN = 1, FUN = function(x, IMAGES){
IDX_ID <- x[["IDXname"]]
eval(parse(text = paste0("IDXfun_tmp <- function(HYPR_IMG){", x[["IDXFunc"]], "}")))
IDX_VAL <- as.vector(IDXfun_tmp(IMAGES))
names(IDX_VAL) <- IDX_ID
IDX_VAL
}, IMAGES = HYPR_IMG)
colnames(result) = IDXname
rm(result)
}
)
parallel::stopCluster(cl)
Edit: Using the foreach package
After a few comments about performance issues (maybe due to memory), I decided to illustrate how one could obtain the same result using the foreach package. A few notes:
The foreach package uses iterators. By default it can be used like a for loop, where it will iterate over each column in a data.frame.
Like other parallel implementations in R, if you are on Windows you will often have to export the data used in the calculations. This can sometimes be avoided with some fiddling, and foreach will sometimes let you get away without exporting data; when exactly that is the case is unclear from the documentation.
The output of foreach will be combined either into a list or as defined by the .combine argument, which can be rbind, cbind or any other function.
There are a lot of comments, making the code seem much longer than it actually is. Removing comments and blank lines, it is only 9 lines longer.
Below is the code which yields the same output as above. Note that I have used the data.table package; for more information about it I suggest the data.table wiki on GitHub.
cl <- parallel::makeCluster( parallel::detectCores() )
#foreach uses doParallel for the parallelization
doParallel::registerDoParallel(cl)
#To iterate over the rows, we need to use iterators
# if foreach is given a matrix, it will be converted to a column iterator
rowIterator <- iterators::iter(IDX_DF, by = "row")
library(foreach)
result <-
foreach(
#Supply the iterator
row = rowIterator,
    #Specify whether the calculations need to be done in order. If not, we can get better performance by not requiring it
.inorder = FALSE,
#In most foreach loops you will have to export the data you need for the calculations
    # it worked without doing so for me; skipping the export is faster when the exported data is large
#.export = c("HYPR_IMG"),
#We need to say how the output should be merged. If nothing is given it will be output as a list
#data.table rbindlist is faster than rbind (returns a data.table)
.combine = function(...)data.table::rbindlist(list(...)) ,
#otherwise we could've used:
#.combine = rbind
    #if we don't use rbind or cbind (I used data.table::rbindlist for speed)
    # we have to tell foreach whether the combine function can take more than 1 argument
.multicombine = TRUE
  ) %dopar% #how to run the body: %dopar% = parallel, %do% = sequential, %:% = nested foreach loops
{
IDX_ID <- row[["IDXname"]]
eval(parse(text = paste0("IDXfun_tmp <- function(HYPR_IMG){", row[["IDXFunc"]], "}")))
IDX_VAL <- as.vector(IDXfun_tmp(HYPR_IMG))
data.frame(ID = IDX_ID, IDX_VAL)
}
#output is saved in result
result
result_reformatted <- data.table::dcast(result[, indx := 1:.N, by = ID],
                                        indx ~ ID,
                                        value.var = "IDX_VAL")
#if we don't want to use data.table we could use unstack instead
unstack(result, IDX_VAL ~ ID)
There are several packages in R to simplify running code in parallel, like foreach and future. Most of these have constructs which are like lapply or a for loop: they carry on until all the tasks have finished.
Is there a simple parallel version of Find? That is, I would like to run several tasks in parallel. I don't need all of them to finish, I just need to get the first one that finishes (maybe with a particular result). After that the other tasks can be killed, or left to finish on their own.
Conceptual code:
hunt_needle <- function (x, y) x %in% (y-1000):y
x <- sample.int(1000000, 1)
result <- parallel_find(seq(1000, 1000000, 1000), hunt_needle)
# should return the first value for which hunt_needle is true
You can use shared memory so that processes can communicate with one another.
For that, you can use package bigstatsr (disclaimer: I'm the author).
Choose a block size and do:
# devtools::install_github("privefl/bigstatsr")
library(bigstatsr)
# Data example
cond <- logical(1e6)
cond[sample(length(cond), size = 1)] <- TRUE
ind.block <- bigstatsr:::CutBySize(length(cond), block.size = 1000)
cl <- parallel::makeCluster(nb_cores())
doParallel::registerDoParallel(cl)
# This value (in an on-disk matrix) is shared by processes
found_it <- FBM(1, 1, type = "integer", init = 0L)
library(foreach)
res <- foreach(ic = sample(rows_along(ind.block)), .combine = 'c') %dopar% {
if (found_it[1]) return(NULL)
ind <- bigstatsr:::seq2(ind.block[ic, ])
find <- which(cond[ind])
if (length(find)) {
found_it[1] <- 1L
return(ind[find[1]])
} else {
return(NULL)
}
}
parallel::stopCluster(cl)
# Verification
all.equal(res, which(cond))
Basically, when a solution is found, the remaining computations are no longer needed, and the other processes know it because you put a 1 in found_it, which is shared between all processes.
As your question is not reproducible and I don't understand everything you need, you may have to adapt this solution a little bit.
Here is my R Script that works just fine:
perc.rank <- function(x) trunc(rank(x)) / length(x) * 100.0
library(dplyr)
setwd("~/R/xyz")
datFm <- read.csv("yellow_point_02.csv")
datFm <- filter(datFm, HRA_ClassHRA_Final != -9999)
quant_cols <- c("CL_GammaRay_Despiked_Spline_MLR", "CT_Density_Despiked_Spline_FinalMerged",
"HRA_PC_1HRA_Final", "HRA_PC_2HRA_Final","HRA_PC_3HRA_Final",
"SRES_IMGCAL_SHIFT2VL_Slab_SHIFT2CL_DT", "Ultrasonic_DT_Despiked_Spline_MLR")
# add an extra column to datFm to store the quantile value
for (column_name in quant_cols) {
datFm[paste(column_name, "quantile", sep = "_")] <- NA
}
# initialize an empty dataframe with the new column names appended
newDatFm <- datFm[0,]
# get the unique values for the hra classes
hraClassNumV <- sort(unique(datFm$HRA_ClassHRA_Final))
# loop through the vector and create currDatFm and append it to newDatFm
for (i in hraClassNumV) {
currDatFm <- filter(datFm, HRA_ClassHRA_Final == i)
for (column_name in quant_cols) {
currDatFm <- within(currDatFm,
{
CL_GammaRay_Despiked_Spline_MLR_quantile <- perc.rank(currDatFm$CL_GammaRay_Despiked_Spline_MLR)
CT_Density_Despiked_Spline_FinalMerged_quantile <- perc.rank(currDatFm$CT_Density_Despiked_Spline_FinalMerged)
HRA_PC_1HRA_Final_quantile <- perc.rank(currDatFm$HRA_PC_1HRA_Final)
HRA_PC_2HRA_Final_quantile <- perc.rank(currDatFm$HRA_PC_2HRA_Final)
HRA_PC_3HRA_Final_quantile <- perc.rank(currDatFm$HRA_PC_3HRA_Final)
SRES_IMGCAL_SHIFT2VL_Slab_SHIFT2CL_DT_quantile <- perc.rank(currDatFm$SRES_IMGCAL_SHIFT2VL_Slab_SHIFT2CL_DT)
Ultrasonic_DT_Despiked_Spline_MLR_quantile <- perc.rank(currDatFm$Ultrasonic_DT_Despiked_Spline_MLR)
}
)
}
newDatFm <- rbind(newDatFm, currDatFm)
}
newDatFm <- newDatFm[order(newDatFm$Core_Depth),]
# head(newDatFm, 10)
write.csv(newDatFm, file = "Ricardo_quantiles.csv")
I have a few questions though. Every R book or video that I have read or watched recommends using the 'apply' family of language constructs over the classic 'for' loop, stating that apply is much faster.
So the first question is: how would you write it using apply (or tapply or some other apply)?
Second, is it really true that apply is much faster than for? The csv file 'yellow_point_02.csv' has approx. 2500 rows. This script runs almost instantly on my MacBook Pro, which has 16 GB of memory.
Third, see the 'quant_cols' vector? I created it so that I could write a generic loop (for column_name in quant_cols) ... but I could not make it work. So I hard-coded the column names post-fixed with '_quantile' and called 'perc.rank' many times. Is there a way this could be made dynamic? I tried the 'paste' stuff that I have in my script, but that did not work.
On the positive side though, R seems awesome in its ability to cut through the 'Data Wrangling' tasks with very few statements.
Thanks for your time.
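For what it is worth, one way to make the quantile columns dynamic with the paste-built names is base R's ave(), which applies a function within groups; this is only a sketch, assuming datFm, quant_cols and perc.rank exactly as defined above:
# per-class percentile rank for every column in quant_cols, no hard-coded column names
for (column_name in quant_cols) {
  datFm[[paste0(column_name, "_quantile")]] <-
    ave(datFm[[column_name]], datFm$HRA_ClassHRA_Final, FUN = perc.rank)
}
newDatFm <- datFm[order(datFm$Core_Depth), ]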
I'm writing code to get data from UN Comtrade, a UN database. Because the database has a usage limit of 100 queries per hour, I need to put a timeout in there.
I want to write the code with tryCatch that will:
Automatically pause the program every time the error for the maximum query limit appears
Rerun the current level of i, j and k if a connection error occurs
My current code still works, but I want to learn how to use tryCatch too.
Also, is there a way to get rid of the for loops? Can the apply family of functions be used here?
Thanks guys
n=0
a<-c()
for (i in (1996:2014)) {
for (j in c("0301","0302","0303","0304","0305","0306","0307","0308")) {
for (k in c("704","116","360","418","458","104","608","702","764")) {
s2<-paste(i,j,k,sep="")
a<-c(a,s2)
print (s2)
n<-n+1
if(n<=100) {
s1 <- get.Comtrade(r=k, ps=i, rg="2", cc=j, fmt="csv",px="H0")
Sys.sleep (1)
s1<-do.call(rbind.data.frame,s1)
library(foreign)
write.dta(s1,file=paste("D:/unTrade/",s2,".dta"))
}
else {
print(n)
print(s2)
print("reset here")
n=0
Sys.sleep(3610)
}
}
}
}
I can't really help you with the tryCatch(); I don't have the experience myself.
Regarding the for loops, this is one solution (although I think in these cases the for loops are not that evil; vectorization really counts in all kinds of matrix operations, etc.).
dat <- expand.grid(i = 1996:1999, j = c("0301","0302","0303","0304","0305","0306","0307","0308"), k = c("704","116","360","418","458","104","608","702","764"))
library(dplyr)
dat %>% group_by(i, j, k) %>%
do({
cat('s1 <- get.Comtrade(r=', .$k, ', ps=', .$i, ', cc=', .$j, ', rg=\"2\", fmt=\"csv\",px=\"H0\")\n')
flush.console()
# return(s1)
})
From your own code, s1 (also) appears to be a data.frame, so in this case dplyr's do() nicely glues all these data frames together.
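If you want to actually run the queries rather than just print the calls, the do() block can return the fetched data frame directly. A sketch, keeping get.Comtrade and the flattening exactly as in the question and ignoring the rate-limit handling:
res <- dat %>% group_by(i, j, k) %>%
  do({
    s1 <- get.Comtrade(r = .$k, ps = .$i, rg = "2", cc = .$j, fmt = "csv", px = "H0")
    Sys.sleep(1)                    # be gentle with the API
    do.call(rbind.data.frame, s1)   # same flattening as in the original loop
  })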
HTH
I have a small R script of 14 functions and want to run it for 81 files. While I have read several posts on Stack Overflow that address similar issues, I am still having trouble getting this to work. I am using a for loop and rbind.
All functions within the { } of the loop work. I have tested them without the for loop and I get the vector of data that I need. But when I run the for loop I only get output for the last file in the folder. I am not sure what is going on.
Is the for loop working right (is it iterating through the files) and simply overwriting the previous runs? If the for loop is working, then I assume I have a problem with my rbind. Or is the for loop only running the last file in list.files()?
In the end, I want a matrix (or table) with the results of the 14 functions for each of the 81 files.
Here is the code:
res=(1:14)
for(i in list.files())
{
nd = read.csv(i, header= TRUE, row.names =1, check.names = FALSE)
mx = as.matrix(nd)
res[1]=basename(i)
res[2]=-99 #this is just a place holder
res[3]=gden(mx)
res[4]=centralization(mx,degree)
deg = degree(mx, gmode="graph", diag=FALSE, rescale=FALSE)
res[5]=mean(deg)
res[6]=sd(deg)
res[7]=max(deg)
res[8]=min(deg)
Ndeg = degree(mx, gmode="graph", diag=FALSE, rescale=TRUE)*1000
res[9]=mean(Ndeg)
res[10]=sd(Ndeg)
res[11]=max(Ndeg)
res[12]=min(Ndeg)
iso = isolates(mx, diag=FALSE)
res[13]=length(iso)
res[14]=nrow(mx)
}
results=rbind(res)
results
Combine your set of operations into a single function and sapply it over every element of list.files():
library(sna) # gden(), centralization(), degree() and isolates() are assumed to come from here
out <- sapply(list.files(), function(i){
nd = read.csv(i, header= TRUE, row.names =1, check.names = FALSE)
mx = as.matrix(nd)
res = numeric(14)
res[1]=basename(i)
res[2]=-99 #this is just a place holder
res[3]=gden(mx)
res[4]=centralization(mx,degree)
deg = degree(mx, gmode="graph", diag=FALSE, rescale=FALSE)
res[5]=mean(deg)
res[6]=sd(deg)
res[7]=max(deg)
res[8]=min(deg)
Ndeg = degree(mx, gmode="graph", diag=FALSE, rescale=TRUE)*1000
res[9]=mean(Ndeg)
res[10]=sd(Ndeg)
res[11]=max(Ndeg)
res[12]=min(Ndeg)
iso = isolates(mx, diag=FALSE)
res[13]=length(iso)
res[14]=nrow(mx)
return(res)
})
out
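Note that because res[1] holds a file name, the whole vector is coerced to character, so out is a character matrix with one column per file. A small follow-up sketch if one row per file is preferred:
# one row per file; convert the numeric columns back afterwards if needed
results <- as.data.frame(t(out), stringsAsFactors = FALSE)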
You have to have the rbind(res) inside the loop. Something like results = rbind(res) is not enough; you need something like results = rbind(results, res).
It depends on how you want to store the output, as an array of arrays, etc.
You'd also be better off using lapply, sapply, etc. instead of a loop.
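For example, a minimal sketch of the lapply-then-bind-once pattern, showing only two of the 14 metrics purely for illustration (gden() is assumed to come from the sna package, as suggested by the question's function names):
library(sna)
# collect one row per file with lapply, then bind everything once at the end
rows <- lapply(list.files(), function(i) {
  nd <- read.csv(i, header = TRUE, row.names = 1, check.names = FALSE)
  mx <- as.matrix(nd)
  data.frame(file = basename(i), density = gden(mx), n = nrow(mx))  # add the remaining metrics here
})
results <- do.call(rbind, rows)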
I also posted this question on my university listserv and a fellow student provided the following fixes. And now it works :)
res=(1:14)
summary=(1:14)
for(i in list.files())
{
....code as above.....
summary=rbind(summary, res)
}
summary
# then to put into a .csv
write.csv(summary, "nameoffile.csv")