How to implement an lapply-like function in R using the "sparklyr" package

I am pretty new to Spark. I have tried to look for something on the web but I haven't found anything satisfactory.
I have always run parallel computations using mclapply, and I like its structure: the first parameter is the index to iterate over, the second argument is the function to be parallelized, and the remaining optional parameters are passed on to the function.
Now I am trying to do the same kind of thing via Spark, i.e., I would like to distribute my computations among all the nodes of the Spark cluster. Briefly, this is what I have learned and how I think the code should be structured (I'm using the package sparklyr):
I create a connection to Spark using the command spark_connect;
I copy my data.frame into the Spark environment with copy_to and access it through its tibble;
I would like to implement a "Spark-friendly" version of mclapply, but there seems to be no similar function in the package (I have seen that the function spark.lapply exists in the SparkR package, but unfortunately it is no longer on CRAN).
Here below is a simple test script I have implemented that works using mclapply.
#### Standard code that works with mclapply #########
library(parallel)

dfTest = data.frame(X = rep(1, 10000), Y = rep(2, 10000))

.testFunc = function(X = 1, df, str) {
  rowSelected = df[X, ]
  y = as.numeric(rowSelected[1] + rowSelected[2])
  return(list(y = y, str = str))
}

lOutput = mclapply(X = 1 : nrow(dfTest), FUN = .testFunc, df = dfTest,
                   str = "useless string", mc.cores = 2)
######################################################
###### Similar code that should work with Spark ######
library(sparklyr)

sc = spark_connect(master = "local")

dfTest = data.frame(X = rep(1, 10000), Y = rep(2, 10000))

.testFunc = function(X = 1, df, str) {
  rowSelected = df[X, ]
  nSum = as.numeric(rowSelected[1] + rowSelected[2])
  return(list(nSum = nSum, str = str))
}

dfTest_tbl = copy_to(sc, dfTest, "test_tbl", overwrite = TRUE)

# Apply a function similar to mclapply to dfTest_tbl, so that it works
# with Spark
# ???
######################################################
If someone has already found a solution for this, that would be great. Other references/guides/links are also more than welcome. Thanks!

sparklyr
spark_apply is the existing function you're looking for:
spark_apply(sdf, function(data) {
  ...
})
Please refer to the Distributed R section of the sparklyr documentation for details.
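For the example in the question, a minimal sketch could look like this (spark_apply passes each partition to the function as a plain R data frame; the output column name nSum mirrors the question's code):
library(sparklyr)
sc <- spark_connect(master = "local")

dfTest_tbl <- copy_to(sc, data.frame(X = rep(1, 10000), Y = rep(2, 10000)),
                      "test_tbl", overwrite = TRUE)

# Each partition arrives as a data.frame; return a data.frame with the result
result_tbl <- spark_apply(dfTest_tbl,
                          function(df) data.frame(nSum = df$X + df$Y),
                          names = "nSum")
result_tbl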
SparkR
With SparkR use gapply / gapplyCollect:
gapply(df, groupingCols, function(data) {...}, schema)
or dapply / dapplyCollect:
dapply(df, function(data) {...}, schema)
These are SparkR's UDF interfaces. Refer to the dapply docs and gapply docs for details.
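As a rough sketch of the dapply route for the question's example (this assumes a SparkR session rather than a sparklyr connection; the schema describes the columns returned by the UDF):
library(SparkR)
sparkR.session()

sdf <- createDataFrame(data.frame(X = rep(1, 10000), Y = rep(2, 10000)))

# The schema must describe the data.frame returned by the UDF
schema <- structType(structField("X", "double"),
                     structField("Y", "double"),
                     structField("nSum", "double"))

out <- dapply(sdf, function(d) { d$nSum <- d$X + d$Y; d }, schema)
head(collect(out))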
Be warned that all of these solutions are inferior to native Spark code and should be avoided when high performance is required.

sparklyr::spark_apply can now pass external variables, such as models, to the workers via the context argument.
Here is my example of running an xgboost model with sparklyr:
bst <- xgboost::xgb.load("project/models/xgboost.model")

res3 <- spark_apply(x = ft_union_price %>% sdf_repartition(partitions = 1500, partition_by = "uid"),
                    f = inference_fn,
                    packages = FALSE,
                    memory = FALSE,
                    names = c("uid",
                              "action_1",
                              "pred"),
                    context = {model <- bst})
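The inference_fn above is not shown; a minimal sketch of what such a function might look like, assuming the partition carries a uid column plus the model's feature column(s) (the single feature action_1 is taken from the names argument above, everything else is illustrative):
inference_fn <- function(df, context) {
  # context = {model <- bst} evaluates to bst, so the context is the model itself
  model <- context
  feats <- as.matrix(df[, "action_1", drop = FALSE])  # hypothetical feature set
  data.frame(uid      = df$uid,
             action_1 = df$action_1,
             pred     = predict(model, feats))
}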

Related

Automatic creation and use of a custom-made function in R in a foreach loop - storing the result in one DF - 3D array

A few days ago I asked a question about calling a custom-made function within a loop, which was nicely resolved by a combination of
eval(parse(text = Function text))
Here is the link: Automatic creation and use of custom made function in R.
This allowed me to work with a for loop and automatically call the function I need from a data frame storing the body of the function to create.
Now I would like to take the question to the next level. My problem is computation time. I need to evaluate something like 52 indices from a hyperspectral image. This means that in R my hyperspectral image is loaded as a 3D array of 512x512x204 bands.
What I would like to do is run the evaluation of the indices in parallel to reduce the computation time.
Here is a dummy example of what I would like to emulate, but without parallel computing.
# create a fake array representing my Hyperspectral image
library(fields)  # for image.plot

HYPR_IMG = array(NA, dim = c(5, 3, 4))
HYPR_IMG[,,1] = 1
HYPR_IMG[,,2] = 2
HYPR_IMG[,,3] = 3
HYPR_IMG[,,4] = 4

image.plot(HYPR_IMG[,,1], zlim = c(0,20))
image.plot(HYPR_IMG[,,2], zlim = c(0,20))
image.plot(HYPR_IMG[,,3], zlim = c(0,20))
image.plot(HYPR_IMG[,,4], zlim = c(0,20))

# create a fake DF simulating my indices stored in the dataframe
IDXname = c("IDX1", "IDX2", "IDX3", "IDX4")
IDXFunc = c("HYPR_IMG[,,1] + 3*HYPR_IMG[,,2]",
            "HYPR_IMG[,,3] + HYPR_IMG[,,2]",
            "HYPR_IMG[,,4] + HYPR_IMG[,,2] - HYPR_IMG[,,3]",
            "HYPR_IMG[,,1] + HYPR_IMG[,,4] + 4*HYPR_IMG[,,2] + HYPR_IMG[,,3]")
IDX_DF = as.data.frame(cbind(IDXname, IDXFunc))
# that was what I did before
Store_DF = data.frame(NA)
for (i in 1:length(IDX_DF$IDXname)) {
  IDX_ID = IDX_DF$IDXname[i]
  IDX_Fun_tmp = IDX_DF$IDXFunc[which(IDX_DF$IDXname == IDX_ID)] # use extra care to select the right function
  IDXFunc_call = paste("IDXfun_tmp=function(HYPR_IMG){", IDX_Fun_tmp, "}", sep = "")
  eval(parse(text = IDXFunc_call))
  IDX_VAL = IDXfun_tmp(HYPR_IMG)
  image.plot(IDX_VAL, zlim = c(0,20)); title(main = IDX_ID)
  temp_DF = as.vector(IDX_VAL)
  Store_DF = cbind(Store_DF, temp_DF)
  names(Store_DF)[i+1] <- as.vector(IDX_ID)
}
My final goal is to have the very same Store_DF, storing all the index values. Here I have a for loop, but using a foreach loop things should speed up. If it matters, I am working with Windows 8 or later as OS.
Is it really possible?
Will I be able, in the end, to reduce the overall computation time while getting the same Store_DF data frame, or something similar like a matrix with column names?
Thanks a lot!!!
For this specific example, using either the built-in parallelization of a package like data.table or a parallel apply might be more beneficial.
Below is a minimal example of how to achieve the result using parApply from the parallel package. Note that the output is a matrix, which actually yields slightly better performance in base R (not necessarily the case in the tidyverse or data.table). In case the data.frame structure is vital, you will have to convert it with data.frame (a one-liner for this is shown after the code).
cl <- parallel::makeCluster( parallel::detectCores() )

result <- parallel::parApply(cl = cl, X = IDX_DF, MARGIN = 1, FUN = function(x, IMAGES){
  IDX_ID <- x[["IDXname"]]
  eval(parse(text = paste0("IDXfun_tmp <- function(HYPR_IMG){", x[["IDXFunc"]], "}")))
  IDX_VAL <- as.vector(IDXfun_tmp(IMAGES))
  names(IDX_VAL) <- IDX_ID
  IDX_VAL
}, IMAGES = HYPR_IMG)

colnames(result) = IDXname

parallel::stopCluster(cl)
Please note the stopCluster(cl) call, which is important to shut down any stray R sessions.
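If the original data-frame layout from the question is needed, the matrix can simply be converted afterwards (a minimal sketch):
Store_DF <- as.data.frame(result)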
Benchmark results (4 tiny cores):
Unit: milliseconds
     expr      min       lq      mean   median       uq      max neval
     Loop 8.420432 9.027583 10.426565 9.272444 9.943783 26.58623   100
 Parallel 1.382324 1.491634  2.038024 1.554690 1.907728 18.23942   100
For replication of the benchmark, the code is provided below:
cl <- parallel::makeCluster( parallel::detectCores() )

microbenchmark::microbenchmark(
  Loop = {
    Store_DF = data.frame(NA)
    for (i in 1:length(IDX_DF$IDXname)) {
      IDX_ID = IDX_DF$IDXname[i]
      IDX_Fun_tmp = IDX_DF$IDXFunc[which(IDX_DF$IDXname == IDX_ID)] # use extra care to select the right function
      eval(parse(text = paste0("IDXfun_tmp = function(HYPR_IMG){", IDX_Fun_tmp, "}")))
      IDX_VAL = IDXfun_tmp(HYPR_IMG)
      # Plotting in parallel is not a good idea. It will most often not work,
      # and might make the R session crash or slow down significantly
      # (at best the latter, at worst the former).
      # image.plot(IDX_VAL, zlim = c(0,20)); title(main = IDX_ID)
      temp_DF = as.vector(IDX_VAL)
      Store_DF = cbind(Store_DF, temp_DF)
      names(Store_DF)[i+1] <- as.vector(IDX_ID)
    }
    rm(Store_DF)
  },
  Parallel = {
    result <- parallel::parApply(cl = cl, X = IDX_DF, MARGIN = 1, FUN = function(x, IMAGES){
      IDX_ID <- x[["IDXname"]]
      eval(parse(text = paste0("IDXfun_tmp <- function(HYPR_IMG){", x[["IDXFunc"]], "}")))
      IDX_VAL <- as.vector(IDXfun_tmp(IMAGES))
      names(IDX_VAL) <- IDX_ID
      IDX_VAL
    }, IMAGES = HYPR_IMG)
    colnames(result) = IDXname
    rm(result)
  }
)

parallel::stopCluster(cl)
Edit: Using the foreach package
After a few comments about performance issues (possibly due to memory), I decided to illustrate how one could obtain the same result using the foreach package. A few notes:
The foreach package uses iterators. By default it can be used like a for loop, where it iterates over each column of a data.frame.
Like other parallel implementations in R, on Windows you will often have to export the data used in the calculations. This can sometimes be avoided with some fiddling, and foreach will sometimes let you get away without exporting data; when exactly this is the case is unclear from the documentation.
The output of foreach will be combined either into a list or as defined by the .combine argument, which can be rbind, cbind or any other function.
There are a lot of comments, making the code seem a lot longer than it actually is. Removing comments and blank lines, it is 9 lines longer.
Below is the code, which will yield the same output as above. Note that I have used the data.table package. For more information about this package I suggest its wiki on GitHub.
cl <- parallel::makeCluster( parallel::detectCores() )
# foreach uses doParallel for the parallelization
doParallel::registerDoParallel(cl)

# To iterate over the rows, we need to use iterators
# (if foreach is given a matrix it will be converted to a column iterator)
rowIterator <- iterators::iter(IDX_DF, by = "row")

library(foreach)
result <-
  foreach(
    # Supply the iterator
    row = rowIterator,
    # Specify whether the calculations need to be in order. If not, we can get
    # better performance by not requiring it
    .inorder = FALSE,
    # In most foreach loops you will have to export the data you need for the
    # calculations; it worked without doing so for me, and skipping the export
    # is faster when the exported objects are large
    #.export = c("HYPR_IMG"),
    # We need to say how the output should be merged. If nothing is given it
    # will be returned as a list.
    # data.table::rbindlist is faster than rbind (and returns a data.table)
    .combine = function(...) data.table::rbindlist(list(...)),
    # otherwise we could've used:
    #.combine = rbind
    # If we don't use rbind or cbind (I used data.table::rbindlist for speed)
    # we have to say whether the combiner can take more than 1 argument
    .multicombine = TRUE
  ) %dopar% # %do% = sequential loop, %dopar% = parallel loop, %:% = nested loops
  {
    IDX_ID <- row[["IDXname"]]
    eval(parse(text = paste0("IDXfun_tmp <- function(HYPR_IMG){", row[["IDXFunc"]], "}")))
    IDX_VAL <- as.vector(IDXfun_tmp(HYPR_IMG))
    data.frame(ID = IDX_ID, IDX_VAL)
  }
# output is saved in result
result

library(data.table)  # for dcast (rbindlist above was called with ::)
result_reformatted <- dcast(result[, indx := 1:.N, by = ID],
                            indx ~ ID,
                            value.var = "IDX_VAL")

# if we don't want to use data.table we could use unstack instead
unstack(result, IDX_VAL ~ ID)

How to use packages in spark_apply in R?

I need to use R packages with the spark_apply function from the sparklyr package. The RDocumentation page is not quite clear. I tried to make spark_apply work by following this link, and it worked for the first part with the following modifications.
The working part:
library(sparklyr)

spark_apply_bundle(packages = TRUE, base_path = getwd())
bundle <- paste(getwd(), list.files()[grep("\\.tar$", list.files())][1], sep = "/")
hdfs_path <- "hdfs://<my-ip>/user/hadoop/R/packages/packages.tar"
system(paste("hdfs dfs -moveFromLocal", bundle, "hdfs://<my-ip>/user/hadoop/R/packages"))

config <- spark_config()
config$sparklyr.shell.files <- "hdfs://<my-ip>/user/hadoop/R/packages/packages.tar"

sc <- spark_connect(master = "yarn-client",
                    version = "2.4.0",
                    config = config)

mtcars_sparklyr <- copy_to(sc, mtcars)
However, when I try to use the svm function within spark_apply, it does not work with the packages argument:
result <- mtcars_sparklyr %>%
  spark_apply(
    function(d) {
      fit <- svm(d$mpg, d$wt)
      sum(fit$residuals ^ 2)
    },
    group_by = "cyl",
    packages = bundle
  )
The following, on the other hand, works; this is when I pass the svm function via the context argument. However, I need the packages argument to work because I have several packages whose functions I use within spark_apply.
result <- mtcars_sparklyr %>%
  spark_apply(
    function(d) {
      fit <- svm(d$mpg, d$wt)
      sum(fit$residuals ^ 2)
    },
    group_by = "cyl",
    context = {svm <- e1071::svm}
  )

Parallelize MSGARCH's function (FitML) on R

I'm using MSGARCH (an R package) on Windows 10.
I need to fit a Markov-switching model many times (12,500 fits, inside while and for loops) with this code:
X = CreateSpec(variance.spec = list(model = c()),
               distribution.spec = list(distribution = c()))
Y = FitML(data, spec = X)
How can I parallelize the last function, FitML()? I'd like to run many FitML() calls for various X values.
Assuming you already have a list of X's, let us call it Xs, you can call FitML() on each of the elements as:
Ys <- lapply(Xs, FUN = FitML)
The above applies the function to the elements sequentially. To do the same in parallel, you can use the future.apply package, which is part of the future ecosystem (I'm the author). The following parallelizes on your local machine and works on all operating systems:
library(future.apply)
plan(multiprocess)
Ys <- future_lapply(Xs, FUN = FitML)
If there is a random-number-generation (RNG) component to FitML(), then you need to use:
Ys <- future_lapply(Xs, FUN = FitML, future.seed = TRUE)
to make sure you use proper random numbers.
If you don't specify plan(), or specify plan(sequential), it will run sequentially.
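Putting this together for the MSGARCH case, a minimal sketch could look like the following (the models and distributions filled into CreateSpec() are placeholders, since the question leaves them empty, and data is assumed to be the return series from the question):
library(MSGARCH)
library(future.apply)
plan(multiprocess)

# List of specifications to fit; the concrete choices here are illustrative
Xs <- list(
  CreateSpec(variance.spec     = list(model = c("sGARCH", "sGARCH")),
             distribution.spec = list(distribution = c("norm", "norm"))),
  CreateSpec(variance.spec     = list(model = c("gjrGARCH", "gjrGARCH")),
             distribution.spec = list(distribution = c("std", "std")))
)

# Fit each specification in parallel; future.seed = TRUE gives sound parallel RNG
Ys <- future_lapply(Xs, FUN = function(X) FitML(data = data, spec = X),
                    future.seed = TRUE)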

sparklyr spark_apply user defined function error

I'm trying to pass a custom R function inside spark_apply but keep running into issues and can't figure out what some of the errors mean.
library(sparklyr)
sc <- spark_connect(master = "local")

perf_df <- data.frame(predicted = c(5, 7, 20),
                      actual = c(4, 6, 40))
perf_tbl <- sdf_copy_to(sc = sc,
                        x = perf_df,
                        name = "perf_table")

# custom function
ndcg <- function(predicted_rank, actual_rank) {
  # x is a vector of relevance scores
  DCG <- function(y) y[1] + sum(y[-1] / log(2:length(y), base = 2))
  DCG(predicted_rank) / DCG(actual_rank)
}

# works in R using an R data frame
ndcg(perf_df$predicted, perf_df$actual)

# does not work
perf_tbl %>%
  spark_apply(function(e) ndcg(e$predicted, e$actual),
              names = "ndcg")
OK, I'm seeing two possible problems.
(1) spark_apply prefers functions that have one parameter, a data frame;
(2) you may need to make a package, depending on how complex the function is.
Let's say you modify ndcg to receive a data frame as the parameter.
ndcg <- function(dataset) {
  predicted_rank <- dataset$predicted
  actual_rank <- dataset$actual
  # x is a vector of relevance scores
  DCG <- function(y) y[1] + sum(y[-1] / log(2:length(y), base = 2))
  DCG(predicted_rank) / DCG(actual_rank)
}
And you put that in a package called ndcg_package
Now your code will be similar to:
spark_apply(perf_tbl, ndcg, packages = TRUE, names = "ndcg")
Doing this from memory, so there may be a few typos, but it'll get you close.
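As an alternative that avoids building a package, the helper can also be shipped to the workers through spark_apply()'s context argument (a minimal sketch; when context is supplied, the applied function receives it as its second parameter):
result <- perf_tbl %>%
  spark_apply(
    function(e, ctx) data.frame(ndcg = ctx$ndcg(e)),
    context = list(ndcg = ndcg),
    names = "ndcg"
  )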

How does foreach package scope R Environments when using as.formula, SE dplyr, and lapply?

I have a function where I dynamically build multiple formulas as strings and cast them to formulas with as.formula. I then call that function in a parallel process using doSNOW and foreach, and use those formulas through dplyr::mutate_.
When I use lapply(formula_list, as.formula) I get the error could not find function *custom_function* when run in parallel, though it works fine when run locally. However, when I use lapply(formula_list, function(x) as.formula(x)) it works both in parallel and locally.
Why? What's the correct way to understand the environments here, and the "right" way to code it?
I do get a warning that says: In e$fun(obj, substitute(ex), parent.frame(), e$data) : already exporting variable(s): *custom_func*
A minimal reproducible example is below.
# Packages
library(dplyr)
library(doParallel)
library(doSNOW)
library(foreach)
# A simple custom function
custom_sum <- function(x){
  sum(x)
}
# Functions that create formulas and use them with SE dplyr:
dplyr_mut_lapply_reg <- function(df){
  my_dots <- setNames(
    object = lapply(list("~custom_sum(Sepal.Length)"), as.formula),
    nm = c("Sums")
  )
  return(
    df %>%
      group_by(Species) %>%
      mutate_(.dots = my_dots)
  )
}

dplyr_mut_lapply_lambda <- function(df){
  my_dots <- setNames(
    object = lapply(list("~custom_sum(Sepal.Length)"), function(x) as.formula(x)),
    nm = c("Sums")
  )
  return(
    df %>%
      group_by(Species) %>%
      mutate_(.dots = my_dots)
  )
}
#1. CALLING BOTH LOCALLY
dplyr_mut_lapply_lambda(iris) # works
dplyr_mut_lapply_reg(iris)    # works

#2. CALLING IN PARALLEL
# Faux parallel setup
cl <- makeCluster(1, outfile = "")
registerDoSNOW(cl)

# Call lambda version: WORKS
foreach(j = 1,
        .packages = c("dplyr", "tidyr"),
        .export = lsf.str()
) %dopar% {
  dplyr_mut_lapply_lambda(iris)
}

# Call regular version: FAILS
foreach(j = 1,
        .packages = c("dplyr", "tidyr"),
        .export = lsf.str()
) %dopar% {
  dplyr_mut_lapply_reg(iris)
}

# Close cluster
stopCluster(cl)
EDIT: In my original post title I wrote that I was using NSE, but I really meant standard evaluation. Whoops. I have changed the title accordingly.
I don't have an exact answer to why here, but the future package (I'm the author) handles these types of "tricky" globals; they are tricky because they are not part of a package and they are nested, i.e. one global calls another global. For example, if you use:
library("doFuture")
cl <- parallel::makeCluster(1, outfile = "")
plan(cluster, workers = cl)
registerDoFuture()
that problematic "Call Regular Version FAILS" case should now work.
Now, the above uses parallel::makeCluster(), which defaults to type = "PSOCK", whereas if you load doSNOW you get snow::makeCluster(), which defaults to type = "MPI". Unfortunately, a full MPI backend is not yet implemented for the future package. Thus, if you're looking for an MPI solution, this won't help you (yet).
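For completeness, a minimal sketch of how the failing call from the question would then be run (doFuture identifies globals automatically, so the .export argument can usually be dropped; that detail is my reading of the doFuture documentation rather than something stated above):
library(doFuture)
registerDoFuture()
cl <- parallel::makeCluster(1, outfile = "")
plan(cluster, workers = cl)

# The previously failing call; custom_sum and dplyr_mut_lapply_reg are
# picked up as globals automatically
foreach(j = 1, .packages = c("dplyr", "tidyr")) %dopar% {
  dplyr_mut_lapply_reg(iris)
}

parallel::stopCluster(cl)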
