I have R code that does some distributed data preprocessing in sparklyr, then collects the data into a local R data frame and finally saves the result to a CSV file. Everything works as expected, and now I plan to re-use the Spark context across the processing of multiple input files.
My code looks similar to this reproducible example:
library(dplyr)
library(sparklyr)
sc <- spark_connect(master = "local")
# Generate random input
matrix(rbinom(1000, 1, .5), ncol=1) %>% write.csv('/tmp/input/df0.csv')
matrix(rbinom(1000, 1, .5), ncol=1) %>% write.csv('/tmp/input/df1.csv')
# Multi-job input
input = list(
  list(name = "df0", path = "/tmp/input/df0.csv"),
  list(name = "df1", path = "/tmp/input/df1.csv")
)
global_parallelism = 2
results_dir = "/tmp/results2"
# Function executed on each file
f <- function(job) {
  spark_df <- spark_read_csv(sc, "df_tbl", job$path)
  local_df <- spark_df %>%
    group_by(V1) %>%
    summarise(n = n()) %>%
    sdf_collect()
  output_path <- paste(results_dir, "/", job$name, ".csv", sep = "")
  local_df %>% write.csv(output_path)
  return(output_path)
}
If I execute the function on the job inputs sequentially with lapply, everything works as expected:
> lapply(input, f)
[[1]]
[1] "/tmp/results2/df0.csv"
[[2]]
[1] "/tmp/results2/df1.csv"
However, when I try to run it in parallel to maximize usage of the Spark context (so that while local R is still working on the collected df0, Spark can already be processing df1), it fails:
> library(parallel)
> library(MASS)
> mclapply(input, f, mc.cores = global_parallelism)
*** caught segfault ***
address 0x560b2c134003, cause 'memory not mapped'
[[1]]
[1] "Error in as.vector(x, \"list\") : \n cannot coerce type 'environment' to vector of type 'list'\n"
attr(,"class")
[1] "try-error"
attr(,"condition")
<simpleError in as.vector(x, "list"): cannot coerce type 'environment' to vector of type 'list'>
[[2]]
NULL
Warning messages:
1: In mclapply(input, f, mc.cores = global_parallelism) :
scheduled core 2 did not deliver a result, all values of the job will be affected
2: In mclapply(input, f, mc.cores = global_parallelism) :
scheduled core 1 encountered error in user code, all values of the job will be affected
When I do something similar in Python with a ThreadPoolExecutor, the Spark context is shared across threads, and the same holds for Scala and Java.
Is it possible to reuse the sparklyr connection in parallel execution in R?
Yeah, unfortunately, the sc object, which is of class spark_connection, cannot be exported to another R process (even with forked processing). If you use the future.apply package, part of the future ecosystem, you can see this for yourself:
library(future.apply)
plan(multicore)
## Look for non-exportable objects and give an error if found
options(future.globals.onReference = "error")
y <- future_lapply(input, f)
That will throw:
Error: Detected a non-exportable reference (‘externalptr’) in one of the
globals (‘sc’ of class ‘spark_connection’) used in the future expression
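One workaround (a sketch of my own, not part of the original answer): instead of exporting sc, let each worker open and close its own connection. You give up sharing a single Spark context (each worker starts its own local Spark instance, which is heavy), but the non-exportable reference disappears. The helper f_own_connection below is hypothetical:
library(future.apply)
plan(multisession, workers = global_parallelism)
f_own_connection <- function(job) {
  library(sparklyr)
  library(dplyr)
  # each worker gets its own connection; disconnect when the job is done
  sc_local <- spark_connect(master = "local")
  on.exit(spark_disconnect(sc_local), add = TRUE)
  spark_df <- spark_read_csv(sc_local, paste0(job$name, "_tbl"), job$path)
  local_df <- spark_df %>%
    group_by(V1) %>%
    summarise(n = n()) %>%
    sdf_collect()
  output_path <- paste("/tmp/results2", "/", job$name, ".csv", sep = "")
  write.csv(local_df, output_path)
  output_path
}
y <- future_lapply(input, f_own_connection)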
Related
I have a dataset that I'm trying to parse in R. The data is from HMDB and the dataset is called Serum Metabolites (an XML file). The XML file contains about 25K metabolite nodes, each of which I want to parse into sub-nodes.
I have code that parses the XML file into a list object in R.
Since the XML file is quite big, and since for each metabolite there are about 12 sub-nodes I want, it takes a long time to parse the file: about 3 hours per 1,000 metabolites.
I'm trying to use the parallel package but receive an error.
The packages:
library("XML")
library("xml2")
library( "magrittr" ) #for pipe operator %>%
library("pbapply") # to track on progress
library("parallel")
The function:
# The function receives an XML file (its location) and returns a list of nodes
Short_Parser_HMDB <- function(xml.file_location){
  start.time <- Sys.time()
  # Read as xml file
  doc <- read_xml(xml.file_location)
  # get metabolite nodes (only a subset used in this sample)
  met.nodes <- xml_find_all(doc, ".//d1:metabolite")[1:1000] # [(i*1000+1):(1000*i+1000)] # [1:3]
  # list of data.frame
  xpath_child.v <- c("./d1:accession",
                     "./d1:name",
                     "./d1:description",
                     "./d1:synonyms/d1:synonym",
                     "./d1:chemical_formula",
                     "./d1:smiles",
                     "./d1:inchikey",
                     "./d1:biological_properties/d1:pathways/d1:pathway/d1:name",
                     "./d1:diseases/d1:disease/d1:name",
                     "./d1:diseases/d1:disease/d1:references",
                     "./d1:kegg_id",
                     "./d1:meta_cyc_id"
  )
  child.names.v <- c("accession",
                     "name",
                     "description",
                     "synonyms",
                     "chemical_formula",
                     "smiles",
                     "inchikey",
                     "pathways_names",
                     "diseases_name",
                     "references",
                     "kegg_id",
                     "meta_cyc_id"
  )
  # first, loop over the met.nodes
  # (pblapply to track progress, or plain lapply, but both slow the function down dramatically; parLapply for parallel)
  L.sec_acc <- parLapply(cl, met.nodes, function(x) {
    # second, loop over the desired XPath child-nodes
    temp <- parLapply(cl, xpath_child.v, function(y) {
      xml_find_all(x, y) %>% xml_text(trim = T) %>% data.frame(value = .)
    })
    # set their names
    names(temp) <- child.names.v
    return(temp)
  })
  end.time <- Sys.time()
  total.time <- end.time - start.time
  print(total.time)
  return(L.sec_acc)
}
Now create the environment:
# select the location where the XML file is
location= "D:/path/to/file//HMDB/DataSets/serum_metabolites/serum_metabolites.xml"
cl <-makeCluster(detectCores(), type="PSOCK")
clusterExport(cl, c("Short_Parser_HMDB", "cl"))
clusterEvalQ(cl,{library("parallel")
library("magrittr")
library("XML")
library("xml2")
})
And execute:
Short_outp<-Short_Parser_HMDB(location)
stopCluster(cl)
The error received:
> Short_outp<-Short_Parser_HMDB(location)
Error in checkForRemoteErrors(val) :
one node produced an error: invalid connection
Based on those links, I tried to implement the parallel processing:
Parallel Processing in R
How to call global function from the parLapply function?
Error in R parallel:Error in checkForRemoteErrors(val) : 2 nodes produced errors; first error: cannot open the connection
but I couldn't find "invalid connection" mentioned as an error in any of them.
I'm using Windows 10 and the latest R version, 4.0.2 (not sure if that's enough information).
Any hint or idea will be appreciated.
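For what it's worth, one likely cause of "invalid connection" here is that cl is exported to the workers and then used inside the parallelised function: the socket connections inside a cluster object are only valid in the master session. A rough restructuring sketch (my own, untested on the real HMDB file): serialize each metabolite node to text in the master session, since xml2 nodes are external pointers that cannot be shipped to PSOCK workers, run only the outer loop in parallel, and keep the inner loop as a plain lapply so cl is never touched on a worker.
library(xml2)
library(magrittr)
library(parallel)

doc <- read_xml(location)
met.nodes <- xml_find_all(doc, ".//d1:metabolite")[1:1000]
met.text <- vapply(met.nodes, as.character, character(1))  # plain strings are serializable

# xpath_child.v and child.names.v as defined in the function above, but at top level
cl <- makeCluster(detectCores(), type = "PSOCK")
clusterEvalQ(cl, { library(xml2); library(magrittr) })
clusterExport(cl, c("xpath_child.v", "child.names.v"))

L.sec_acc <- parLapply(cl, met.text, function(node_text) {
  # re-parse on the worker; this assumes the serialized node keeps the HMDB default
  # namespace, so the d1: prefix in the XPath expressions still applies
  x <- xml_root(read_xml(node_text))
  temp <- lapply(xpath_child.v, function(y) {   # plain lapply: cl is not needed here
    xml_find_all(x, y) %>% xml_text(trim = TRUE) %>% data.frame(value = .)
  })
  names(temp) <- child.names.v
  temp
})
stopCluster(cl)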
I have written a function in R which extracts data from a database and builds a new table.
My new table is labeled with the date of the extract (build_date_0).
When I'm debugging my function I get the following warning when I look at my date string:
Browse[2]> build_date_0
[1] "2019-05-01"
Warning message:
In get(object, envir = currentEnv, inherits = TRUE) :
restarting interrupted promise evaluation
Questions:
What does this warning mean / what is happening (step-by-step/basics)?
Should I care?
In general, how can I find out more about this warning?
This is my code:
build_account_db = function(conn = connection_object
                            ,various_inputs = 24){
  browser()
  # create connection objects
  tabs_1 = dplyr::tbl(conn, in_schema("DB_1", "VIEW_W")) # some table
  # create date string
  build_date_0 = lubridate::today() %>% as.character()
  build_date = str_replace_all(build_date_0, "-+", "_")
  db_name_1 = paste0('DATABASE.tab_1_', build_date)
  db_name_2 = paste0('DATABASE.tab_2_', build_date)
  # build queries
  query_text_1 = tabs_1 %>% select(COL_1) # some query
  query_text_2 = tabs_1 %>% select(COL_2)
  # build new tables
  create_db  = DBI::dbSendQuery(conn, paste('CREATE TABLE', db_name_1, 'AS (', query_text_1, ') WITH DATA PRIMARY INDEX (ID_1)'))
  create_db2 = DBI::dbSendQuery(conn, paste('CREATE TABLE', db_name_2, 'AS (', query_text_2, ') WITH DATA PRIMARY INDEX (ID_1)'))
}
When I check a variable, I may or may not get this warning (it varies, even if I restart R and run my code again with a cleared environment):
Browse[2]> build_date
[1] "2019-02-28 11:00:00 AEDT"
Warning message:
In get(object, envir = currentEnv, inherits = TRUE) :
restarting interrupted promise evaluation
What I've tried: I read this question, but it's more about suppressing the error. Also, google.
I found this link on promises and evaluation in R helpful for a related problem: https://mailund.dk/posts/promises-and-lazy-evaluation/. I wonder whether adding a call to just build_date_0 right after build_date_0 = lubridate::today() %>% as.character() would force the promise and make the warning go away? Good luck!
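In code, that suggestion would look something like this (just a sketch):
build_date_0 = lubridate::today() %>% as.character()
build_date_0   # referencing the value immediately should force any pending promise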
I am trying to use the below code to make API calls in a parallel process to speed up the API calls. (I know this isn't the best way to speed up API calls but it works)
It only fails when I try to run it in parallel; otherwise it works. From the ldply call I am getting the error below:
Error in do.ply(i) :
task 1 failed - "object of type 'closure' is not subsettable"
In addition:
Warning messages:
1: : ... may be used in an incorrect context: ‘.fun(piece, ...)’
2: : ... may be used in an incorrect context: ‘.fun(piece, ...)’
Any help would be appreciated!
One <- 26
cl<-makeCluster(4)
registerDoSNOW(cl)
func.time <- Sys.time()
## API CALL ONE FOR "kline"
url <- "https://api.binance.com"
path <- paste("/api/v1/klines?symbol=",pairs[1],"&interval=1m&limit=1", sep = "")
raw.results <- GET(url = url, path = path)
text_content <- content(raw.results, as = "text", encoding = "UTF-8")
kline <- data.frame(text_content %>% fromJSON())
kline$symbol <- pairs[1]
## API FUNCTION TO BE APPLIED FOR REST
loopfunction <- function(i){
  url <- "https://api.binance.com"
  path <- paste("/api/v1/klines?symbol=", pairs[i], "&interval=1m&limit=1", sep = "")
  raw.results <- GET(url = url, path = path)
  text_content <- content(raw.results, as = "text", encoding = "UTF-8")
  kline_temp <- data.frame(text_content %>% fromJSON())
  kline_temp$symbol <- pairs[i]
  kline <- rbind(kline, kline_temp)
  return(kline)
}
## DPLY PARALLEL FUNCTION
kline2 <- data.frame(ldply(2:(One - 1), .fun = loopfunction, .parallel = T, .paropts = c("httr", "jsonlite", "dplyr"))) ## "One" is a variable created earlier
stopCluster(cl)
func.end.time <- Sys.time()
func.tot.time <- func.end.time - func.time
Your question isn't fully reproducible, so the following is an educated guess.
Your loopfunction() references an object called pairs. It seems from your script that a variable called pairs is defined somewhere in your local environment. However, when loopfunction() is passed to ldply(), it no longer has access to that variable (ordinarily it would, but parallelization requires fresh R environments to be created). Having failed to find an object called pairs in the environment, R continues searching and finds a match in graphics::pairs(). This is a plotting function, not a subsettable object like a vector or data frame. Hence the error message, "object of type 'closure' is not subsettable".
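For example, in a fresh session with no data object named pairs, you can reproduce the message directly:
> pairs[1]
Error in pairs[1] : object of type 'closure' is not subsettable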
I'm not especially familiar with how ldply implements parallel processing, but you could probably modify your function definition like this:
loopfunction <- function(i, pairs) {
...[body of function]...
}
And pass pairs as an extra parameter in your ldply call:
kline2 <- data.frame(ldply(2:(One - 1), .fun = loopfunction, pairs = pairs, .parallel = T, .paropts = list(.packages = c("httr", "jsonlite", "dplyr"))))
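(.paropts is a list of additional options passed on to foreach when parallel computation is enabled, so the packages to load on the workers go in its .packages element rather than being supplied as a bare character vector, as in the original call.)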
When checking an R package, I got the warning
Warning: object 'xxx' is created by more than one data call
What causes this, and how can I fix it?
This warning occurs when multiple RData files in the data directory of the package store a variable with the same name.
To reproduce, we create a package and save the cars dataset twice, to different files:
library(devtools)
create("test")
dir.create("test/data")
save(cars, file = "test/data/cars1.RData")
save(cars, file = "test/data/cars2.RData")
check("test")
The output from check includes these lines:
Found the following significant warnings:
Warning: object 'cars' is created by more than one data call
If you receive this warning, you can find repeated variable names using:
rdata_files <- dir("test/data", full.names = TRUE, pattern = "\\.RData$")
var_names <- lapply(
  rdata_files,
  function(rdata_file)
  {
    e <- new.env()
    load(rdata_file, envir = e)
    ls(e)
  }
)
Reduce(intersect, var_names)
## [1] "cars"
I want to find documents whose similarity with other documents is larger than a given value (0.1), by cutting the documents into blocks.
library(tm)
data("crude")
sample.dtm <- DocumentTermMatrix(
  crude, control = list(
    weighting = function(x) weightTfIdf(x, normalize = FALSE),
    stopwords = TRUE
  )
)
step = 5
n = nrow(sample.dtm)
block = n %/% step
start = (c(1:block)-1)*step+1
end = start+step-1
j = unlist(lapply(1:(block-1),function(x) rep(((x+1):block),times=1)))
i = unlist(lapply(1:block,function(x) rep(x,times=(block-x))))
ij <- cbind(i,j)
library(skmeans)
getdocs <- function(k){
  ci <- c(start[k[[1]]]:end[k[[1]]])
  cj <- c(start[k[[2]]]:end[k[[2]]])
  combi <- sample.dtm[ci]
  combj <- sample.dtm[cj]
  rownames(combi) <- ci
  rownames(combj) <- cj
  comb <- c(combi, combj)
  sim <- 1 - skmeans_xdist(comb)
  cat("Block", k[[1]], "with Block", k[[2]], "\n")
  flush.console()
  tri.sim <- upper.tri(sim, diag = F)
  results <- tri.sim & sim > 0.1
  docs <- apply(results, 1, function(x) length(x[x == TRUE]))
  docnames <- names(docs)[docs > 0]
  gc()
  return(docnames)
}
It works well when using apply:
system.time(rmdocs<-apply(ij,1,getdocs))
When using parRapply:
library(snow)
library(skmeans)
cl<-makeCluster(2)
clusterExport(cl,list("getdocs","sample.dtm","start","end"))
system.time(rmdocs<-parRapply(cl,ij,getdocs))
Error:
Error in checkForRemoteErrors(val) :
2 nodes produced errors; first error: attempt to set 'rownames' on an object with no dimensions
Timing stopped at: 0.01 0 0.04
It seems that sample.dtm couldn't be used in parRapply. I'm confused. Can anyone help me? Thanks!
In addition to exporting objects, you need to load the necessary packages on the cluster workers. In your case, the result of not doing so is that there isn't a dimnames method defined for "DocumentTermMatrix" objects, causing rownames<- to fail.
You can load packages on the cluster workers with the clusterEvalQ function:
clusterEvalQ(cl, { library(tm); library(skmeans) })
After doing that, rownames(combi)<-ci will work correctly.
Also, if you want to see the output from cat, you should use the makeCluster outfile argument:
cl <- makeCluster(2, outfile='')
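Putting those two points together, the cluster setup for this example might look like this (a sketch combining the fixes above):
library(snow)
cl <- makeCluster(2, outfile = '')                    # outfile = '' shows output from cat()
clusterEvalQ(cl, { library(tm); library(skmeans) })   # load the packages on the workers
clusterExport(cl, list("getdocs", "sample.dtm", "start", "end"))
system.time(rmdocs <- parRapply(cl, ij, getdocs))
stopCluster(cl)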