JAGS not recognizing values stored in R's global environment

I'm running JAGS with R v3.6.1 and RStudio v1.2 on Windows 10, using the R2jags package. JAGS does not appear to find the values I've stored in R for MCMC settings such as n.iter, n.burnin, etc., because my code:
out2 <- jags.parallel(data = data, inits = inits, parameters.to.save = parameters,
                      model.file = "C:\\Users\\jj1995\\Desktop\\VIP\\Thanakorn\\DLM\\district model.txt",
                      n.chains = n.chain, n.thin = n.thin, n.iter = n.iter, n.burnin = n.burn)
produces the error:
Error in checkForRemoteErrors(lapply(cl, recvResult)) :
  3 nodes produced errors; first error: object 'n.burn' not found
If I replace a stored value's name with a number (n.burnin = 10000), it returns the same error, but for a different MCMC setting. I assume JAGS will also be unable to find the lists and data frames I've stored in the global environment. I thought my system's firewall was preventing JAGS from accessing the global environment, so I disabled my antivirus measures and ran both R and RStudio as an administrator. This did not solve the problem.
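The error comes from checkForRemoteErrors, i.e. from the worker nodes: jags.parallel runs each chain on a separate node, and those nodes do not automatically see objects from the calling session. A minimal sketch of one commonly suggested workaround, assuming a version of R2jags whose jags.parallel exposes the export_obj_names argument (check ?jags.parallel before relying on it); the object names are the asker's and the values are placeholders:

library(R2jags)

# placeholder MCMC settings
n.chain <- 3
n.thin  <- 10
n.iter  <- 20000
n.burn  <- 10000

# Explicitly ship the MCMC settings (and any other required objects)
# to the worker nodes before the chains are started.
out2 <- jags.parallel(data = data, inits = inits, parameters.to.save = parameters,
                      model.file = "C:\\Users\\jj1995\\Desktop\\VIP\\Thanakorn\\DLM\\district model.txt",
                      n.chains = n.chain, n.thin = n.thin, n.iter = n.iter, n.burnin = n.burn,
                      export_obj_names = c("n.chain", "n.thin", "n.iter", "n.burn"))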

Related

Why does running the tutorial example fail with RStan?

I installed RStan successfully and the library loads. I try to run a tutorial example (https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started#example-1-eight-schools). After applying the stan function
fit1 <- stan(
  file = "schools.stan",  # Stan program
  data = schools_data,    # named list of data
  chains = 4,             # number of Markov chains
  warmup = 1000,          # number of warmup iterations per chain
  iter = 2000,            # total number of iterations per chain
  cores = 1,              # number of cores (could use one per chain)
  refresh = 0             # no progress shown
)
I get the following error:
Error in compileCode(f, code, language = language, verbose = verbose) :
C:\rtools42\x86_64-w64-mingw32.static.posix\bin/ld.exe: file1c34165764a4.o:file1c34165764a4.cpp:(.text$_ZN3tbb8internal26task_scheduler_observer_v3D0Ev[_ZN3tbb8internal26task_scheduler_observer_v3D0Ev]+0x1d): undefined reference to `tbb::internal::task_scheduler_observer_v3::observe(bool)'
C:\rtools42\x86_64-w64-mingw32.static.posix\bin/ld.exe: file1c34165764a4.o:file1c34165764a4.cpp:(.text$_ZN3tbb10interface623task_scheduler_observerD1Ev[_ZN3tbb10interface623task_scheduler_observerD1Ev]+0x1d): undefined reference to `tbb::internal::task_scheduler_observer_v3::observe(bool)'
C:\rtools42\x86_64-w64-mingw32.static.posix\bin/ld.exe: file1c34165764a4.o:file1c34165764a4.cpp:(.text$_ZN3tbb10interface623task_scheduler_observerD1Ev[_ZN3tbb10interface623task_scheduler_observerD1Ev]+0x3a): undefined reference to `tbb::internal::task_scheduler_observer_v3::observe(bool)'
C:\rtools42\x86_64-w64-mingw32.static.posix\bin/ld.exe: file1c34165764a4.o:file1c34165764a4.cpp:(.text$_ZN3tbb10interface
Error in sink(type = "output") : invalid connection
Simply running example(stan_model, run.dontrun=T) gives the same error.
What does this error mean?
Is Rtools wrongly installed? My PATH seems to contain the correct folder, C:\\rtools42\\x86_64-w64-mingw32.static.posix\\bin;. Is something wrong with the version of my RStan package? I am struggling to interpret this error.
What should I try to solve it?
Apparently Rtools42 is incompatible with the version of RStan currently on CRAN. The solution that worked for me is found here: https://github.com/stan-dev/rstan/wiki/Configuring-C---Toolchain-for-Windows#r-42
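For reference, a sketch of the kind of fix that page described at the time: replacing the CRAN builds with builds from the Stan team's own package repository. The repository URL and package set here are assumptions, so follow the linked page rather than this snippet if they disagree.

# Remove the CRAN builds, then install Rtools42-compatible builds
# from the Stan repository (per the linked wiki page).
remove.packages(c("StanHeaders", "rstan"))
install.packages("rstan", repos = c("https://mc-stan.org/r-packages/", getOption("repos")))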

Using "snow" parallel operations in bootstrap_parameters/model on merMod object (R)

I've been using bootstrap_parameters (from the parameters package in R) on generalised linear mixed models produced with glmmTMB. These work fine without parallel processing (parallel = "no") and also work fine on my old and slow Mac using parallel = "multicore". I'm working on a new PC (Windows OS), so I need to use parallel = "snow"; however, I get the following error:
system.time(b <- bootstrap_parameters(m1, iterations = 10, parallel = "snow", n_cpus = 6))
Error in data.frame(..., check.names = FALSE) :
  arguments imply differing number of rows: 0, 1
In addition: Warning message:
In lme4::bootMer(model, boot_function, nsim = iterations, verbose = FALSE,  :
  some bootstrap runs failed (10/10)
Timing stopped at: 0.89 0.3 7.11
If I select n_cpus = 1, the function works; it also works fine if I feed bootstrap_parameters or bootstrap_model an lm object (where the underlying code uses boot::boot). I have narrowed the problem down to bootMer (lme4). I suspect the dataset exported via clusterExport is landing in an environment different from the one where the clustered bootMer function is looking. The following is a reproducible example:
library(glmmTMB)
library(parameters)
library(parallel)
library(lme4)

m1 <- glmmTMB(count ~ mined + (1 | site), zi = ~mined,
              family = poisson, data = Salamanders)
summary(m1)

cl <- makeCluster(6)
clusterEvalQ(cl, library("lme4"))
clusterExport(cl, varlist = c("Salamanders"))
system.time(b <- bootstrap_parameters(m1, iterations = 10, parallel = "snow", n_cpus = 6))
stopCluster(cl)
Any ideas on solving this problem?
You need to run clusterEvalQ(cl, library("glmmTMB")) on the cluster, as sketched below. From https://github.com/glmmTMB/glmmTMB/issues/843:
This issue is more or less resolved by a documentation patch (we need to explicitly clusterEvalQ(cl, library("glmmTMB"))). The only question is whether we can make this any easier for users. There are two problems here: (1) when the user sets up their own cluster rather than leaving it to be done in bootMer, more explicit clusterEvalQ/clusterExport stuff is necessary in any case; (2) bootMer internally does parallel::clusterExport(cl, varlist=getNamespaceExports("lme4")) if it is setting up the cluster (not if the cluster is set up and passed to bootMer by the user), but we wouldn't expect it to extend the same courtesy to glmmTMB ...
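Applied to the reproducible example above, the cluster setup becomes the following; this is the asker's own code with the single missing line added:

cl <- makeCluster(6)
clusterEvalQ(cl, library("lme4"))
clusterEvalQ(cl, library("glmmTMB"))  # the missing piece: load glmmTMB on every worker
clusterExport(cl, varlist = c("Salamanders"))
system.time(b <- bootstrap_parameters(m1, iterations = 10, parallel = "snow", n_cpus = 6))
stopCluster(cl)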

How to save / load a random forest model created via h2o4gpu library in R? [duplicate]

I have created an R model using the mlr and h2o packages as below:
library(mlr)   # makeLearner() comes from mlr
library(h2o)
rfh20.lrn = makeLearner("classif.h2o.randomForest", predict.type = "prob")
I did the model tuning; the modelling initiates the h2o JVM and connects R to the h2o cluster. Once the modelling was done, I saved the model as an .rds file:
saveRDS(h2orf_mod, "h2orf_mod.rds")
I do the prediction as:
pred_h2orf <- predict(h2orf_mod, newdata = newdata)
Then I shut down h2o:
h2o.shutdown()
Later, I reload the saved model
h2orf_mod <- readRDS("h2orf_mod.rds")
and initiate h2o so the JVM connects R to the h2o cluster:
h2o.init()
Now the model comes from the locally saved file and the cluster doesn't know about it, so when I run the prediction I get an error, which is expected:
ERROR: Unexpected HTTP Status code: 404 Not Found (url = http://localhost:54321/4/Predictions/models/DRF_model_R_1553297204511_743/frames/data.frame_sid_b520_1)
water.exceptions.H2OKeyNotFoundArgumentException
[1] "water.exceptions.H2OKeyNotFoundArgumentException: Object 'DRF_model_R_1553297204511_743' not found in function: predict for argument: model"
Error in .h2o.doSafeREST(h2oRestApiVersion = h2oRestApiVersion, urlSuffix = page, : ERROR MESSAGE: Object 'DRF_model_R_1553297204511_743' not found in function: predict for argument: model
How should I handle this? Should the saved model be uploaded to the cluster, or is something else needed? Rebuilding the model every time is not an effective approach.
As per the comment, instead of saving the model with saveRDS/readRDS, save it with h2o.saveModel:
h2oModelsaved <- h2o.saveModel(object = h2orf_model, path = "C:/User/Models/")
Reload the model:
h2o.init()
h2oModelLoaded <- h2o.loadModel(h2oModelsaved)
Convert the test data to an h2o frame:
newdata <- as.h2o(testdata)
Then call predict:
pred_h2orf2 <- predict(h2oModelLoaded, newdata = newdata)
This works perfectly. Note that h2o.saveModel returns the path of the saved model on disk, which is why its return value can be passed directly to h2o.loadModel.

What does this error mean while running ksvm of the kernlab package in R

I am calling the ksvm method of the kernlab package in R using the following syntax:
svmFit = ksvm(x=solTrainXtrans, y=solTrainYSVM, kernel="stringdot", kpar="automatic", C=1, epsilon=0.1)
The x parameter is a data.frame with feature values and the y parameter is a list with various values.
I get the following error when running the above line.
Error in do.call(kernel, kpar) : second argument must be a list
What is it trying to tell me here?
Try setting kpar = list(length = 4, lambda = 0.5). Does that help?
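For context, a hedged reading of the error: ksvm constructs the kernel via do.call(kernel, kpar), so kpar must be an actual list of parameters for the chosen kernel; the "automatic" shortcut only applies to a few kernels (kernlab estimates sigma for the Gaussian RBF and Laplacian kernels), not to stringdot. A sketch of the amended call, reusing the asker's objects and the suggested parameters; note that string kernels expect character data, so whether stringdot suits this data set at all is a separate question:

library(kernlab)

# kpar must be a named list of kernel parameters; for stringdot
# these include the substring length and the decay factor lambda.
svmFit <- ksvm(x = solTrainXtrans, y = solTrainYSVM,
               kernel = "stringdot",
               kpar = list(length = 4, lambda = 0.5),
               C = 1, epsilon = 0.1)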

Error occurring in caret when running on a cluster

I am running the train function from caret on a cluster via doRedis. For the most part it works, but every so often I get errors of this nature at the very end:
error calling combine function:
<simpleError: obj$state$numResults <= obj$state$numValues is not TRUE>
and
Error in names(resamples) <- gsub("^\\.", "", names(resamples)) :
  attempt to set an attribute on NULL
When I run traceback() I get:
5: nominalTrainWorkflow(dat = trainData, info = trainInfo, method = method,
       ppOpts = preProcess, ctrl = trControl, lev = classLevels,
       ...)
4: train.default(x, y, weights = w, ...)
3: train(x, y, weights = w, ...)
2: train.formula(couple ~ ., training.balanced, method = "nnet",
       preProcess = "range", tuneGrid = nnetGrid, MaxNWts = 2200)
1: caret::train(couple ~ ., training.balanced, method = "nnet",
       preProcess = "range", tuneGrid = nnetGrid, MaxNWts = 2200)
These errors are not easily reproducible (i.e. they happen sometimes, but not consistently) and only occur at the end of the run. The stdout on the cluster shows all tasks running and completed, so I am a bit flummoxed.
Has anyone encountered these errors? If so, do you understand the cause and, even better, have a fix?
I imagine you've already solved this problem, but I ran into the same issue on my cluster consisting of Linux and Windows systems. I was running the server on Ubuntu 14.04 and had noticed the warnings, when starting the server service, about having 'transparent huge pages' enabled in the Linux kernel. I ignored that message and began running training exercises where most of the machines were maxed out with workers. I received the same error at the end of the run:
error calling combine function:
<simpleError: obj$state$numResults <= obj$state$numValues is not TRUE>
After a lot of head scratching and useless tinkering, I decided to address that warning by following the instructions here: http://ubuntuforums.org/showthread.php?t=2255151
Essentially, I installed hugeadm using:
sudo apt-get install hugeadm
Then I disabled transparent huge pages using:
hugeadm --thp-never
Note that this change will be undone on restart of the computer.
When I re-ran my training process it ran without any errors.
Hope that helps.
Cheers,
Eric
