I have R installed on a machine without internet access. Is it possible to update a specific package from a local source archive? What would be the right way to do it?
install.packages(pkgs, lib, repos = getOption("repos"),
                 contriburl = contrib.url(repos, type),
                 method, available = NULL, destdir = NULL,
                 dependencies = NA, type = getOption("pkgType"),
                 configure.args = getOption("configure.args"),
                 configure.vars = getOption("configure.vars"),
                 clean = FALSE, Ncpus = getOption("Ncpus", 1L),
                 verbose = getOption("verbose"),
                 libs_only = FALSE, INSTALL_opts, quiet = FALSE,
                 keep_outputs = FALSE, ...)
Use the command above. For more help, refer to https://stat.ethz.ch/R-manual/R-devel/library/utils/html/install.packages.html
For updating packages: https://stat.ethz.ch/R-manual/R-devel/library/utils/html/update.packages.html
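As a minimal sketch of an offline install (the tarball path below is a placeholder, not from the thread): setting repos = NULL makes install.packages() treat pkgs as local file paths rather than repository package names.

```r
# repos = NULL disables repository lookup so `pkgs` is treated as a local
# file path; type = "source" marks the file as a source tarball.
# "~/pkgs/foo_1.0.tar.gz" is a hypothetical path -- substitute your archive.
offline_install <- function(tarball = "~/pkgs/foo_1.0.tar.gz") {
  install.packages(tarball, repos = NULL, type = "source")
}
```

Copy the `.tar.gz` (or `.zip` on Windows, with `type = "binary"`) onto the machine first; any dependencies must be installed the same way, since nothing can be fetched automatically.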
I am trying to use the TreeTagger function in R for POS analysis (on a Mac laptop).
However, I keep getting an error message when running the following code:
tagged.text <- treetag(reviews,
                       treetagger = "kRp.env",
                       rm.sgml = TRUE,
                       lang = "en",
                       apply.sentc.end = TRUE,
                       sentc.end = c(".", "!", "?", ";", ":"),
                       encoding = NULL,
                       TT.options = list(path = "~/Desktop/TreeTagger/", preset = "en"),
                       debug = FALSE,
                       TT.tknz = TRUE,
                       format = "file",
                       stopwords = NULL,
                       stemmer = NULL)
Error Message:
Error in path.expand(path) : invalid 'path' argument
I set up the package as follows:
set.kRp.env(TT.cmd = "manual", lang = "en",
path = "~/Desktop/TreeTagger/",
preset = "eng",
validate = TRUE)
set.kRp.env(TT.cmd = "manual", lang = "en",
path = "~/Desktop/TreeTagger/bin",
preset = "eng",
validate = TRUE)
set.kRp.env(TT.cmd = "manual", lang = "en",
path = "~/Desktop/TreeTagger/cmd",
preset = "eng",
validate = TRUE)
Does anyone know what the issue here could be?
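As a first diagnostic (not from the thread): "invalid 'path' argument" from path.expand() usually means the path it received was NULL or not a character string, so it is worth confirming that the configured path expands and exists before calling treetag().

```r
# path.expand() requires a character vector; passing NULL or a non-string
# produces the "invalid 'path' argument" error seen above.
tt_path <- path.expand("~/Desktop/TreeTagger")  # the asker's path

is.character(tt_path)  # should be TRUE
dir.exists(tt_path)    # FALSE here would explain downstream failures
```

If the directory check fails, fixing the path (and making the preset consistent between the treetag() call and set.kRp.env()) is the first thing to try.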
I am currently running an ensemble niche model analysis on a Linux cluster in a CentOS 6 environment. The package I am using is SSDM. My code is as follows:
Env <- load_var(path = getwd(), files = NULL,
                format = c(".grd", ".tif", ".asc", ".sdat", ".rst", ".nc",
                           ".envi", ".bil", ".img"),
                categorical = "af_anthrome.asc",
                Norm = TRUE, tmp = TRUE, verbose = TRUE, GUI = FALSE)
Env
head(Env)
warnings()
Occurrences <- load_occ(path = getwd(), Env,
                        file = "Final_African_Bird_occurrence_rarefied_points.txt",
                        Xcol = "decimallon", Ycol = "decimallat",
                        Spcol = "species", GeoRes = FALSE,
                        sep = ",", verbose = TRUE, GUI = FALSE)
head(Occurrences)
warnings()
SSDM <- stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA",
                          "MAXENT", "ANN", "SVM"),
                        Occurrences, Env,
                        Xcol = "decimallon", Ycol = "decimallat",
                        Pcol = NULL, Spcol = "species", rep = 1,
                        name = "Stack", save = TRUE, path = getwd(), PA = NULL,
                        cv = "holdout", cv.param = c(0.75, 1), thresh = 1001,
                        axes.metric = "Pearson", uncertainty = TRUE, tmp = TRUE,
                        ensemble.metric = c("AUC", "Kappa", "sensitivity", "specificity"),
                        ensemble.thresh = c(0.75, 0.75, 0.75, 0.75), weight = TRUE,
                        method = "bSSDM", metric = "SES", range = NULL,
                        endemism = NULL, verbose = TRUE, GUI = FALSE, cores = 125)
save.stack(SSDM, name = "Bird", path = getwd(),
           verbose = TRUE, GUI = FALSE)
When running the stack_modelling function I get this Error message:
Error in checkForRemoteErrors(val) :
125 nodes produced errors; first error: comparison of these types is not
implemented
Calls: stack_modelling ... clusterApply -> staticClusterApply ->
checkForRemoteErrors
In addition: Warning message:
In stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA", "MAXENT", :
It seems you attributed more cores than your CPU have !
Execution halted
Error in unserialize(node$con) : error reading from connection
Calls: <Anonymous> ... doTryCatch -> recvData -> recvData.SOCKnode ->
unserialize
In addition: Warning message:
In eval(e, x, parent.frame()) :
Incompatible methods ("Ops.data.frame", "Ops.factor") for "=="
Execution halted
I understand that I may have requested more cores than I have access to, but the same error message crops up even when I use a fraction of the cores. I am not entirely sure what this error is telling me or how to fix it, as I am new to working on a cluster. Is it a problem with the parallel processing of the data? Is there a line of code that could fix this issue?
Thanks
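One quick check before calling stack_modelling(), as a sketch (the functions below are from base R's parallel package, which SSDM uses internally): compare the requested `cores` against what the node actually exposes.

```r
library(parallel)

# Number of logical cores the current node exposes; requesting more than
# this is what triggers the "more cores than your CPU have" warning.
n_cores <- detectCores()

# Leave one core free for the master process.
cores_to_use <- max(1, n_cores - 1)
```

Note that on a managed cluster the scheduler's allocation (e.g. the cores requested in the job script) can be smaller than what detectCores() reports, so `cores` should not exceed either number.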
I've been trying to set up a Selenium session with Rwebdriver, but so far with no success. I believe I have initiated a Selenium server, but whenever I run
`> Rwebdriver::start_session(root = "http://localhost:4444/wd/hub/")`
I get the following error:
Error in serverDetails$value[[1]] : subscript out of bounds
After I set options(error = recover) to run a debug, I get the following result:
function (root = NULL, browser = "firefox", javascriptEnabled = TRUE,
takesScreenshot = TRUE, handlesAlerts = TRUE, databaseEnabled = TRUE,
cssSelectorsEnabled = TRUE)
{
server <- list(desiredCapabilities = list(browserName = browser,
javascriptEnabled = javascriptEnabled, takesScreenshot = takesScreenshot,
handlesAlerts = handlesAlerts, databaseEnabled = databaseEnabled,
cssSelectorsEnabled = cssSelectorsEnabled))
newSession <- getURL(paste0(root, "session"), customrequest = "POST",
httpheader = c(`Content-Type` = "application/json;charset=UTF-8"),
postfields = toJSON(server))
serverDetails <- fromJSON(rawToChar(getURLContent(paste0(root,
"sessions"), binary = TRUE)))
sessionList <- list(time = Sys.time(), sessionURL = paste0(root,
"session/", serverDetails$value[[1]]$id))
class(sessionList) <- "RSelenium"
print("Started new session. sessionList created.")
seleniumSession <<- sessionList
}
The issue is at `paste0(root, "session/", serverDetails$value[[1]]$id)`, and sure enough, whenever I print serverDetails$value, all that appears is list(). Does anyone know how to fix this? Thank you in advance for your attention and patience.
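As a sketch of a defensive guard (the helper below is hypothetical, not part of Rwebdriver): an empty serverDetails$value means the server reports no active sessions, typically because the preceding POST to /session failed silently, so it is worth checking the list length before indexing into it.

```r
# Hypothetical helper: return the first session id, or NA when the server
# reports no sessions (the empty list() the asker observed).
first_session_id <- function(serverDetails) {
  if (length(serverDetails$value) == 0) {
    return(NA_character_)  # no sessions: the POST to /session likely failed
  }
  serverDetails$value[[1]]$id
}

first_session_id(list(value = list()))                     # NA_character_
first_session_id(list(value = list(list(id = "abc123"))))  # "abc123"
```

An NA result points the investigation at the session-creation request itself (server logs, browser driver availability) rather than at the indexing code.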
I am currently running a stacked species distribution model through a linux cluster using the following code:
library(SSDM)
setwd("/home/nikhail1")
Env <- load_var(path = getwd(), files = NULL,
                format = c(".grd", ".tif", ".asc", ".sdat", ".rst", ".nc",
                           ".envi", ".bil", ".img"),
                categorical = "af_anthrome.asc",
                Norm = TRUE, tmp = TRUE, verbose = TRUE, GUI = FALSE)
Occurrences <- load_occ(path = getwd(), Env,
                        file = "Final_African_Bird_occurrence_rarefied_points.csv",
                        Xcol = "decimallon", Ycol = "decimallat",
                        Spcol = "species", GeoRes = FALSE,
                        sep = ",", verbose = TRUE, GUI = FALSE)
head(Occurrences)
warnings()
SSDM <- stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA",
                          "MAXENT", "ANN", "SVM"),
                        Occurrences, Env,
                        Xcol = "decimallon", Ycol = "decimallat",
                        Pcol = NULL, Spcol = "species", rep = 1,
                        name = "Stack", save = TRUE, path = getwd(), PA = NULL,
                        cv = "holdout", cv.param = c(0.75, 1), thresh = 1001,
                        axes.metric = "Pearson", uncertainty = TRUE, tmp = TRUE,
                        ensemble.metric = c("AUC", "Kappa", "sensitivity", "specificity"),
                        ensemble.thresh = c(0.75, 0.75, 0.75, 0.75), weight = TRUE,
                        method = "bSSDM", metric = "SES", range = NULL,
                        endemism = NULL, verbose = TRUE, GUI = FALSE, cores = 200)
save.stack(SSDM, name = "Bird", path = getwd(),
           verbose = TRUE, GUI = FALSE)
I receive the following error message when trying to run my analyses:
Error in socketConnection("localhost", port = port, server = TRUE, blocking
= TRUE, :
all connections are in use
Calls: stack_modelling ... makePSOCKcluster -> newPSOCKnode ->
socketConnection
How do I increase the maximum number of connections? Can I do this within the SSDM package, since parallel support is built in? Or do I have to apply a specific function from another package to ensure that my job runs smoothly across the cluster?
Thank you for your help,
Nikhail
The maximum number of connections you can have open in R is 128, of which 125 are available to the user (three are reserved for stdin, stdout and stderr). To increase that limit, you need to rebuild R from source. See
https://github.com/HenrikBengtsson/Wishlist-for-R/issues/28
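To see how this interacts with SSDM's `cores` argument: each PSOCK worker holds one socket connection, so `cores` must stay below the number of free connections. A sketch using base R only (the 125 figure is the default build limit mentioned above):

```r
# Connections currently open; each PSOCK worker will add one more.
open_now <- nrow(showConnections(all = FALSE))

max_user_connections <- 125  # default limit in a stock R build

# Upper bound on workers that can still get a socket on this session,
# capped by the cores the machine actually exposes.
cores_upper <- max_user_connections - open_now
cores_safe  <- min(parallel::detectCores() - 1, cores_upper)
```

With `cores = 200`, the cluster setup needs 200 sockets and fails immediately, which is why dropping `cores` below the limit (or rebuilding R with a higher one) is the practical fix.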
I have the following code in an Execute R Script module.
# Input
data1 <- maml.mapInputPort(1) # Qualitative with 8 variables
install.packages("src/graphics.zip", lib.loc = ".", repos = NULL, verbose = TRUE)
install.packages("src/grDevices.zip", lib.loc = ".", repos = NULL, verbose = TRUE)
install.packages("src/stats.zip", lib.loc = ".", repos = NULL, verbose = TRUE)
install.packages("src/utils.zip", lib.loc = ".", repos = NULL, verbose = TRUE)
install.packages("src/MASS.zip", lib.loc = ".", repos = NULL, verbose = TRUE)
success <- library("MASS", lib.loc = ".", logical.return = TRUE, verbose = TRUE)
library(MASS)
mca <- mca(data1, nf = 10)
mca1 <- data.frame(mca$rs)
# Output
maml.mapOutputPort("mca1");
When I execute I am getting the following error:
RPackage library exception: Attempting to obtain R output before invoking execution process. (Error 1000)
But it works fine in RStudio.
I also have a node that does the same process, and it works without errors. I have executed this one several times; sometimes it has worked and other times it has returned the error.
Please let me know what the issue is.
With regards,
Celia
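To help isolate the platform issue from the R code itself, here is a minimal mca() call that runs anywhere, using the farms dataset shipped with MASS rather than the asker's data1 (so the dataset and dimensions are not from the thread):

```r
library(MASS)

# farms: 20 farms described by four categorical (factor) variables --
# the kind of qualitative input mca() expects.
m <- mca(farms, nf = 2)     # multiple correspondence analysis, 2 factors
scores <- data.frame(m$rs)  # row scores, mirroring the asker's mca$rs step
head(scores)
```

If this runs in the Execute R Script module while the original code fails intermittently, the problem is more likely in the package-installation steps or the module's environment than in the mca() call.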