Biomod2 Maxent - R

I am new to R and the biomod2 package.
I have the problem that when I try to run the MAXENT.Phillips model with biomod2, I get the following error:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
cannot open file 'A.Bicornis/models/A.BicornisFirstModeling/A.Bicornis_AllData_RUN2_MAXENT.Phillips_outputs/A.Bicornis_AllData_RUN2_Pred_swd.csv': No such file or directory
I have updated the package and it still does not work. When I run the code, the Maxent window opens and I enter the variables and points there. Then I run it in the Maxent window and get this error.
The code that I use:
predictor<-stack(Bio2,Bio3,Bio8,Bio10,Bio13,Bio14)
DataSpecies1 <-data.frame(workBiomod1,package="biomod2")
myRespName1 <-as.character('A.Bicornis')
colnames(DataSpecies1)[1]<-"A.Bicornis"
myResp1 <- as.data.frame(DataSpecies1[,myRespName1])
myRespXY1 <- DataSpecies1[,c("DECLONGITUDE","DECLATITUDE")]
myExpl1<-predictor
myBiomodData1 <- BIOMOD_FormatingData(resp.var = myResp1,
                                      expl.var = myExpl1,
                                      resp.xy = myRespXY1,
                                      resp.name = myRespName1)
myBiomodData1
plot(myBiomodData1)
myBiomodOption1 <- BIOMOD_ModelingOptions(
  RF = list(ntree = 128),
  MAXENT.Phillips = list(path_to_maxent.jar = "C:\\Users\\nikom\\Desktop\\Maxent\\maxent",
                         memory_allocated = NULL,
                         background_data_dir = "D:\\TFM\\Especies\\",
                         linear = FALSE, quadratic = FALSE, product = TRUE,
                         threshold = TRUE, hinge = TRUE))
myBiomodModelOut1 <- BIOMOD_Modeling(
  myBiomodData1,
  models = c('RF', 'MAXENT.Phillips'),
  models.options = myBiomodOption1,
  NbRunEval = 10,
  DataSplit = 80,
  Prevalence = 0.5,
  VarImport = 0,
  models.eval.meth = c('TSS', 'ROC', 'ACCURACY'),
  SaveObj = TRUE,
  rescal.all.models = TRUE,
  do.full.models = FALSE,
  modeling.id = paste(myRespName1, "FirstModeling", sep = ""))
myBiomodModelOut1
myBiomodModelEval1 <- get_evaluations(myBiomodModelOut1)
dimnames(myBiomodModelEval1)
myBiomodModelEval1["TSS","Testing.data","RF",,]
myBiomodModelEval1["ROC","Testing.data",,,]
get_variables_importance(myBiomodModelOut1)
myBiomodEM1 <- BIOMOD_EnsembleModeling(
  modeling.output = myBiomodModelOut1,
  chosen.models = 'all',
  em.by = 'all',
  eval.metric = c('TSS'),
  eval.metric.quality.threshold = c(0.7),
  prob.mean = TRUE,
  prob.cv = TRUE,
  prob.ci = TRUE,
  prob.ci.alpha = 0.05,
  prob.median = TRUE,
  committee.averaging = TRUE,
  prob.mean.weight = TRUE,
  prob.mean.weight.decay = 'proportional')
workBiomod1 consists of XY coordinates, values of 1 (presence) and 0 (absence), and the values of the corresponding variables.
Meanwhile, random forest works fine.
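Since random forest runs fine, the failure is specific to how biomod2 hands data to Maxent: the missing Pred_swd.csv is a file that the Maxent run itself should have produced. One common cause (an assumption, not confirmed in the post) is that `path_to_maxent.jar` does not point at the folder that actually contains `maxent.jar`, so a quick sanity check before modelling is worthwhile:

```r
# Hypothetical check: confirm maxent.jar really sits inside the folder
# passed to path_to_maxent.jar (forward slashes also work on Windows)
maxent_dir <- "C:/Users/nikom/Desktop/Maxent/maxent"
file.exists(file.path(maxent_dir, "maxent.jar"))  # should be TRUE

# Java itself must also be callable from R's shell:
system("java -version")
```

If the first check returns FALSE, point `path_to_maxent.jar` at the directory holding `maxent.jar`. The fact that the Maxent GUI opens and waits for manual input may also indicate the jar is being launched without the arguments biomod2 normally passes.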

Related

uwot is throwing an error running the Monocle3 R package's "find_gene_module()" function, likely an issue with how my data is formatted

I am trying to run the Monocle3 function find_gene_modules() on a cell_data_set (cds) but am getting a variety of errors. I have not had any other issues before this. I am working with an imported Seurat object. My first error stated that the number of rows was not the same between my cds and cds@preprocess_aux$gene_loadings values. I took a look and it seems my gene loadings were a list under cds@preprocess_aux@listData$gene_loadings. I then ran the following code to make a data-frame version of the gene loadings:
test <- seurat@assays$RNA@counts@Dimnames[[1]]
test <- as.data.frame(test)
cds@preprocess_aux$gene_loadings <- test
rownames(cds@preprocess_aux$gene_loadings) <- cds@preprocess_aux$gene_loadings[, 1]
This created a cds@preprocess_aux$gene_loadings data frame with the same number of rows and row names as my cds. That resolved my original error but led to a new one being thrown from uwot:
15:34:02 UMAP embedding parameters a = 1.577 b = 0.8951
Error in uwot(X = X, n_neighbors = n_neighbors, n_components = n_components, :
No numeric columns found
Running traceback() produces the following information.
> traceback()
4: stop("No numeric columns found")
3: uwot(X = X, n_neighbors = n_neighbors, n_components = n_components,
metric = metric, n_epochs = n_epochs, alpha = learning_rate,
scale = scale, init = init, init_sdev = init_sdev, spread = spread,
min_dist = min_dist, set_op_mix_ratio = set_op_mix_ratio,
local_connectivity = local_connectivity, bandwidth = bandwidth,
gamma = repulsion_strength, negative_sample_rate = negative_sample_rate,
a = a, b = b, nn_method = nn_method, n_trees = n_trees, search_k = search_k,
method = "umap", approx_pow = approx_pow, n_threads = n_threads,
n_sgd_threads = n_sgd_threads, grain_size = grain_size, y = y,
target_n_neighbors = target_n_neighbors, target_weight = target_weight,
target_metric = target_metric, pca = pca, pca_center = pca_center,
pca_method = pca_method, pcg_rand = pcg_rand, fast_sgd = fast_sgd,
ret_model = ret_model || "model" %in% ret_extra, ret_nn = ret_nn ||
"nn" %in% ret_extra, ret_fgraph = "fgraph" %in% ret_extra,
batch = batch, opt_args = opt_args, epoch_callback = epoch_callback,
tmpdir = tempdir(), verbose = verbose)
2: uwot::umap(as.matrix(preprocess_mat), n_components = max_components,
metric = umap.metric, min_dist = umap.min_dist, n_neighbors = umap.n_neighbors,
fast_sgd = umap.fast_sgd, n_threads = cores, verbose = verbose,
nn_method = umap.nn_method, ...)
1: find_gene_modules(cds[pr_deg_ids, ], reduction_method = "UMAP",
max_components = 2, umap.metric = "cosine", umap.min_dist = 0.1,
umap.n_neighbors = 15L, umap.fast_sgd = FALSE, umap.nn_method = "annoy",
k = 20, leiden_iter = 1, partition_qval = 0.05, weight = FALSE,
resolution = 0.001, random_seed = 0L, cores = 1, verbose = T)
I really have no idea what I am doing wrong or how to proceed from here. Does anyone with experience with uwot know where my error is coming from? Really appreciate the help!
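The traceback shows uwot stopping because `as.matrix(preprocess_mat)` contains no numeric columns. The replacement `gene_loadings` built above holds only gene names (a single character column), which would explain the error: `gene_loadings` is expected to be a numeric matrix of loadings with genes as row names. A minimal sketch of the difference (illustrative values only, not the poster's data):

```r
# A data frame of gene names is character, not numeric:
test <- as.data.frame(c("ACTB", "GAPDH"))
is.numeric(as.matrix(test))  # FALSE -> "No numeric columns found"

# What uwot can embed: a numeric genes-by-components matrix
loadings <- matrix(rnorm(6), nrow = 2,
                   dimnames = list(c("ACTB", "GAPDH"), paste0("PC", 1:3)))
is.numeric(loadings)  # TRUE
```

Rather than substituting gene names, the fix is likely to supply the actual numeric loadings (e.g. recomputed via preprocess_cds()) so that the rows genuinely match the cds.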

Error in `V<-`(`*tmp*`, value = `*vtmp*`) : invalid indexing

I used the bibliometrix package in R and want to plot some useful graphs.
library(bibliometrix)
??bibliometrix
D<-readFiles("E:\\RE\\savedrecs.txt")
M <- convert2df(D,dbsource = "isi", format= "plaintext")
results <- biblioAnalysis(M ,sep = ";" )
S<- summary(object=results,k=10, pause=FALSE)
plot(x=results,k=10,pause=FALSE)
options(width=100)
S <- summary(object = results, k = 10, pause = FALSE)
NetMatrix <- biblioNetwork(M1, analysis = "co-occurrences", network = "author_keywords", sep = ";")
S <- normalizeSimilarity(NetMatrix, type = "association")
net <- networkPlot(S, n = 200, Title = "co-occurrence network",type="fruchterman", labelsize = 0.7, halo = FALSE, cluster = "walktrap",remove.isolates=FALSE, remove.multiple=FALSE, noloops=TRUE, weighted=TRUE)
res <- thematicMap(net, NetMatrix, S)
plot(res$map)
But the networkPlot() call shows the error
Error in `V<-`(`*tmp*`, value = `*vtmp*`) : invalid indexing
Also I cannot do the CR; it always shows unlistCR. I cannot use the NetMatrix function either.
Can someone help me, please?
The problem is in the data itself, not in the code you presented. When I downloaded the data from bibliometrix.com and changed M1 to M (a typo?) in the biblioNetwork() call, everything worked perfectly. Please see the code below:
library(bibliometrix)
# Plot bibliometric analysis results
D <- readFiles("http://www.bibliometrix.org/datasets/savedrecs.txt")
M <- convert2df(D, dbsource = "isi", format= "plaintext")
results <- biblioAnalysis(M, sep = ";")
S <- summary(results)
plot(x = results, k = 10, pause = FALSE)
# Plot Bibliographic Network
options(width = 100)
S <- summary(object = results, k = 10, pause = FALSE)
NetMatrix <- biblioNetwork(M, analysis = "co-occurrences", network = "author_keywords", sep = ";")
S <- normalizeSimilarity(NetMatrix, type = "association")
net <- networkPlot(S, n = 200, Title = "co-occurrence network", type = "fruchterman",
labelsize = 0.7, halo = FALSE, cluster = "walktrap",
remove.isolates = FALSE, remove.multiple = FALSE, noloops = TRUE, weighted = TRUE)
# Plot Thematic Map
res <- thematicMap(net, NetMatrix, S)
str(M)
plot(res$map)

Error in checkForRemoteErrors(val) :

I am currently running an ensemble niche model analysis on a Linux cluster in a CentOS 6 environment. The package I am using is SSDM. My code is as follows:
Env <- load_var(path = getwd(), files = NULL,
                format = c(".grd", ".tif", ".asc", ".sdat", ".rst", ".nc",
                           ".envi", ".bil", ".img"),
                categorical = "af_anthrome.asc",
                Norm = TRUE, tmp = TRUE, verbose = TRUE, GUI = FALSE)
Env
head(Env)
warnings()
Occurrences <- load_occ(path = getwd(), Env,
                        file = "Final_African_Bird_occurrence_rarefied_points.txt",
                        Xcol = "decimallon", Ycol = "decimallat", Spcol = "species",
                        GeoRes = FALSE, sep = ",", verbose = TRUE, GUI = FALSE)
head(Occurrences)
warnings()
SSDM <- stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA",
                          "MAXENT", "ANN", "SVM"),
                        Occurrences, Env, Xcol = "decimallon", Ycol = "decimallat",
                        Pcol = NULL, Spcol = "species", rep = 1,
                        name = "Stack", save = TRUE, path = getwd(), PA = NULL,
                        cv = "holdout", cv.param = c(0.75, 1), thresh = 1001,
                        axes.metric = "Pearson", uncertainty = TRUE, tmp = TRUE,
                        ensemble.metric = c("AUC", "Kappa", "sensitivity", "specificity"),
                        ensemble.thresh = c(0.75, 0.75, 0.75, 0.75), weight = TRUE,
                        method = "bSSDM", metric = "SES", range = NULL,
                        endemism = NULL, verbose = TRUE, GUI = FALSE, cores = 125)
save.stack(SSDM, name = "Bird", path = getwd(), verbose = TRUE, GUI = FALSE)
When running the stack_modelling function I get this error message:
Error in checkForRemoteErrors(val) :
125 nodes produced errors; first error: comparison of these types is not
implemented
Calls: stack_modelling ... clusterApply -> staticClusterApply ->
checkForRemoteErrors
In addition: Warning message:
In stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA", "MAXENT", :
It seems you attributed more cores than your CPU have !
Execution halted
Error in unserialize(node$con) : error reading from connection
Calls: <Anonymous> ... doTryCatch -> recvData -> recvData.SOCKnode ->
unserialize
In addition: Warning message:
In eval(e, x, parent.frame()) :
Incompatible methods ("Ops.data.frame", "Ops.factor") for "=="
Execution halted
I understand that I may have allocated more cores than I have access to, but the same error message crops up even when I use a fraction of the cores. I am not entirely sure what this error is telling me or how to fix it, as I am new to working on a cluster. Is it a problem with parallel processing of the data? Is there a line of code that can help me fix this issue?
Thanks
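The warning "It seems you attributed more cores than your CPU have !" points at `cores = 125`: each worker in the socket cluster that SSDM's parallel code starts should map to a core that actually exists on the node. A quick check before calling stack_modelling (a sketch, assuming the job runs on a single node):

```r
library(parallel)
# How many physical cores does this node actually expose?
n_avail <- detectCores(logical = FALSE)
# Leave one core for the master process; never request more than exist
n_use <- max(1, n_avail - 1)
n_use  # pass this as cores = n_use in stack_modelling()
```

On a managed cluster the scheduler's allocation (e.g. the cores requested in the job script) is the real ceiling, which detectCores() may not reflect.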

Error in socketConnection in SSDM (from R)

I am currently running a stacked species distribution model on a Linux cluster using the following code:
library(SSDM)
setwd("/home/nikhail1")
Env <- load_var(path = getwd(), files = NULL,
                format = c(".grd", ".tif", ".asc", ".sdat", ".rst", ".nc",
                           ".envi", ".bil", ".img"),
                categorical = "af_anthrome.asc",
                Norm = TRUE, tmp = TRUE, verbose = TRUE, GUI = FALSE)
Occurrences <- load_occ(path = getwd(), Env,
                        file = "Final_African_Bird_occurrence_rarefied_points.csv",
                        Xcol = "decimallon", Ycol = "decimallat", Spcol = "species",
                        GeoRes = FALSE, sep = ",", verbose = TRUE, GUI = FALSE)
head(Occurrences)
warnings()
SSDM <- stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA",
                          "MAXENT", "ANN", "SVM"),
                        Occurrences, Env, Xcol = "decimallon", Ycol = "decimallat",
                        Pcol = NULL, Spcol = "species", rep = 1,
                        name = "Stack", save = TRUE, path = getwd(), PA = NULL,
                        cv = "holdout", cv.param = c(0.75, 1), thresh = 1001,
                        axes.metric = "Pearson", uncertainty = TRUE, tmp = TRUE,
                        ensemble.metric = c("AUC", "Kappa", "sensitivity", "specificity"),
                        ensemble.thresh = c(0.75, 0.75, 0.75, 0.75), weight = TRUE,
                        method = "bSSDM", metric = "SES", range = NULL,
                        endemism = NULL, verbose = TRUE, GUI = FALSE, cores = 200)
save.stack(SSDM, name = "Bird", path = getwd(), verbose = TRUE, GUI = FALSE)
I receive the following error message when trying to run my analyses:
Error in socketConnection("localhost", port = port, server = TRUE, blocking
= TRUE, :
all connections are in use
Calls: stack_modelling ... makePSOCKcluster -> newPSOCKnode ->
socketConnection
How do I increase the maximum number of connections? Can I do this within the SSDM package, since parallel is built in? Do I have to apply a specific function from another package to ensure that my job runs smoothly across the cluster?
Thank you for your help,
Nikhail
The maximum number of open connections you can have in R is 125. To increase the number of connections you can have open at the same time, you need to rebuild R from source. See
https://github.com/HenrikBengtsson/Wishlist-for-R/issues/28
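The arithmetic behind that limit: R is compiled with 128 connection slots, three of which are reserved for stdin, stdout, and stderr, leaving about 125 for user code, and every PSOCK worker consumes one. So without rebuilding R, the practical workaround is to cap the `cores` request safely below the ceiling:

```r
# Each parallel worker uses one socket connection; R reserves 3 of its
# 128 compile-time connection slots, so ~125 remain usable.
cores_requested <- 200
cores_used <- min(cores_requested, 125 - 1)
cores_used  # 124
```

With cores = 200, as in the question, the 126th socketConnection() call fails with "all connections are in use" before the cluster even starts.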

Using fitCopula in R yields Error in optim: initial value in 'vmmin' is not finite

QUESTION: The following reproducible example yields the error:
Error in optim(start, loglikCopula, lower = lower, upper = upper,
method = method, : initial value in 'vmmin' is not finite
I couldn't find out how to overcome this problem.
To reproduce my data set, simply copy, paste, and run the following code:
# REPRODUCIBLE DATA SET
###############################################
library(MASS)
library(stats)
library(copula)  # needed here for plackettCopula() and rCopula()
sim.cop <- plackettCopula(param = 10)
set.seed(1)
u <- rCopula(n = 800, copula = sim.cop)
V1 <- qt(p = u[, 1], df = 3.5)
V1 <- 0.013 * V1 + 0.0004
V2 <- qt(p = u[, 2], df = 3.5)
V2 <- 0.013 * V2 + 0.0004
m1 <- fitdistr(x = V1, densfun = "normal")
m2 <- fitdistr(x = V2, densfun = "normal")
V1.u <- pnorm(q = V1, mean = m1$estimate["mean"], sd = m1$estimate["sd"])
V2.u <- pnorm(q = V2, mean = m2$estimate["mean"], sd = m2$estimate["sd"])
data.u <- cbind(V1.u, V2.u)
The command that yields the error:
#fit copula
###############################################
library(copula)
cop <- normalCopula(param = 0.5, dim = 2)
fitting <- fitCopula(copula = cop, data = data.u, method = "ml")
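One common workaround (not part of the original post) is to fit on rank-based pseudo-observations with maximum pseudo-likelihood: `pobs()` maps the margins strictly into (0, 1), which sidesteps values at or numerically near 0 or 1 that can make the initial log-likelihood non-finite and trigger the "vmmin is not finite" error:

```r
library(copula)
# Pseudo-observations lie strictly inside (0, 1), avoiding a non-finite
# log-likelihood at the optimizer's starting value
data.pobs <- pobs(cbind(V1, V2))
cop <- normalCopula(param = 0.5, dim = 2)
fitting <- fitCopula(copula = cop, data = data.pobs, method = "mpl")
```

This assumes V1 and V2 from the reproducible block above; with parametric margins one can also check data.u with range() for exact 0s or 1s before fitting.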
