0. Session information
> sessionInfo()
R version 4.1.0 (2021-05-18)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19042)
1. Summary of my issue
I am experiencing a crash while using a modified version of response.plot2(), which I have named response.plot3(). The problem is not in the function itself, as the same crash occurs with response.plot2().
2. Code
library(biomod2)
library(raster)
library(reshape)
library(ggplot2)
setwd("xxx")
# I load the modified version of response.plot2()
source("/response.plot_modified.R", local = TRUE)
sp <- "NAME"
baseline_EU <- readRDS("./data/baseline_EU.rds")
initial.wd <- getwd()
setwd("models")
# Loading of formatted data and models calibrated by biomod
load(paste0(sp, "/run.data"))
load(paste0(sp, "/model.runs"))
# Variables used for calibration
cur.vars <- model.runs@expl.var.names
# Loading model names into R memory
models.to.plot <- BIOMOD_LoadModels(model.runs)
# Calculation of response curves with all models (stored in the object resp which is an array)
resp <- response.plot3(models = models.to.plot,
                       Data = baseline_EU[[cur.vars]],
                       fixed.var.metric = "sp.mean",
                       show.variables = cur.vars,
                       run.data = run.data)
I have 60 models, and the code plots the first curve before aborting the session with no further explanation.
3. What I tried, without success
(1) checked that it was not a RAM issue
(2) uninstalled and reinstalled all the packages and their dependencies
(3) updated to the latest R version
(4) went back to response.plot2() to see whether the issue could come from response.plot3()
4. I found some similar errors, which led me to think that it might be a package issue
https://github.com/rstudio/rstudio/issues/9373
Call to library(raster) or require(raster) causes Rstudio to abort session
I now presume that the problem lies with either the biomod2 or the raster package, or maybe with the R version?
I would greatly appreciate your help if you have any ideas.
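For completeness, here is the per-model debugging loop I plan to try next. It is only a sketch: it assumes the objects created above are still in memory and that response.plot3() accepts a single model name, as response.plot2() does.
# Compute the response curves one model at a time and save each result to disk,
# so that after an aborted session the first missing file points to the model
# that triggered the crash.
dir.create("resp_debug", showWarnings = FALSE)
for (m in models.to.plot) {
  out.file <- file.path("resp_debug", paste0(m, ".rds"))
  if (file.exists(out.file)) next  # skip models finished before a previous crash
  resp.m <- response.plot3(models = m,
                           Data = baseline_EU[[cur.vars]],
                           fixed.var.metric = "sp.mean",
                           show.variables = cur.vars,
                           run.data = run.data)
  saveRDS(resp.m, out.file)
}
# And, following the RStudio issue linked above, check whether plotting the
# raster stack alone is enough to abort the session:
plot(baseline_EU[[1]])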
Related
I am giving the Newsmap package a try for topic classification (not geographical classification, but applying it to other kinds of tasks). I am following the instructions from the Quanteda tutorials website (here).
Everything runs smoothly until I try to predict the topic most strongly associated with each of my texts.
Here is the code:
labels <- types(toks_labelExclu)
dfmt_label <- dfm(toks_labelExclu, tolower = FALSE) # The dfm with the labels of my 35 topics
dfmt_feat <- dfm(toks_sent) %>%
  dfm_trim(min_termfreq = 50) # The dfm with the features to be associated with my topics
model_nm <- textmodel_newsmap(dfmt_feat, dfmt_label)
coef(model_nm, n= 20)[labels] # All good so far
pred_nm <- predict(model_nm)
# Here is the snag: the function returns
Error in x[, feature] : Subscript out of bounds
Does anyone have an idea of where the error could come from?
For information, here is the sessionInfo:
R version 4.0.0 (2020-04-24)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Catalina 10.15.4
other attached packages:
[1] newsmap_0.7.1 quanteda_2.0.1
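One thing I am going to check (just a guess on my side, assuming predict() accepts a newdata argument here): whether the dfm used for prediction still contains the features the model was trained on, since dfm_trim() can drop features. A sketch of that check, aligning the prediction dfm to the model's training features with dfm_match():
# Guess, not a confirmed fix: align the prediction dfm with the training features.
dfmt_pred <- dfm_match(dfm(toks_sent), features = featnames(dfmt_feat))
pred_nm <- predict(model_nm, newdata = dfmt_pred)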
I've run into the following error that only occurs when I pass a model with more than 30 predictors to pdredge():
Error in sprintf(gettext(fmt, domain = domain), ...) :
invalid format '%d'; use format %f, %e, %g or %a for numeric objects
I'm on a Windows machine running Microsoft R Open through RStudio:
R version 3.5.3 (2019-03-11)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
RStudio Version 1.0.153
MuMIn_1.43.6
Reproducible example:
library(MuMIn)
library(parallel)
# Random data: X1 as response, X2-X31 (30 predictors)
var.30 <- data.frame(replicate(31, sample(0:100, 75, rep = TRUE)))
# Random data: X1 as response, X2-X32 (31 predictors)
var.31 <- data.frame(replicate(32, sample(0:100, 75, rep = TRUE)))
# Prepare cluster for pdredge
clust <- try(makeCluster(detectCores() - 1))
# Working model (30 or fewer predictors)
mod <- lm(X1 ~ ., data = var.30, na.action = "na.fail")
sub.dredge <- pdredge(mod, cluster = clust, eval = FALSE)
# Non-working model (31 or more predictors)
mod <- lm(X1 ~ ., data = var.31, na.action = "na.fail")
sub.dredge <- pdredge(mod, cluster = clust, eval = FALSE)
I know that in 2016 this was an issue with integer bit restrictions. However, from this question and the comments it received, I was under the impression that the issue had been resolved and the maximum raised?
The 31-term limit in dredge is pretty much final. It will not be extended unless R implements native support for 64-bit integers.
(Also, update your MuMIn - this 'sprintf' error was fixed some time ago.)
There are actually only 16 parameters in the second question you reference, but some appear multiple times to represent interaction terms (though whether that OP really wanted interactions, or intended I(parameter^2), is unclear; if the latter, their code would have failed, as there would have been too many unique parameters). So, even though there are many (~41) terms in that question, there are only 16 unique parameters.
As far as I can tell, @Kamil Bartoń has not yet updated dredge to accept more than 30 unique parameter calls.
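If it helps, a quick way to relate a global model to that limit is to count both the terms dredge will see and the unique variables behind them. A minimal sketch, using the mod from the reproducible example above:
tl <- attr(terms(mod), "term.labels")  # expanded model terms
length(tl)                             # number of terms in the global model
length(all.vars(reformulate(tl)))      # number of unique variables behind them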
I have installed Java 9.0.4 and all the relevant R libraries on macOS 10.13.4 to run the following script in R 3.5.0 (invoked in RStudio 1.1.423):
options("java.home"="/Library/Java/JavaVirtualMachines/jdk-9.0.4.jdk/Contents/Home/lib")
Sys.setenv(LD_LIBRARY_PATH='$JAVA_HOME/server')
dyn.load('/Library/Java/JavaVirtualMachines/jdk-9.0.4.jdk/Contents/Home/lib/server/libjvm.dylib')
library(mlr)
library(tidyverse) # for ggplot and data wrangling
library(ggvis) # ggplot visualisation in shiny app
library(rJava)
library(FSelector)
data <- read.csv('week07/PhishingWebsites.csv')
# All variables to nominal (PhishingWebsites)
data[c(1:31)] <- lapply(data[c(1:31)], factor)
# Configure a classification task and specify Result as the target feature.
classif.task <- makeClassifTask(id = "web", data = data, target = "Result")
fv <- generateFilterValuesData(classif.task)
It works fine the first time I run it, but if I run it a second time I get the following error:
Error in randomForestSRC::rfsrc(getTaskFormula(task), data = getTaskData(task), :
An error has occurred in the grow algorithm. Please turn trace on for further analysis.
Any help much appreciated.
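Update, with a workaround I am trying (not a confirmed fix): since the error comes from randomForestSRC::rfsrc, the default filter apparently delegates to that package, so passing an explicit method to generateFilterValuesData() should avoid that code path. The exact filter name depends on the mlr version; listFilterMethods() shows what is available.
listFilterMethods()  # see which filter names this mlr version provides
# e.g. an FSelector-based filter, since FSelector is already loaded
fv <- generateFilterValuesData(classif.task,
                               method = "information.gain")  # or "FSelector_information.gain" in newer mlr versions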
I am trying to calculate variable importance for a random forest built using the cforest function in the party package. I would like to run varimp with conditional set to TRUE, but I get an error message when I do so. The error reads:
Error in if (node[[5]][1] == variableID) cp <- node[[5]][[3]] :
argument is of length zero
varimp() run with the default setting conditional = FALSE works just fine.
Regarding the data set, all variables are categorical. The response variable is Glottal (yes/no), and there are seven predictors. Here is a link to the data, and here is the code I am using:
library(party)
glottal.df <- read.csv("~glottal_data.csv", header = TRUE)
glottal.df$Instance <- factor(glottal.df$Instance)
data.controls <- cforest_unbiased(ntree = 500, mtry = 2)
set.seed(45)
glottal.cf <- cforest(Glottal ~ Stress + Boundary + Context + Instance + Region + Target + Speaker, data = glottal.df, controls = data.controls)
# this gives me an error
glottal.cf.varimp.true <- varimp(glottal.cf, conditional = TRUE)
# this works
glottal.cf.varimp.false <- varimp(glottal.cf)
Can anyone tell me why I am getting this error? It is not a problem with any specific variable: the error persists even if I remove a variable, build a new forest, and recalculate varimp, and there are no missing values in the data set. Many thanks in advance for your help!
Appears to be working with party 1.2.4:
> glottal.cf.varimp.true
Stress Boundary Context
0.0003412322 0.2405971564 0.0122369668
Instance Region Target
-0.0043507109 0.0044360190 -0.0011469194
Speaker
0.0384834123
> packageVersion('party')
[1] ‘1.2.4’
> R.version
               _
platform       x86_64-pc-linux-gnu
arch           x86_64
os             linux-gnu
system         x86_64, linux-gnu
status
major          3
minor          4.3
year           2017
month          11
day            30
svn rev        73796
language       R
version.string R version 3.4.3 (2017-11-30)
nickname       Kite-Eating Tree
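So a reasonable first step is to check which version of party is installed and, if it is older than the 1.2.4 used above, reinstall it from CRAN, restart R, and re-run varimp(). A minimal sketch:
packageVersion("party")                        # compare with the 1.2.4 above
install.packages("party", dependencies = TRUE) # then restart R and re-run:
# glottal.cf.varimp.true <- varimp(glottal.cf, conditional = TRUE)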
I have recently run into a problem with a GAM model from previously working code. I believe it is related to an updated R version and an updated version of the mgcv package. It would be great to know if anyone has the same problem or a solution to it.
I am currently running R version 3.2.2 (2015-08-14) -- "Fire Safety" on Windows, with mgcv package 1.8-7. Below is an example that reproduces the error message when run on my computer.
###Load package
library(mgcv)
This is mgcv 1.8-7.
###Simulate some example data
set.seed(2) ## simulate some data...
dat <- gamSim(1,n=400,dist="normal",scale=2)
###Run normal model
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat, family=gaussian())
This works.
###change the smoothness selection method to REML
b0 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat,method="REML")
Gives the following error message:
Error in .C(C_gdi1, X = as.double(x[good, ]), E = as.double(Sr), Eb = as.double(Eb), : Incorrect number of arguments (48), expecting 47 for 'gdi1'
Thanks for your help!
I have reinstalled R and the mgcv package, and it seems this has resolved the issue.
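For anyone hitting the same message: "Incorrect number of arguments (48), expecting 47" from a .C call typically means the installed mgcv R code and the compiled code it loads are out of step, for example after a partial update, so a clean reinstall brings them back in line. A minimal sketch of the package part (I reinstalled R itself as well); run it from a fresh session with no packages loaded:
remove.packages("mgcv")   # drop the mismatched installation
install.packages("mgcv")  # reinstall so the R code and the compiled code match
library(mgcv)
packageVersion("mgcv")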