When I run the following code, I get this error message:
chart.RiskReward(maxret, risk.col = "StdDev", return.col = "mean",
                 chart.assets = FALSE)
chart.EfficientFrontier(maxret, match.col = "StdDev", n.portfolios = 100,
                        type = "l", tangent.line = FALSE)
Error in seq.default(from = minret, to = maxret, length.out = n.portfolios) :
'from' must be a finite number
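In case it helps narrow things down, a minimal check of the inputs (the name asset_returns is a placeholder for the return series that was passed to optimize.portfolio() to produce maxret, which is not shown above):
# A minimal sketch: the error comes from seq(from = minret, ...), so a
# non-finite minimum mean return, usually caused by an NA or Inf somewhere in
# the return series, is the first thing to rule out.
any(!is.finite(as.matrix(asset_returns)))    # should be FALSE
summary(colMeans(as.matrix(asset_returns)))  # every column mean should be finite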
I am running a parallel analysis with fa.parallel, which works, but the problem is that it suggests a lower number of factors (3) than I would expect (5):
fa.parallel(test3[, c(7:28)], fm = "ml", sim = TRUE, n.iter = 100)
The answer I get in the R console:
Parallel analysis suggests that the number of factors = 3 and the number of components = 3
And a graph.
But how can I also see the eigenvalues?
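A minimal sketch of what I understand should expose them (assuming fa.parallel() returns its eigenvalues in the result object, as the psych documentation suggests):
# A minimal sketch: capture the return value of fa.parallel() and inspect the
# eigenvalue components directly.
pa <- fa.parallel(test3[, c(7:28)], fm = "ml", sim = TRUE, n.iter = 100)
pa$fa.values   # eigenvalues of the factor solution for the observed data
pa$pc.values   # eigenvalues of the principal components for the observed data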
Secondly, when I tried another way to run the parallel analysis using the paran package, it does not compute the parallel analysis but instead gives me this error message:
Error in svd(X) : infinite or missing values in 'x'.
I have searched for this error message, but found it discussed only in the context of PCA rather than parallel analysis, and it seems to relate to missing values, which I acknowledge my dataset contains. What should I do? The code used for paran is:
paran(test2[, c(7:28)], iterations = 5000, centile = 0, quietly = FALSE,
status = TRUE, all = TRUE, cfa = TRUE, graph = TRUE, color = TRUE,
col = c("black", "red", "blue"), lty = c(1, 2, 3), lwd = 1, legend = TRUE,
file = "", width = 640, height = 640, grdevice = "png", seed = 0)
I am trying to run a line of code from a manual, but each time I get this error:
Error in symbols(x = coords[, 1], y = coords[, 2], bg = vertex.color, : invalid symbol coordinates
Now, this is my code:
LGHomDf_P1 <- bridgeHomologues(pseudohom_stack = P1_homologues_5,
linkage_df = SN_DN_P1,
LOD_threshold = 5,
automatic_clustering = TRUE,
LG_number = 7,
parentname = "P1",
log = "Logfile_tetra.Rmd")
I have been reading some posts on the forum, but I could not find any information.
Moreover, I do not think I have to use igraph, because it is not mentioned at all in the manual.
The package that I am using is called polymapR.
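For what it is worth, the symbols() call in the error message looks like it comes from igraph's plotting routine, which polymapR may be using internally; a minimal way to confirm where it is raised:
# A minimal sketch: run this immediately after the error to see the full call
# stack; if plot.igraph() shows up there, the failure happens inside the
# package's internal network plot rather than in the code above.
traceback()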
I am trying to use the gbm package within R and am having problems with the summary function. I am hoping that someone can help. My code is as follows:
library(ISLR)
Caravan$Purchase <- ifelse(Caravan$Purchase == "Yes", 1, 0)
train_index <- 1:1000
train <- Caravan[train_index, ]
test <- Caravan[-train_index, ]
library(gbm)
set.seed(1234)
boost <- gbm(Purchase ~ ., data = train, n.trees = 400, shrinkage = 0.01,
             distribution = "bernoulli")
summary(boost)
I get the following error message with traceback:
Error in plot.window(xlim, ylim, log = log, ...) : need finite 'xlim' values
5. plot.window(xlim, ylim, log = log, ...)
4. barplot.default(rel.inf[i[cBars:1]], horiz = TRUE, col = rainbow(cBars, start = 3/6, end = 4/6), names = object$var.names[i[cBars:1]], xlab = "Relative influence", ...)
3. barplot(rel.inf[i[cBars:1]], horiz = TRUE, col = rainbow(cBars, start = 3/6, end = 4/6), names = object$var.names[i[cBars:1]], xlab = "Relative influence", ...)
2. summary.gbm(boost)
1. summary(boost)
I have tried using the workaround here: http://www.samuelbosch.com/2015/09/workaround-ntrees-is-missing-in-r.html to no avail.
Any suggestions?
EDIT 1: Confirmed that the error occurs in R 3.5.0 but does not occur in 3.4.4.
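For reference, a minimal check that separates the plotting step from the underlying numbers (assuming the plotit argument of summary.gbm() and relative.influence() work as documented):
# A minimal sketch: look at the relative influence values without the barplot,
# to see whether the plot or the numbers themselves are the problem.
ri <- summary(boost, plotit = FALSE)       # same values, no plot
head(ri)
relative.influence(boost, n.trees = 400)   # or compute the influences directly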
I am currently running an ensemble niche model analysis on a Linux cluster in a CentOS 6 environment. The package I am using is SSDM. My code is as follows:
Env <- load_var(path = getwd(), files = NULL,
                format = c(".grd", ".tif", ".asc", ".sdat", ".rst", ".nc",
                           ".envi", ".bil", ".img"),
                categorical = "af_anthrome.asc",
                Norm = TRUE, tmp = TRUE, verbose = TRUE, GUI = FALSE)
Env
head(Env)
warnings()
Occurrences <- load_occ(path = getwd(), Env,
                        file = "Final_African_Bird_occurrence_rarefied_points.txt",
                        Xcol = "decimallon", Ycol = "decimallat", Spcol = "species",
                        GeoRes = FALSE, sep = ",", verbose = TRUE, GUI = FALSE)
head(Occurrences)
warnings()
SSDM <- stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA",
                          "MAXENT", "ANN", "SVM"),
                        Occurrences, Env,
                        Xcol = "decimallon", Ycol = "decimallat", Pcol = NULL,
                        Spcol = "species", rep = 1,
                        name = "Stack", save = TRUE, path = getwd(), PA = NULL,
                        cv = "holdout", cv.param = c(0.75, 1), thresh = 1001,
                        axes.metric = "Pearson", uncertainty = TRUE, tmp = TRUE,
                        ensemble.metric = c("AUC", "Kappa", "sensitivity", "specificity"),
                        ensemble.thresh = c(0.75, 0.75, 0.75, 0.75), weight = TRUE,
                        method = "bSSDM", metric = "SES", range = NULL,
                        endemism = NULL, verbose = TRUE, GUI = FALSE, cores = 125)
save.stack(SSDM, name = "Bird", path = getwd(), verbose = TRUE, GUI = FALSE)
When running the stack_modelling function, I get this error message:
Error in checkForRemoteErrors(val) :
125 nodes produced errors; first error: comparison of these types is not
implemented
Calls: stack_modelling ... clusterApply -> staticClusterApply ->
checkForRemoteErrors
In addition: Warning message:
In stack_modelling(c("GLM", "GAM", "MARS", "GBM", "RF", "CTA", "MAXENT", :
It seems you attributed more cores than your CPU have !
Execution halted
Error in unserialize(node$con) : error reading from connection
Calls: <Anonymous> ... doTryCatch -> recvData -> recvData.SOCKnode ->
unserialize
In addition: Warning message:
In eval(e, x, parent.frame()) :
Incompatible methods ("Ops.data.frame", "Ops.factor") for "=="
Execution halted
I understand that I may have requested more cores than I have access to, but the same error message crops up even when I use only a fraction of the cores. I am not entirely sure what this error message is trying to tell me or how to fix it, as I am new to working on a cluster. Is it a problem with the parallel processing of the data? Is there a line of code which could help me fix this issue?
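As a minimal sketch of what I plan to check regarding the core count (I realise this does not explain the "comparison of these types" error itself):
# A minimal sketch: check how many cores this node actually exposes before
# passing a value to the `cores` argument of stack_modelling(); the variable
# names here are my own placeholders.
library(parallel)
n_avail <- detectCores()
cores_to_use <- max(1, n_avail - 1)   # leave one core for the master process
cores_to_use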
Thanks
QUESTION: The following reproducible example yields the error:
Error in optim(start, loglikCopula, lower = lower, upper = upper,
method = method, : initial value in 'vmmin' is not finite
I couldn't find out how to overcome this problem.
To reproduce my data set, simply copy, paste, and run the following code:
#REPRODUCIBLE DATA SET
###############################################
library(MASS)
library(stats)
library(copula)   # needed for plackettCopula() and rCopula()
sim.cop <- plackettCopula(param = 10)
set.seed(1)
u <- rCopula(n = 800, copula = sim.cop)
V1 <- qt(p = u[, 1], df = 3.5)
V1 <- 0.013 * V1 + 0.0004
V2 <- qt(p = u[, 2], df = 3.5)
V2 <- 0.013 * V2 + 0.0004
m1 <- fitdistr(x = V1, densfun = "normal")
m2 <- fitdistr(x = V2, densfun = "normal")
V1.u <- pnorm(q = V1, mean = m1$estimate["mean"], sd = m1$estimate["sd"])
V2.u <- pnorm(q = V2, mean = m2$estimate["mean"], sd = m2$estimate["sd"])
data.u <- cbind(V1.u, V2.u)
The command which yields the error:
#fit copula
###############################################
library(copula)
cop <- normalCopula(param = 0.5, dim = 2)
fitting <- fitCopula(copula = cop, data = data.u, method = "ml")
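One thing I considered, as a minimal sketch under the assumption that boundary values in data.u are what makes the initial log-likelihood non-finite (I am not sure this is the actual cause):
# A minimal sketch: values at or numerically near 0 or 1 in data.u would make
# the Gaussian-copula log-likelihood non-finite, so rank-based
# pseudo-observations via pobs() together with method = "mpl" are a common
# alternative to the parametric transform above.
library(copula)
range(data.u)                         # should lie strictly inside (0, 1)
u.pobs <- pobs(data.u)                # rank-based pseudo-observations
fitting2 <- fitCopula(copula = normalCopula(param = 0.5, dim = 2),
                      data = u.pobs, method = "mpl")
fitting2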