I'm attempting to run a cross-validated DFA on a species × site data set (species are columns, sites are rows) with a grouping column named "ZONE".
I'm using a stock script I've successfully used before, but now I'm getting a new error from the predict function that I can't make heads or tails of.
My code is simply:
data2.lda <- lda(ZONE~SP1+SP2+SP3+SP4+SP5+SP6+SP7+SP8+SP9+SP10+SP11+SP12+SP13+SP14+SP15,
                 data = data2.x, na.action = na.omit, CV = TRUE)
list(data2.lda)
data2.lda.p <- predict(data2.lda, newdata = data2.x[, c(2:17)])$class
data2.lda.p
The error I receive is:
Error in UseMethod("predict") :
no applicable method for 'predict' applied to an object of class "list"
My data are in the same form as in previous uses of this code.
Where did I go wrong?
Any help is appreciated, thanks!
UPDATE: I've figured out that the issue involves the cross-validation portion of the code. Are there additional rules for cross-validating an LDA in R that I'm missing?
Your problem is that predict requires a model object as its first argument. When you run lda with CV=TRUE, it returns a plain list, not a model object. The lda documentation says:
If CV = TRUE the return value is a list with components class, the MAP
classification (a factor), and posterior, posterior probabilities for
the classes.
Otherwise it is an object of class "lda" containing the
following components:
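In other words, with CV = TRUE the leave-one-out predictions are already inside the returned list, so there is nothing to pass to predict(). A minimal sketch (the ZONE ~ . shorthand assumes every other column of data2.x is a species predictor):

data2.cv <- lda(ZONE ~ ., data = data2.x, CV = TRUE)
head(data2.cv$class)                  # leave-one-out predicted classes
table(data2.cv$class, data2.x$ZONE)   # confusion matrix from the cross-validation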
Following PCantalupo's answer, I managed to arrive at my goal: fit the model without CV=TRUE, so that lda() returns a model object, and then call predict() on it. The working code is:
data2.lda <- lda(ZONE~SP1+SP2+SP3+SP4+SP5+SP6+SP7+SP8+SP9+SP10+SP11+SP12+SP13+SP14+SP15,
                 data = data2.x, na.action = na.omit)
list(data2.lda)
data2.lda.p <- predict(data2.lda, newdata = data1[c(2:17)])$class
data2.lda.p
tab <- table(data2.lda.p, data2[,1])   # predicted vs. actual classes
tab
summary(table(data2.lda.p, data2[,1]))
diag(prop.table(tab, 1))               # per-class correct-classification rates
sum(diag(prop.table(tab)))             # overall classification accuracy
Related
'bst' is the name of an xgboost model that I built in R. It gives me predicted values for the test dataset using this code, so it is definitely an xgboost model.
pred.xgb <- predict(bst, xdtest)   # get predictions on the test sample
cor(ytestdata, pred.xgb)
Now, I would like to save the model so another can use the model with their data set which has the same predictor variables and the same variable to be predicted.
Consistent with page 4 of xgboost.pdf, the documentation for the xgboost package, I use the xgb.save command:
xgb.save(bst, 'xgb.model')
which produces the error:
Error in xgb.save(bst, "xgb.model") : model must be xgb.Booster.
Any insight would be appreciated. I searched Stack Overflow and could not locate relevant advice.
Mike
It's hard to know exactly what's going on without a fully reproducible example, but just because your model can make predictions on the test data doesn't mean it's an xgboost model; it could be any type of model with a predict method.
You can try class(bst) to see the class of your bst object. It should return "xgb.Booster", though I suspect it won't here (hence your error).
On another note, if you want to pass your model to another person using R, you can just save the R object rather than exporting to xgboost's binary format, via:
save(bst, file = "model.RData")
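On the receiving end, a short sketch of restoring it (the file names are just examples, and this assumes a reasonably recent xgboost build that serializes the booster along with the R object):

load("model.RData")                # restores an object named bst into the workspace
pred.xgb <- predict(bst, xdtest)

# saveRDS()/readRDS() is an alternative that lets the recipient choose the name:
saveRDS(bst, "xgb_model.rds")
bst2 <- readRDS("xgb_model.rds")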
I am using every variation of code I can think of to run a multiple mediation with the mma package in R, and I keep getting the same error.
This is the main bit of code I'm trying to run, just to identify the mediators vs. covariates:
data.bin<-data.org(x,y,pred=2,mediator=c(1,7:11),alpha=.05,alpha2=.05)
Error: Must use a vector in [, not an object of class matrix.
Call rlang::last_error() to see a backtrace
pred is the data frame of predictor(s); it is separate from x, the data frame of covariates and mediators.
An older version of mma used pred to indicate the column number of the predictor (exposure) in x. This has changed.
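Under the newer convention, a sketch of the call (the column positions here are hypothetical, so re-check the mediator indices after removing the exposure from x):

x <- as.data.frame(x)             # the error message suggests x is a matrix; use a data frame
pred.df <- x[, 2, drop = FALSE]   # the exposure, pulled out as its own data frame
x2 <- x[, -2]                     # covariates and mediators only
data.bin <- data.org(x2, y, pred = pred.df, mediator = c(1, 7:11),
                     alpha = 0.05, alpha2 = 0.05)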
I have been unable to find information on how exactly predict.cv.glmnet works.
Specifically, when a prediction is being made are the predictions based on a fit that uses all the available data? Or are predictions based on a fit where some data has been discarded as part of the cross validation procedure when running cv.glmnet?
I would strongly assume the former, but I was unable to find a sentence in the documentation that clearly states that, after cross-validation is finished, the model is fitted on all available data before new predictions are made.
If I have overlooked a statement along those lines, I would also appreciate a hint on where to find this.
Thanks!
In the documentation for predict.cv.glmnet:
"This function makes predictions from a cross-validated glmnet model, using the stored "glmnet.fit" object ... "
In the documentation for cv.glmnet (under value):
"glmnet.fit a fitted glmnet object for the full data."
I'm working on a function that takes as an input a list of dataframes and a list of corresponding models, and runs and sums up the predictions for each. My trouble comes in using the predict function itself on a particular model. Because neither the amount nor names of my models are constant, I cannot simply call
`predict(model_list$model1, df_list$df1)`
for each; instead, I was hoping to loop through each model and predict on each that way. However, when I try to run
`predict(model_list[1], df_list[1])`,
I run into an error: Error in eval(expr, envir, enclos) : object 'campaign_type' not found ("campaign_type" is the first variable in my df).
Indeed, while summary(model_list$model1) outputs the full summary of said model, summary(model_list[1]) only outputs the name, length, class, and mode of the model. I used caret to create the models, if that makes a difference. Any ideas here? Again, the goal is to create projections for each model on its corresponding dataframe and sum the projections up, within the function.
Thanks
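The behavior described points to R's single- vs. double-bracket distinction: model_list[1] is a one-element list, while model_list[[1]] is the model itself, which is what predict() needs. A sketch under that assumption, reusing the question's object names:

pred1 <- predict(model_list[[1]], df_list[[1]])   # note the double brackets

# Loop over corresponding model/data pairs and sum the predictions
# (assumes the predictions are numeric):
preds <- Map(function(m, d) predict(m, newdata = d), model_list, df_list)
total <- sum(sapply(preds, sum))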
I am trying to learn the R package 'rminer'. It provides three core functions: 'fit' and 'predict', which fit a model and make predictions, and a more powerful 'mining', which combines the two. The package also provides a function 'Importance()' to measure input importance, including sensitivity analysis, and a plotting function, 'mgraph(model,graph="IMP",...)', to graph the result.
When I produce results with the fit() function, I can successfully use the Importance() and mgraph() functions.
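For reference, a minimal sketch of that working fit()-based path (the data frame d, its target y, and the model choice are all hypothetical):

library(rminer)
M <- fit(y ~ ., data = d, model = "mlpe")   # a single fitted model object
I <- Importance(M, d, method = "1D-SA")     # works when given a fit() result
round(I$imp, 3)                             # numeric input importances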
However, when I use the mining() function I can only use mgraph() to visualize the measured input importance; Importance() fails. I believe that if I can plot the input-importance results, I should also be able to print/save their numeric values with Importance(), but the following produces an error:
MM=mining(formula,data,model=models,Runs=Runs,method=v,mpar=m,search=s,task=task,feature=feature)
I=Importance(MM, data, method="1D-SA")
Error in Importance(MM, data, method = "1D-SA") :
duplicate 'switch' defaults: 'lm == func...' and 'NULL'
I couldn't find much through web searches to lead me to a solution.
I would really appreciate it if you could guide me on how to use the results of the mining() function in Importance(), if that is possible.
Thank you.
Son