I am trying to fit a model to a dfm I created using quanteda, and I am getting the following error. Any ideas?
tModel <- textmodel(udfm1, model = "NB", smooth = 1)
Error in textmodel(udfm1, model = "NB", smooth = 1) :
model NB not implemented.
P.S. I am building a model to predict the next word for a mobile application. I only know Naive Bayes and am not familiar with the other models in this package, so feel free to recommend alternatives.
Apologies for this: while ?textmodel indicates that "NB" is an available model, as of quanteda v0.9.1-7 it is not yet implemented. I have code that implements multinomial and Bernoulli Naive Bayes as a textmodel type, but we moved it to a development branch pending more testing. (Coming soon.)
For predicting the next word, that sounds like a question for the text-mining tag of Cross Validated. There is nothing directly in quanteda for this yet, but you should be able to use the dfm directly with most classifiers and regression models, as sketched below.
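For instance, a minimal sketch of feeding a dfm to an off-the-shelf classifier; the label vector y and the choice of e1071 are my assumptions, not part of the original question:

# Hypothetical sketch: treat the dfm as an ordinary predictor matrix.
# udfm1 is the dfm from the question; y is an assumed vector of class labels.
library(quanteda)
library(e1071)
x   <- as.matrix(udfm1)             # densify the dfm into a plain matrix
fit <- naiveBayes(x, as.factor(y))  # e1071's Naive Bayes accepts a predictor matrix
predict(fit, newdata = x[1:5, ])    # predicted classes for the first five documents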
I am trying to train a model with the caret package's classifiers, but it does not respond for a very long time (I have waited for two hours). On the other hand, it works for other datasets.
Here is the link to my training data: http://www.htmldersleri.org/train.csv (it is the well-known Reuters-21578 data set).
And the command I am using is:
model <- train(class ~ ., data = train, method = "knn")
Note: it gets stuck for any other method as well (e.g., SVM, Naive Bayes, etc.).
Note 2: with package e1071, the naiveBayes classifier works, but with 0.08% accuracy!
Can anyone tell me what the problem might be? Thanks in advance.
This seems to be a multiclass classification problem. I'm not sure whether caret supports that, but I can show you how you would do the same thing with the mlr package:
library(mlr)
x <- read.csv("http://www.htmldersleri.org/train.csv")
tsk <- makeClassifTask(data = x, target = 'class')
# Assess the performance with 10-fold cross-validation
crossval('classif.knn', tsk)
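As a small illustrative addition (not part of the original answer), crossval() returns a resample result whose aggregated error you can read off directly:

res <- crossval('classif.knn', tsk)
res$aggr  # aggregated mean misclassification error over the 10 folds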
If you want to know which learners are integrated in mlr that support this kind of task, type
listLearners(tsk)
This is a follow-up to a previous question I asked a while back that was recently answered.
I have built several gbm models with dismo::gbm.step, which relies on the gbm fitting functions found in R package gbm, as well as cross validation tools from R package splines.
As part of my analysis, I would like to use some of the graphical tools available in R (e.g., perspective plots) to visualize pairwise interactions in the data. Both the gbm and the dismo packages have functions for detecting and modelling interactions in the data.
The implementation in dismo is explained in Elith et al. (2008) and returns a statistic which indicates departures of the model predictions from a linear combination of the predictors, while holding all other predictors at their means.
The implementation in gbm uses Friedman's H statistic (Friedman & Popescu, 2005); it returns a different metric and does NOT set the other variables at their means.
The interactions modelled and plotted with dismo::gbm.interactions are great and have been very informative. However, I would also like to use gbm::interact.gbm, partly for publication strength and also to compare the results from the two methods.
If I try to run gbm::interact.gbm on a gbm object created with dismo, an error is returned:
"Error in is.factor(data[, x$var.names[j]]) :
argument "data" is missing, with no default"
I understand dismo::gbm.step adds extra data the authors thought would be useful to the gbm model.
I also understand that the answer to my question lies somewhere in the source code.
My question is:
Is it possible to modify a gbm object created in dismo so that it can be used with gbm::interact.gbm? If so, would this be accomplished by...
a. Modifying the gbm object created in dismo::gbm.step?
b. Modifying the source code for gbm::interact.gbm?
c. Doing something else?
I will be going through the source code trying to solve this myself; if I come up with a solution before anyone answers, I will answer my own question.
The gbm::interact.gbm function requires data as an argument: interact.gbm <- function(x, data, i.var = 1, n.trees = x$n.trees).
The dismo gbm.object is essentially the same as the gbm gbm.object, but with extra information attached, so I don't imagine changing the gbm.object itself would help. Supplying the missing data argument yourself, as sketched below, seems like the more direct route.
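Since the error only complains that data is missing, one untested possibility is to pass the training data frame explicitly; my.model, train.data, pred1, and pred2 below are placeholders, not objects from the question:

# Hedged sketch: call interact.gbm on a dismo-fitted model, supplying data yourself.
library(gbm)
h <- interact.gbm(my.model,                       # object returned by dismo::gbm.step
                  data    = train.data,           # data frame the model was fitted to
                  i.var   = c("pred1", "pred2"),  # pair of predictors to test
                  n.trees = my.model$n.trees)
h  # Friedman's H statistic for this pair of predictors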
I have been unable to find information on how exactly predict.cv.glmnet works.
Specifically, when a prediction is being made, is it based on a fit that uses all the available data, or on a fit where some data was discarded as part of the cross-validation procedure when running cv.glmnet?
I would strongly assume the former, but I was unable to find a sentence in the documentation that clearly states that, after cross-validation is finished, the model is fitted with all available data for new predictions.
If I have overlooked a statement along those lines, I would also appreciate a hint on where to find this.
Thanks!
In the documentation for predict.cv.glmnet:
"This function makes predictions from a cross-validated glmnet model, using the stored "glmnet.fit" object ... "
In the documentation for cv.glmnet (under value):
"glmnet.fit a fitted glmnet object for the full data."
I have built an SVM model using the svm() function in R for a classification problem. I got only 87% accuracy, but random forest produces around 92.4%.
fit.svm <- svm(modelformula, data = training, gamma = 0.01, cost = 1, cross = 5)
I would like to use boosting for tuning this SVM model. Can someone help me tune it?
What are the best parameters I can provide for the SVM method?
An example of boosting for an SVM model would be appreciated.
To answer your first question:
The e1071 library in R has a built-in tune() function to perform CV, which will help you select the optimal parameters cost, gamma, and kernel (see the sketch below). You can also work with SVMs in R through the kernlab package; you may get different results from the two libraries. Let me know if you need further examples.
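For instance, a sketch of a cross-validated grid search with tune.svm(); the parameter grids below are illustrative choices, not recommendations from the original answer:

library(e1071)
# Grid search over gamma and cost with the formula and data from the question.
tuned <- tune.svm(modelformula, data = training,
                  gamma = 10^(-3:1),
                  cost  = 10^(-1:2))
summary(tuned)                # CV error for each gamma/cost combination
best.fit <- tuned$best.model  # SVM refitted with the best parameter pair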
You may want to look into the caret package. It lets you pick various kernels for SVM (model list) and run parameter sweeps to find the best model.
I am fitting a model to presence-absence data and I would like to check whether the random factor is significant or not.
To do this, one should compare a GLMM with a GLM and use a likelihood-ratio test to see which model is preferred, if I understand correctly.
But if I perform anova(glm, glmm), I get an analysis-of-deviance table and no output that compares the models.
How do I get the output I want, i.e., a comparison of the two models?
Thanks in advance,
Koen
Somewhere you got the wrong impression about using anova() for this. Below, re was fit using glmmPQL() from the MASS package and fe was fit using glm() from base R:
> anova(re,fe)
#Error in anova.glmmPQL(re, fe) : 'anova' is not available for PQL fits
That message appears to be the sole reason anova.glmmPQL() was created.
See this thread for verification and vague explanation:
https://stat.ethz.ch/pipermail/r-help/2002-July/022987.html
Simply put, anova() does not work with glmmPQL fits; you need to fit the model with glmer() from the lme4 package to be able to use anova(), for example:
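A minimal sketch of that comparison; the data frame mydata and the variables presence, habitat, and site are placeholders, not from the question:

library(lme4)
# Fixed-effects-only model (no random factor):
m0 <- glm(presence ~ habitat, family = binomial, data = mydata)
# Same model plus a random intercept for site:
m1 <- glmer(presence ~ habitat + (1 | site), family = binomial, data = mydata)
# Likelihood-ratio comparison; list the glmer fit first so lme4's anova method
# is dispatched (recent lme4 versions accept a glm as the reduced model):
anova(m1, m0)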