How to boost an SVM model in R

I have built an SVM model using the svm() function (e1071 package) in R for a classification problem, but I get only 87% accuracy, while a random forest produces around 92.4%.
fit.svm <- svm(modelformula, data = training, gamma = 0.01, cost = 1, cross = 5)
I would like to use boosting to tune this SVM model. Can someone help me tune it? What are the best parameters I can provide to the svm() method? An example of boosting an SVM model would also be appreciated.

To answer your first question: the e1071 library in R has a built-in tune() function to perform cross-validation, which will help you select the optimal cost, gamma, and kernel parameters. You can also fit an SVM in R with the kernlab package; you may get different results from the two libraries. Let me know if you need any examples.
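For instance, a minimal sketch of such a grid search with e1071::tune(), assuming the modelformula and training objects from your question:

library(e1071)
# Grid search over cost and gamma; tune.control(cross = 5) requests 5-fold CV
tuned <- tune(svm, modelformula, data = training,
              ranges = list(cost = 2^(-2:6), gamma = 2^(-6:0)),
              tunecontrol = tune.control(cross = 5))
summary(tuned)
fit.svm <- tuned$best.model  # the model refit with the best cost/gamma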

You may want to look into the caret package. It allows you both to pick from various kernels for SVM (see its model list) and to run parameter sweeps to find the best model.
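For example, a hedged sketch with caret's train(), again assuming the modelformula and training objects from the question:

library(caret)
# 5-fold CV; method = "svmRadial" uses the kernlab backend.
# With tuneLength, caret estimates sigma from the data and sweeps
# tuneLength values of the cost parameter C.
ctrl <- trainControl(method = "cv", number = 5)
fit <- train(modelformula, data = training,
             method = "svmRadial",
             trControl = ctrl,
             tuneLength = 8)
fit$bestTune  # best C (and the sigma used)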

Related

random forest for imputation with hyperparameter optimization

I would like to impute my data using rfImpute() from the randomForest CRAN package in R. However, I was wondering whether it is also possible to optimize the hyperparameters 'iter' and 'ntree' and use the optimal values for imputation on my data?
I saw that there is hyperparameter optimization for prediction and classification using randomForest, but is it also possible for rfImpute()? :)
Thanks in advance for any help.
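There is no built-in tuning for rfImpute(), but a manual grid search is possible. One hedged sketch (note the argument is spelled iter, not niter): score each imputed data set by the OOB error of a forest trained on it, here using iris with injected NAs as stand-in data:

library(randomForest)
set.seed(111)
iris.na <- iris
for (i in 1:4) iris.na[sample(150, 20), i] <- NA  # inject missing values

grid <- expand.grid(iter = c(3, 5, 10), ntree = c(100, 300))
grid$oob <- mapply(function(it, nt) {
  imputed <- rfImpute(Species ~ ., iris.na, iter = it, ntree = nt)
  fit <- randomForest(Species ~ ., data = imputed)
  tail(fit$err.rate[, "OOB"], 1)  # final OOB error rate
}, grid$iter, grid$ntree)
grid[which.min(grid$oob), ]  # best (iter, ntree) by downstream OOB error

Whether downstream OOB error is the right score for imputation quality is itself a judgment call; with artificially masked values you could instead compare the imputations against the known truth.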

How to run model diagnostics and validate binomial GAMs?

I'm looking for methods to test the overall fit of a model, run model diagnostics to help with model selection, and validate binomial GAMs.
If anyone knows of a way to do this in R (i.e. packages and functions), that would be extremely helpful as well. I have heard of DHARMa, but I am at a loss as to how I would use the package.
Any links with more information would also be appreciated.
Currently, all I have been able to do is ROC curves and AUC values.
Thanks
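In case it helps, a hedged sketch of DHARMa on a binomial mgcv GAM, using simulated toy data in place of yours (DHARMa lists mgcv among its supported packages):

library(mgcv)
library(DHARMa)
set.seed(1)
d <- data.frame(x = runif(200))
d$y <- rbinom(200, 1, plogis(2 * sin(2 * pi * d$x)))
fit <- gam(y ~ s(x), family = binomial, data = d)

gam.check(fit)  # basis-dimension checks and residual plots from mgcv itself

res <- simulateResiduals(fit)  # simulation-based scaled residuals
plot(res)                      # QQ plot plus residual-vs-predicted plot
testDispersion(res)            # test for over/underdispersion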

tuning svm parameters in R (linear SVM kernel)

What is the difference between tune.svm() and best.svm()?
When we tune the parameters of an SVM kernel, aren't we expected to always choose the best values for our model?
Pardon me, as I am new to R and machine learning.
I noticed that there was no linear kernel option when tuning the SVM. Is there a possibility to tune my SVM using a linear kernel?
From ETHZ: best.svm() is really just a wrapper for tune.svm(...)$best.model. The help page for tune() will tell you more about the available options.
Be sure to also go through the examples on the help page for tune(). e1071::svm offers linear, radial (the default), sigmoid, and polynomial kernels; see help(svm). For example, to use the linear kernel the function call has to include the argument kernel = "linear":
data(iris)
obj <- tune.svm(Species ~ ., data = iris,
                cost = 2^(2:8),
                kernel = "linear")
If you are new to R and would like to train and cross-validate SVM models, you could also check the caret package and its train() function, which offers multiple types of kernels. The whole 'topics' section on that site might be of interest, too.
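For reference, a hedged sketch of the same linear-kernel tuning via caret (method = "svmLinear" uses the kernlab backend, with cost C as its tuning parameter):

library(caret)
data(iris)
ctrl <- trainControl(method = "cv", number = 5)
fit <- train(Species ~ ., data = iris,
             method = "svmLinear",
             trControl = ctrl,
             tuneGrid = data.frame(C = 2^(2:8)))  # same cost grid as above
fit$bestTune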

Cross validation on fitted survival objects?

I can see how cv.glm works with a glm object, but what about fitted survival models?
I have a bunch of models (Weibull, Gompertz, lognormal, etc.) and want to assess the prediction error using cross-validation. Which package/function can do this in R?
SuperLearner can do V-fold cross-validation for a large library of underlying machine learning algorithms, though I'm not sure it includes survival models. Alternatively, take a look at the cvTools package, which is designed to help you cross-validate any prediction algorithm you give it.
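If no packaged solution fits, a manual fold loop is not much code. A hedged sketch for a Weibull AFT model with survival::survreg, scoring each held-out fold by Harrell's C on the lung data:

library(survival)
d <- na.omit(lung[, c("time", "status", "age", "sex", "ph.ecog")])
set.seed(42)
folds <- sample(rep(1:5, length.out = nrow(d)))

cstat <- sapply(1:5, function(k) {
  fit <- survreg(Surv(time, status) ~ age + sex + ph.ecog,
                 data = d[folds != k, ], dist = "weibull")
  test <- d[folds == k, ]
  test$lp <- predict(fit, newdata = test, type = "lp")
  # for survreg, larger lp = longer predicted survival, which matches
  # concordance()'s default orientation (reverse = FALSE)
  concordance(Surv(time, status) ~ lp, data = test)$concordance
})
mean(cstat)  # cross-validated concordance

The same loop works for other survreg distributions (e.g. dist = "lognormal"); swap in a different error measure if you care about calibrated survival probabilities rather than ranking.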

prediction intervals with caret

I've been using the caret package in R to run some boosted regression tree and random forest models and am hoping to generate prediction intervals for a set of new cases using the built-in cross-validation routine.
The trainControl function allows you to save the hold-out predictions at each of the n folds, but I'm wondering whether unknown cases can also be predicted at each fold using the built-in functions, or whether I need a separate loop to build the models n times.
Any advice is much appreciated.
Check out the R package quantregForest, available on CRAN. It can easily calculate prediction intervals for random forest models. There's a nice paper by the package author (Meinshausen, 2006, "Quantile Regression Forests", JMLR) explaining the background of the method. (Sorry, I can't say anything about prediction intervals for BRT models; I'm looking for those myself...)
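A minimal sketch of quantregForest on toy data (the what argument asks for conditional quantiles, so 0.05 and 0.95 give a 90% prediction interval):

library(quantregForest)
set.seed(1)
n <- 500
X <- data.frame(x1 = runif(n), x2 = runif(n))
y <- 2 * X$x1 + rnorm(n, sd = 0.3)

qrf <- quantregForest(X[1:400, ], y[1:400])  # fit on a training split
pred <- predict(qrf, newdata = X[401:500, ],
                what = c(0.05, 0.5, 0.95))   # lower, median, upper
head(pred)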
