SVM function in R

I am using R for a classification problem. Does the svm function in R support only binary classification, or does it support multi-class classification as well?

svm (in package e1071) supports multi-class classification using the ‘one-against-one’ approach. Same with ksvm (in kernlab).

The e1071 R package supports multi-class classification using a "one-against-one" method.
Here are the classification and regression types in this package:
ν-classification: this model allows for more control over the number of support vectors (see Schölkopf et al., 2000) by specifying an additional parameter which approximates the fraction of support vectors;
One-class classification: this model tries to find the support of a distribution and thus allows for outlier/novelty detection;
Multi-class classification: basically, SVMs can only solve binary classification problems. To allow for multi-class classification, libsvm uses the one-against-one technique by fitting all binary subclassifiers and finding the correct class by a voting mechanism;
ε-regression: here, the data points lie in between the two borders of the margin, which is maximized under suitable conditions to avoid outlier inclusion;
Check https://cran.r-project.org/web/packages/e1071/vignettes/svmdoc.pdf
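For a quick illustration, here is a minimal multi-class example with svm from e1071 on the built-in iris data (three classes). The one-against-one voting happens internally, so the call looks the same as in the binary case:

    library(e1071)

    # Three-class problem; svm() handles the one-against-one
    # decomposition and voting internally.
    model <- svm(Species ~ ., data = iris)

    # Confusion matrix of in-sample predictions
    table(predicted = predict(model, iris), actual = iris$Species)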

Related

How to implement F1 score in LightGBM for multi-class classification in R?

I am using the LightGBM package in R to create my model. I have already defined a function that calculates macro-F1 (defined as the average of the per-class F1 scores). I need to report CV macro-F1, so I would like to embed this score into lgb.cv. However, the metric is not available in the package implementation; the only solution I have seen in R is implemented in a binary classification setting (https://rpubs.com/dalekube/LightGBM-F1-Score-R), and most of the other answers apply to Python.
My options are:
Implement macro-F1 in lgb.cv, which is what I do not know how to do (a sketch of this option follows the list).
Apply my macro-F1 function to a manual cross-validation routine I have written and use it with lgb.train (although I suspect this would not be as optimized as lgb.cv).
Switch to Python.
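Option 1 is feasible because lgb.cv accepts a custom evaluation function. Below is a minimal sketch. It assumes the usual lightgbm conventions: for a multiclass model, preds is a vector of length num_class * num_data laid out class-major (all scores for class 0 first), labels are 0-based, and getinfo() retrieves the labels (newer versions use get_field()). The names num_class and macro_f1 are my own placeholders:

    library(lightgbm)

    num_class <- 3  # set to your number of classes

    # Custom eval: macro-F1 = mean of per-class F1 scores
    macro_f1 <- function(preds, dtrain) {
      labels <- getinfo(dtrain, "label")         # 0-based class ids
      prob   <- matrix(preds, ncol = num_class)  # assumed class-major layout
      pred   <- max.col(prob) - 1                # predicted 0-based class ids
      f1s <- sapply(0:(num_class - 1), function(k) {
        tp <- sum(pred == k & labels == k)
        fp <- sum(pred == k & labels != k)
        fn <- sum(pred != k & labels == k)
        if (tp == 0) return(0)
        prec <- tp / (tp + fp)
        rec  <- tp / (tp + fn)
        2 * prec * rec / (prec + rec)
      })
      list(name = "macro_f1", value = mean(f1s), higher_better = TRUE)
    }

    # Then pass it to lgb.cv, disabling the built-in metric:
    # cv <- lgb.cv(params = list(objective = "multiclass",
    #                            num_class = num_class, metric = "None"),
    #              data = dtrain, nfold = 5, eval = macro_f1)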

Preventing underforecasting of support vector regression in R

I'm currently using the e1071 package in R to forecast product demand using support vector regression via the svm function in the package. While support vector regression yields much higher forecast accuracy for my data compared to other methods (e.g. ARIMA, simple exponential smoothing), my results show that the svm function tends to underforecast. In my particular case, underforecasting is worse and much more expensive than overforecasting. Therefore, I want to implement something in R that tells support vector regression to penalize underforecasting much more than overforecasting.
Unfortunately, I can't really find any way to do this. There seems to be nothing on this in the e1071 package. The kernlab package has a support vector function (ksvm) that implements an 'eps-bsvr bound-constraint svm regression', but I can't find any information on what is meant by bound-constraint or how to define that bound.
Has anyone seen any examples of how to do this in R? I'm only finding very mathematical papers on asymmetric loss functions for support vector regression, and I don't have the skills to translate these into R code, so I'm looking for an already existing solution in R.
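One workaround, short of rewriting the SVR loss itself: e1071's tune() accepts a custom error measure via tune.control(error.fun = ...), so you can at least select cost and epsilon under an asymmetric penalty. This does not make the fitted loss asymmetric, but it steers hyperparameter selection away from models that underforecast. A sketch, where train_df, demand, and the 5x penalty factor are placeholders:

    library(e1071)

    # Asymmetric error: underforecasts (actual > predicted) cost 5x more
    asym_loss <- function(true, pred) {
      err <- true - pred
      mean(ifelse(err > 0, 5 * err^2, err^2))
    }

    tuned <- tune(svm, demand ~ ., data = train_df,   # placeholders
                  ranges = list(cost = 2^(0:4), epsilon = c(0.01, 0.1, 0.5)),
                  tunecontrol = tune.control(error.fun = asym_loss))
    best_model <- tuned$best.model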

Decision boundary SVM caret (R)

I have built an SVM-RBF model in R using caret. Is there a way of plotting the decision boundary?
I know it is possible to do so using other R packages, but unfortunately I'm forced to use caret because it is the only package I have found that allows me to calculate variable importance.
Alternatively, can you suggest a package that allows plotting the decision boundary AND also gives variable importance?
Thank you very much.
First of all, unlike some other methods, SVM does not produce a feature importance measure of its own. In your case, the importance score caret reports is calculated independently of the method itself: https://topepo.github.io/caret/variable-importance.html#model-independent-metrics
Second, the decision boundary (or hyperplane) you see in most textbook examples is based on a toy problem with only two or three features. If you have more than three features, it is not trivial to visualize this hyperplane.
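With only two features, though, the standard trick works on a caret model: predict over a dense grid of the two features and draw the contour where the predicted class flips. A sketch using two iris features and method = "svmRadial" (which needs kernlab installed); everything here is illustrative, not a plotting API that caret provides:

    library(caret)

    # Two-class, two-feature toy problem
    set.seed(1)
    dat <- iris[iris$Species != "setosa",
                c("Sepal.Length", "Sepal.Width", "Species")]
    dat$Species <- droplevels(dat$Species)

    fit <- train(Species ~ ., data = dat, method = "svmRadial",
                 trControl = trainControl(method = "cv", number = 5))
    varImp(fit)  # the model-independent importance discussed above

    # Predict on a 200 x 200 grid spanning the two features
    grid <- expand.grid(
      Sepal.Length = seq(min(dat$Sepal.Length), max(dat$Sepal.Length),
                         length.out = 200),
      Sepal.Width  = seq(min(dat$Sepal.Width),  max(dat$Sepal.Width),
                         length.out = 200)
    )
    grid$pred <- as.numeric(predict(fit, grid))  # classes become 1 and 2

    # Data points plus the contour where the predicted class changes
    plot(dat$Sepal.Length, dat$Sepal.Width, col = dat$Species, pch = 19,
         xlab = "Sepal.Length", ylab = "Sepal.Width")
    contour(unique(grid$Sepal.Length), unique(grid$Sepal.Width),
            matrix(grid$pred, nrow = 200), levels = 1.5,
            add = TRUE, drawlabels = FALSE)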

AdaBoost in R with any classifier

There is an implementation of the AdaBoost algorithm in R: the boosting function in the adabag package.
The problem is that this package uses classification trees as the base (weak) learner.
Is it possible to substitute the original weak learner with any other (e.g., SVM or neural networks) using this package?
If not, are there any examples of AdaBoost implementations in R?
Many thanks!
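adabag's boosting is hard-wired to rpart trees, but the AdaBoost.M1 loop itself is short enough to write against any learner. A sketch follows, under the assumption that the base learner lacks per-sample weights, so weighting is approximated by weighted resampling; fit_fun, pred_fun, adaboost_m1, and predict_adaboost are hypothetical names, not part of any package:

    # AdaBoost.M1 with an arbitrary base learner, via weighted resampling.
    # fit_fun(data) must return a fitted model; pred_fun(model, data) must
    # return a factor of predicted class labels.
    adaboost_m1 <- function(data, y_col, fit_fun, pred_fun, M = 25) {
      n <- nrow(data)
      y <- data[[y_col]]
      w <- rep(1 / n, n)                  # uniform initial weights
      models <- vector("list", M)
      alphas <- numeric(M)
      for (m in seq_len(M)) {
        idx <- sample(n, n, replace = TRUE, prob = w)   # weighted bootstrap
        models[[m]] <- fit_fun(data[idx, , drop = FALSE])
        pred <- pred_fun(models[[m]], data)
        err  <- sum(w * (pred != y))      # weighted training error
        if (err == 0 || err >= 0.5) { M <- m - 1; break }
        alphas[m] <- log((1 - err) / err)
        w <- w * exp(alphas[m] * (pred != y))  # upweight misclassified rows
        w <- w / sum(w)
      }
      list(models = models[seq_len(M)], alphas = alphas[seq_len(M)],
           levels = levels(y), pred_fun = pred_fun)
    }

    # Weighted majority vote over the ensemble
    predict_adaboost <- function(ens, newdata) {
      score <- matrix(0, nrow(newdata), length(ens$levels))
      for (m in seq_along(ens$models)) {
        j <- match(as.character(ens$pred_fun(ens$models[[m]], newdata)),
                   ens$levels)
        score[cbind(seq_len(nrow(newdata)), j)] <-
          score[cbind(seq_len(nrow(newdata)), j)] + ens$alphas[m]
      }
      factor(ens$levels[max.col(score)], levels = ens$levels)
    }

    # Example with an SVM weak learner (needs e1071):
    # library(e1071)
    # ens <- adaboost_m1(iris, "Species",
    #                    fit_fun  = function(d) svm(Species ~ ., data = d),
    #                    pred_fun = function(mod, d) predict(mod, d), M = 10)
    # table(predict_adaboost(ens, iris), iris$Species)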

Parameters C and epsilon as vectors in kernlab's ksvm in R

I am trying to use the ksvm function of the kernlab package in R for epsilon-SVM regression. I want to supply the parameters C (regularization constant) and epsilon (insensitivity) as vectors (vector length = training data length), but I am not able to figure out how to do this. Please suggest a way.
Why do you assume that you can do this? According to the documentation of ksvm, you can only weight classes, not individual samples. Such a modification is available in, for example, Python's scikit-learn library (as sample weights).
To artificially implement per-sample C weights you could oversample your data. It will be very inefficient (especially if you have large differences in C values), but it can be applied to almost any SVM library.
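A sketch of that oversampling workaround for eps-SVR in kernlab; the integer weights vector is a hypothetical choice of per-sample emphasis, and duplicating a row k times roughly mimics giving it k times the C of the other rows:

    library(kernlab)

    x <- as.matrix(mtcars[, c("wt", "hp")])
    y <- mtcars$mpg
    weights <- ifelse(y > median(y), 3L, 1L)  # e.g. 3x emphasis on high-mpg rows

    idx <- rep(seq_len(nrow(x)), times = weights)  # duplicate rows by weight
    fit <- ksvm(x[idx, ], y[idx], type = "eps-svr",
                kernel = "rbfdot", C = 1, epsilon = 0.1)
    pred <- predict(fit, x)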
