AdaBoost in R with any classifier

There is an implementation of the AdaBoost algorithm in R; see this link, where the function is called boosting.
The problem is that this package uses classification trees as the base (weak) learner.
Is it possible to substitute any other classifier (e.g., an SVM or a neural network) for the original weak learner in this package?
If not, are there any examples of implementing AdaBoost in R?
Many thanks!
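The boosting loop itself is learner-agnostic, so one answer is to write it directly. Below is a minimal base-R sketch of discrete AdaBoost (AdaBoost.M1 for two classes coded as -1/+1); `fit_fun` and `predict_fun` are hypothetical hooks for any weak learner that accepts case weights, and the decision stump at the end is only a stand-in:

```r
# Generic AdaBoost sketch: any weak learner can be plugged in via
# fit_fun(x, y, w) and predict_fun(fit, x), provided it honours case
# weights w and predicts labels in {-1, +1}.
adaboost <- function(x, y, fit_fun, predict_fun, rounds = 25) {
  n <- length(y)
  w <- rep(1 / n, n)                  # start with uniform case weights
  models <- list(); alphas <- numeric(0)
  for (m in seq_len(rounds)) {
    fit  <- fit_fun(x, y, w)
    pred <- predict_fun(fit, x)
    err  <- sum(w * (pred != y))      # weighted training error
    if (err >= 0.5) break             # weak learner no better than chance
    err  <- max(err, 1e-10)           # guard against division by zero
    alpha <- 0.5 * log((1 - err) / err)
    w <- w * exp(-alpha * y * pred)   # upweight misclassified cases
    w <- w / sum(w)                   # renormalise
    models[[m]] <- fit; alphas[m] <- alpha
  }
  structure(list(models = models, alphas = alphas,
                 predict_fun = predict_fun), class = "adaboost")
}

predict.adaboost <- function(object, x, ...) {
  scores <- Reduce(`+`, Map(function(f, a) a * object$predict_fun(f, x),
                            object$models, object$alphas))
  sign(scores)                        # weighted majority vote
}

## Illustration with a one-variable decision stump as the weak learner
stump_fit <- function(x, y, w) {
  cut <- median(x)
  # orient the stump to minimise the weighted error
  dir <- if (sum(w * (sign(x - cut) != y)) <= 0.5) 1 else -1
  list(cut = cut, dir = dir)
}
stump_predict <- function(fit, x) fit$dir * sign(x - fit$cut)

set.seed(1)
x <- c(rnorm(50, -2), rnorm(50, 2))
y <- rep(c(-1, 1), each = 50)
model <- adaboost(x, y, stump_fit, stump_predict)
mean(predict(model, x) == y)          # training accuracy
```

To plug in an SVM, note that e1071::svm() exposes only class weights, so per-case weights would have to be emulated by resampling the training set proportionally to w; nnet::nnet(), by contrast, accepts a weights argument directly.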

Related

Looking for R package or other possibility in R for General Bayesian Network Classification

Hello stackoverflow community,
I'm working on a university project in which we try to create a Bayesian network classifier from data in R.
Ideally, the classifier should be based on a General Bayesian Network (GNB) or a BN-Augmented Naive Bayes (BAN).
Unfortunately, I have yet to find a suitable package to create either of those networks in R.
My research led me to the following two packages:
bnclassify, the most prominent package for BN classification, doesn't include GNBs or BANs at all.
bnlearn offers the possibility to learn GNBs, but according to its creator, the learning is focused on returning the correct dependence structure rather than on maximizing predictive accuracy for classification. I've tried to use them for my classification problem nonetheless, but the results were underwhelming.
So my question is whether anyone knows an R package to classify with GNBs or BANs,
OR how to work with the GNBs from bnlearn to improve their predictive accuracy for classification problems.
Thank you in advance for your help.
Best Regards
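For what it's worth, bnlearn does ship two dedicated classifiers, naive.bayes() and the tree-augmented tree.bayes(); neither is a full GNB or BAN, but as a hedged sketch of the classification workflow (using bnlearn's bundled learning.test data, with variable "A" standing in for the class):

```r
library(bnlearn)

data(learning.test)                        # six discrete variables, A..F
# Tree-Augmented Naive Bayes: the closest built-in relative of a BAN
tan    <- tree.bayes(learning.test, "A")
fitted <- bn.fit(tan, learning.test, method = "bayes")
pred   <- predict(fitted, learning.test)   # posterior-based class prediction
mean(pred == learning.test$A)              # training accuracy
```

For a hand-built GNB from hc() or similar, predict() on the bn.fit object with the class node named is the analogous step, though, as noted above, structure learning there optimises fit to the joint distribution rather than classification accuracy.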

Is it possible to build a random forest with model based trees i.e., `mob()` in partykit package

I'm trying to build a random forest using model-based regression trees from the partykit package. I have built a model-based tree using the mob() function with a user-defined fit() function that returns an object at each terminal node.
partykit also provides cforest(), but it uses only ctree()-type trees. I want to know whether it is possible to modify cforest(), or to write a new function, to build random forests from model-based trees that return objects at the terminal nodes; I want to use those objects for predictions. Any help is much appreciated. Thank you in advance.
Edit: The tree I have built is similar to the one here -> https://stackoverflow.com/a/37059827/14168775
How do I build a random forest using a tree similar to the one in above answer?
At the moment, there is no canned solution for general model-based forests using mob(), although most of the building blocks are available. However, we are currently reimplementing the backend of mob() so that we can leverage the infrastructure underlying cforest() more easily. Also, mob() is quite a bit slower than ctree(), which is somewhat inconvenient when learning forests.
The best alternative, currently, is to use cforest() with a custom ytrafo. These can also accommodate model-based transformations, very much like the scores in mob(). In fact, in many situations ctree() and mob() yield very similar results when provided with the same score function as the transformation.
A worked example is available in this conference presentation:
Heidi Seibold, Achim Zeileis, Torsten Hothorn (2017).
"Individual Treatment Effect Prediction Using Model-Based Random Forests."
Presented at Workshop "Psychoco 2017 - International Workshop on Psychometric Computing",
WU Wirtschaftsuniversität Wien, Austria.
URL https://eeecon.uibk.ac.at/~zeileis/papers/Psychoco-2017.pdf
The special case of model-based random forests for individual treatment effect prediction was also implemented in a dedicated package model4you that uses the approach from the presentation above and is available from CRAN. See also:
Heidi Seibold, Achim Zeileis, Torsten Hothorn (2019).
"model4you: An R Package for Personalised Treatment Effect Estimation."
Journal of Open Research Software, 7(17), 1-6.
doi:10.5334/jors.219
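A hedged sketch of the model4you workflow described in that paper (function names are from the package; the data here are simulated purely for illustration):

```r
library(model4you)

# Simulated data: the treatment effect differs by age (hypothetical setup)
set.seed(1)
n <- 500
d <- data.frame(trt = factor(rbinom(n, 1, 0.5)),
                age = runif(n, 20, 70))
d$y <- 1 + ifelse(d$trt == "1" & d$age > 45, 0.8, 0) + rnorm(n)

base  <- lm(y ~ trt, data = d)        # base model: one overall treatment effect
frst  <- pmforest(base, ntree = 50)   # forest of model-based trees around it
coefs <- pmodel(frst)                 # personalised coefficients, one row per case
head(coefs)
```

The forest splits on the remaining covariates (here, age) wherever the base model's coefficients appear unstable, which is the mob()-style idea the answer above refers to.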

Chain Classifiers in R

Is there any way to perform chain classification in a multi-label classification problem? I have created a binary relevance model using the mlr package, which uses learners to achieve this. But all the classification models in binary relevance are independent of each other and do not take the inter-dependencies between labels into consideration.
It would be really helpful if I could perform chain classification alongside the binary relevance method to improve my model.
Multilabel classification with other algorithms, such as Classifier Chains, is now available in mlr; check out the updated tutorial: http://mlr-org.github.io/mlr-tutorial/release/html/multilabel/index.html
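As a hedged sketch of that wrapper (using mlr's bundled yeast.task multilabel task; training-set performance only, so purely illustrative):

```r
library(mlr)

# Wrap a binary learner into a classifier chain: each label's model also
# sees the predictions for the labels earlier in the chain, capturing
# label inter-dependencies that binary relevance ignores.
lrn  <- makeLearner("classif.rpart", predict.type = "prob")
cc   <- makeMultilabelClassifierChainsWrapper(lrn)
mod  <- train(cc, yeast.task)
pred <- predict(mod, yeast.task)
performance(pred, measures = list(multilabel.hamloss))
```

The binary relevance counterpart is makeMultilabelBinaryRelevanceWrapper(), so switching between the two methods is a one-line change.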

Which decision-tree algorithm does the R package randomForest use?

R has a package for random forests, named randomForest. Its manual can be found here. In the manual it is not mentioned which decision-tree growing algorithm is being used. Is it the ID3 algorithm? Is it something else?
Clarification: I am not asking about the meta-algorithm of the random forest itself. That meta-algorithm uses a base decision-tree algorithm for each of the grown trees. For example, in Python's scikit-learn package, the tree algorithm used is CART (as mentioned here).

SVM function in R

I am using R for a classification problem. Does the svm function in R support only binary classification, or does it support multi-class classification as well?
svm (in package e1071) supports multi-class classification using the one-against-one approach. The same holds for ksvm (in kernlab).
The e1071 R package supports multi-class classification using a one-against-one method.
Here are the classification types in this package:
v-classication: this model allows for more control over the number of support vectors (see Scholkopf et al., 2000) by specifying an additional parameter which approximates the fraction of support vectors;
One-class-classication: this model tries to find the support of a distribution and thus allows for outlier/novelty detection;
Multi-class classication: basically, SVMs can only solve binary classication problems. To allow for multi-class classication, libsvm uses the one-against-one technique by fitting all binary subclassiers and finding the correct class by a voting mechanism;

e-regression: here, the data points lie in between the two borders of the margin which is maximized under suitable conditions to avoid outlier inclusion;
Check https://cran.r-project.org/web/packages/e1071/vignettes/svmdoc.pdf
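A quick illustration of the one-against-one mechanism (using the built-in iris data; training-set predictions only, so just a sketch):

```r
library(e1071)

# iris has three classes; svm() handles this automatically by fitting
# choose(3, 2) = 3 binary classifiers and combining them by voting.
fit <- svm(Species ~ ., data = iris)
table(predicted = predict(fit, iris), actual = iris$Species)
```

No extra arguments are needed for the multi-class case; the voting scheme is applied whenever the response factor has more than two levels.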

Resources