R - party package: is cforest really bagging?

I'm using the "party" package to create a random forest of regression trees.
I've created a ForestControl object to limit the number of trees (ntree), the node depth (maxdepth), and the number of variables used to fit each tree (mtry).
One thing I'm not sure of is whether the cforest algorithm uses a subset of my training set for each tree it generates.
I've seen in the documentation that it is bagging, so I assume it should. But I'm not sure I understand what the "subset" input to that function is.
I'm also puzzled by the results I get using ctree: when plotting the tree, I see that all the observations of my training set are classified into the different terminal nodes, while I would have expected it to use only a subset here too.
So my question is: is cforest doing the same thing as ctree, or is it really bagging my training set?
Thanks in advance for your help!
Ben
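For reference, a minimal sketch of the resampling controls in question, assuming the party package (ntree, mtry, replace, and fraction are the documented cforest_control() arguments; airquality is only an illustrative dataset):

```r
library(party)  # assumes the party package is installed

# cforest_control() exposes the per-tree resampling directly:
# replace = TRUE draws bootstrap samples (classical bagging);
# replace = FALSE with fraction = 0.632 draws subsamples without
# replacement (the cforest_unbiased() default).
set.seed(42)
ctrl <- cforest_control(ntree = 50, mtry = 2,
                        replace = FALSE, fraction = 0.632)
aq  <- airquality[complete.cases(airquality), ]
fit <- cforest(Ozone ~ ., data = aq, controls = ctrl)
fit  # each of the 50 trees was grown on a subsample of the rows
```

So cforest does resample the training set per tree; the `subset` argument, by contrast, just restricts which rows of `data` are used for the whole forest.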

Related

Is it possible to visualize an individual tree from a random forest obtained via tidymodels?

Good day,
for presentation purposes I would like to plot a couple of decision trees from a random forest (with about 100 trees). I found a post from last year where it's clear that this is not really possible, or at least that there is no function for it in tidymodels: R: Tidymodels: Is it possible to plot the trees for a random forest model in tidy models?
I'm wondering if somebody has found a way! I remember I could easily do this using the caret package, but tidymodels makes everything so convenient that I was hoping someone had a solution.
Many thanks!
Summarizing which trees can be plotted with tidymodels, based on the comments and other Stack Overflow posts:
Decision trees. There are some options, but the function rpart.plot() seems to be the most popular.
Individual tree from a random forest. It doesn't seem possible to plot one (yet) using the tidymodels environment. See this post: here
XGBoost models: See Julia's comment:
You should be able to use a function like xgb.plot.tree() with a
trained tidymodels workflow or parsnip model by extracting the
underlying object created with the xgboost engine. You can do this
with extract_fit_engine()
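A minimal sketch of that extraction step, assuming a parsnip model fitted with the xgboost engine (mtcars is only an illustrative dataset):

```r
library(parsnip)
library(xgboost)

# Fit a small boosted-tree model through the tidymodels interface
spec <- boost_tree(trees = 20) |>
  set_engine("xgboost") |>
  set_mode("regression")
mod <- fit(spec, mpg ~ ., data = mtcars)

# Pull the underlying xgb.Booster out of the parsnip wrapper,
# then hand it to xgboost's own plotting function
booster <- extract_fit_engine(mod)
xgb.plot.tree(model = booster, trees = 0)  # trees is 0-indexed
```

The same extract_fit_engine() call works on a fitted workflow, not just a bare parsnip model.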

Decision boundary SVM caret (R)

I have built an SVM-RBF model in R using caret. Is there a way of plotting the decision boundary?
I know it is possible to do so using other R packages, but unfortunately I'm forced to use the caret package, because it is the only package I have found that allows me to calculate variable importance.
Alternatively, can you suggest a package that can plot the decision boundary AND also report variable importance?
Thank you very much
First of all, unlike some other methods, SVM does not produce a feature importance of its own. In your case, the importance score caret reports is calculated independently of the model itself: https://topepo.github.io/caret/variable-importance.html#model-independent-metrics
Second, the decision boundary (or hyperplane) you see in most textbook examples comes from a toy problem with only two or three features. If you have more than three features, it is not trivial to visualize this hyperplane.
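To illustrate the two-feature case, here is a rough sketch of drawing the boundary by predicting over a grid, assuming caret with the kernlab svmRadial engine (iris reduced to two classes and two predictors, and the sigma/C values, are purely illustrative):

```r
library(caret)

# Two classes, two predictors, so the boundary can be drawn in 2-D
df <- droplevels(subset(iris, Species != "setosa",
                        select = c(Petal.Length, Petal.Width, Species)))
set.seed(1)
fit <- train(Species ~ ., data = df, method = "svmRadial",
             tuneGrid = data.frame(sigma = 0.5, C = 1),
             trControl = trainControl(method = "none"))

# Predict over a fine grid, then draw the contour where the class changes
xs <- seq(min(df$Petal.Length), max(df$Petal.Length), length.out = 100)
ys <- seq(min(df$Petal.Width),  max(df$Petal.Width),  length.out = 100)
grid <- expand.grid(Petal.Length = xs, Petal.Width = ys)
z <- matrix(as.numeric(predict(fit, grid)), nrow = length(xs))
plot(df$Petal.Length, df$Petal.Width, col = df$Species, pch = 19)
contour(xs, ys, z, levels = 1.5, add = TRUE, drawlabels = FALSE)
```

With more than two predictors you would have to fix the remaining features at some value (e.g. their medians) and the plot would only show a slice of the true boundary.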

How can I plot a tree selected from the random forest created using "caret" package in R

I am a newbie in R and I need to know how to plot a tree selected from a random forest training model created using the train() function in the caret package.
First and foremost, I used a training dataset to create a fitted random forest model using the train() function. The created random forest contains about 500 trees. Is there any methodology to create a plot of a selected tree?
Thank you.
CRAN package party offers a method called prettyTree.
Look here
As far as I know, the randomForest package does not have any built-in functionality to plot individual trees. You can extract a tree using the getTree() function, but nothing is provided to plot or visualize it. This question may be a duplicate: a quick search yielded approaches other people have used to extract trees from a random forest, found
here and here and here
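A quick sketch of that extraction step, assuming the randomForest package (iris is only for illustration):

```r
library(randomForest)

# getTree() returns one tree as a data frame -- there is no built-in
# plot method for it, but the full split structure is there to inspect
set.seed(7)
rf <- randomForest(Species ~ ., data = iris, ntree = 100)
tr <- getTree(rf, k = 25, labelVar = TRUE)  # k picks which of the 100 trees
head(tr)  # left/right daughter, split var, split point, status, prediction
```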

What is the interpretation of the plot boxes of a Logistic Model Tree (LMT) outcome in the RWeka package in R?

I'm working on a user classification with 5 known groups (observations approximately equally divided over the groups). I have information about these users (like age, living area, ...) and try to find the characteristics that identify the users in each group.
For this purpose I use the RWeka package in R (a collection of machine learning algorithms: http://cran.r-project.org/web/packages/RWeka/RWeka.pdf). To find the characteristics that distinguish between my groups I use Logistic Model Trees (LMT). There is only a little information about this function:
I will try to sketch an example of a plotted tree.
The splits are straightforward to interpret, but in each terminal node there is a box filled with:
LM_24: 48/96
(20742)
What does this mean? How can I see in which of the five groups the node ends?
With what function can I retrieve the coefficients used in the model? Such that the influence of the variables can be studied.
(I did look into other methods for building trees on these data, but both the regression and classification tree packages (like rpart, party) find only one terminal node in my data, while the LMT function finds 6 split nodes.)
I hope you can provide me with an answer or some help with this function. Thanks a lot!
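For the coefficients question, printing the fitted model is usually the first thing to try; a rough sketch assuming the RWeka package (which needs a working Java setup) and the iris data:

```r
library(RWeka)

# Fit a Logistic Model Tree; printing the model lists each leaf's
# logistic model (LM_k) together with its coefficients per class
fit <- LMT(Species ~ ., data = iris)
print(fit)    # tree structure plus the per-leaf logistic regressions
summary(fit)  # training-set confusion matrix across the groups
```

The per-class coefficients in that printout show which variables drive membership in each of the five groups at a given leaf.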

How to do feature selection with randomForest package?

I'm using randomForest in order to find the most significant variables. I was expecting some output that reports the accuracy of the model and also ranks the variables by their importance. But I am a bit confused now. I tried randomForest and then ran importance() to extract the importance of the variables.
But then I saw another command, rfcv (Random Forest Cross-Validation for feature selection), which I suppose should be the most appropriate for this purpose, but the question I have regarding this is: how do I get the list of the most important variables? How do I see the output after running it? Which command should I use?
Another thing: what is the difference between randomForest and predict.randomForest?
I am not very familiar with randomForest and R, therefore any help would be appreciated.
Thank you in advance!
After you have built a randomForest model, you use predict.randomForest to apply the model you created to new data, e.g. build a random forest with the training data, then run your validation data through that model with predict.randomForest.
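That workflow, sketched on iris (the names and the train/validation split are only illustrative):

```r
library(randomForest)

set.seed(1)
idx      <- sample(nrow(iris), 100)
train_df <- iris[idx, ]
valid_df <- iris[-idx, ]

# Build the forest on the training data, with importance scores
rf <- randomForest(Species ~ ., data = train_df, importance = TRUE)
importance(rf)  # ranks variables (MeanDecreaseAccuracy / MeanDecreaseGini)

# Run the validation data through the model; predict() dispatches
# to predict.randomForest for a randomForest object
pred <- predict(rf, valid_df)
mean(pred == valid_df$Species)  # validation accuracy
```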
As for rfcv, there is an option recursive which (from the help) controls:
whether variable importance is (re-)assessed at each step of variable
reduction
It's all in the help file.
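A minimal rfcv sketch under the same assumptions; note that rfcv reports the cross-validated error for decreasing numbers of predictors, while the ranking itself still comes from importance() on a full fit:

```r
library(randomForest)

set.seed(1)
cv <- rfcv(trainx = iris[, -5], trainy = iris$Species,
           cv.fold = 5, recursive = TRUE)
cv$error.cv  # named vector: CV error rate at each number of variables kept
```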
