Random forest regression output calculation - R

Hi, this is a purely theoretical question which I can't get my head around (and I could be completely wrong).
With random forest regression you grow n trees; each tree uses a subset of the data and, in some cases, a subset of the available variables to predict the dependent variable. The average of these n trees is taken to give us a predicted value. However, is there any need to look at the distribution of predictions at the individual-tree level? Are we able to obtain a number that provides some certainty about the overall predicted value? I would assume that a more consistent number being produced at the individual-tree level would be preferred over a wide variety of numbers?
Thanks in advance

This method of determining variable importance has some drawbacks. For data including categorical variables with different numbers of levels, random forests are biased in favor of attributes with more levels. Methods such as partial permutations and growing unbiased trees can be used to address the problem. If the data contain groups of correlated features of similar relevance for the output, then smaller groups are favored over larger groups.
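As to the original question about per-tree spread: randomForest exposes exactly that through predict(..., predict.all = TRUE). A minimal sketch with invented data (all names illustrative); the standard deviation or quantile range across trees gives a rough, informal sense of how much the trees agree, though it is not a calibrated prediction interval:

library(randomForest)

# toy regression data, purely for illustration
set.seed(1)
train <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
train$y <- train$x1 + 0.5 * train$x2 + rnorm(200)
rf <- randomForest(y ~ ., data = train, ntree = 500)

# predict.all = TRUE returns both the aggregate and the per-tree predictions
pred <- predict(rf, newdata = train[1:5, ], predict.all = TRUE)
pred$aggregate                                          # the usual averaged prediction
apply(pred$individual, 1, sd)                           # per-tree spread, one value per row
t(apply(pred$individual, 1, quantile, c(0.05, 0.95)))   # rough 90% band across trees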

Related

Extracting linear term from a polynomial predictor in a GLM

I am relatively new to both R and Stack Overflow, so please bear with me. I am currently using GLMs to model ecological count data under a negative binomial distribution in brms. Here is my general model structure, which I have chosen based on fit, convergence, low LOOIC when compared to other models, etc.:
My goal is to characterize population trends of study organisms over the study period. I have created marginal effects plots by using the model to predict on a new dataset where all covariates are constant except year (shaded areas are 80% and 95% credible intervals for posterior predicted means).
I am now hoping to extract trend magnitudes that I can report and compare across species (i.e. say a certain species declined or increased by x% (+/- y%) per year). Because I use poly() in the model, my understanding is that R uses orthogonal polynomials, and the resulting polynomial coefficients are not easily interpretable. I have tried generating raw polynomials (setting raw=TRUE in poly()), which I thought would produce the same fit and have directly interpretable coefficients. However, the resulting models don't really run (after 5 hours neither chain gets through even a single iteration, whereas the same model with raw=FALSE takes only a few minutes to run). Very simplified versions of the model (e.g. count ~ poly(year, 2, raw=TRUE)) do run, but take several orders of magnitude longer than with raw=FALSE, and the resulting model also predicts different counts than the model with orthogonal polynomials. My questions are: (1) what is going on here? and (2) more broadly, how can I feasibly extract the linear term of the quartic polynomial describing the response to year, or otherwise get at a value corresponding to population trend?
I feel like this should be relatively simple, and I apologize if I'm overlooking something obvious. Please let me know if there is further code I should share for more clarity; I didn't want to make the initial post crazy long, but I'm happy to show specific predictions from different models or anything else. Thank you for any help.
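As a quick aside on the raw-vs-orthogonal point: you can convince yourself in plain lm() that the two bases describe exactly the same fit, just with different coefficients. A minimal sketch with invented data:

# orthogonal vs raw polynomials: identical fitted values, different coefficients
set.seed(42)
year  <- 2000:2019
count <- rpois(20, lambda = 5 + 0.2 * (year - 2000))
m_orth <- lm(count ~ poly(year, 2))               # orthogonal basis (default)
m_raw  <- lm(count ~ poly(year, 2, raw = TRUE))   # raw powers of year
all.equal(fitted(m_orth), fitted(m_raw))          # TRUE
coef(m_orth); coef(m_raw)                         # very different coefficients

The sampling slowdown with raw = TRUE is most likely a scaling problem rather than anything brms-specific: with year values around 2000, year^2 is on the order of 4e6 and the basis columns are huge and nearly collinear, which HMC samplers handle very poorly. Centering year first (e.g. poly(year - 2010, 2, raw = TRUE)) usually restores both sampling speed and an interpretable linear term.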

How to assess the model and prediction of random forest when doing regression analysis?

I know that when random forest (RF) is used for classification, the AUC is normally used to assess the quality of the classification after applying it to test data. However, I have no idea which metric assesses the quality of a regression with RF. Now I want to use RF for regression analysis, e.g. using a matrix with several hundred samples and features to predict the (numerical) concentration of chemicals.
The first step is to run randomForest to build the regression model, with y as a continuous numeric. How can I know whether the model is good or not, based on the mean of squared residuals and % Var explained? Sometimes my % Var explained is negative.
Afterwards, if the model is fine and applied to the test data, I get the predicted values. Now how can I assess whether the predicted values are good or not? I read online that some people calculate the accuracy (formula: 1-abs(predicted-actual)/actual), which also makes sense to me. However, I have many zero values in my actual dataset; are there any other ways to assess the accuracy of the predicted values?
Looking forward to any suggestions, and thanks in advance.
The randomForest R package comes with an importance function which can be used to assess the importance of each variable in the model. From the documentation:
importance(x, type=NULL, class=NULL, scale=TRUE, ...), where x is the output from your initial call to randomForest.
There are two types of importance measurement. One uses a permutation of the out-of-bag data to test the accuracy of the model. The other uses the Gini index. Again, from the documentation:
Here are the definitions of the variable importance measures. The first measure is computed from permuting OOB data: For each tree, the prediction error on the out-of-bag portion of the data is recorded (error rate for classification, MSE for regression). Then the same is done after permuting each predictor variable. The difference between the two are then averaged over all trees, and normalized by the standard deviation of the differences. If the standard deviation of the differences is equal to 0 for a variable, the division is not done (but the average is almost always equal to 0 in that case).
The second measure is the total decrease in node impurities from splitting on the variable, averaged over all trees. For classification, the node impurity is measured by the Gini index. For regression, it is measured by residual sum of squares.
One more simple check you may do, really more of a sanity check than anything else, is to compare against the so-called best constant model. The best constant model has a constant output, namely the mean of all responses in the test data set, and can be regarded as the crudest model possible. You may compare the performance of your random forest model against the best constant model on a given set of test data. If the former does not outperform the latter by at least a factor of, say, 3-5, then your RF model is not very good.
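A minimal sketch of that comparison, assuming train and test data frames with a numeric response y (all names illustrative):

library(randomForest)
rf <- randomForest(y ~ ., data = train)
mse_rf    <- mean((test$y - predict(rf, newdata = test))^2)
mse_const <- mean((test$y - mean(test$y))^2)   # best constant model
mse_const / mse_rf         # want this ratio comfortably above 1
1 - mse_rf / mse_const     # a test-set R^2, same scale as % Var explained

Incidentally, this is also what a negative % Var explained means: that statistic is essentially 1 - MSE/Var(y) computed on the out-of-bag data, and it goes negative when the forest predicts worse than simply using the mean.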

Need help in understanding the below plot generated using randomForestExplainer library in R

I am not able to understand what this plot depicts and what is meant by the relation between rankings. Let me add a few more things to make it a bit more precise. This plot is generated using the plot_importance_rankings function available as part of the randomForestExplainer package.
The code which generates this plot is:
plot_importance_rankings(var_importance_frame)
where var_importance_frame contains the variable importance measures, which we get from
var_importance_frame <- measure_importance(rf_model)
Here rf_model is the trained random forest model. A worked example can be found at this link: RandomForestExplainer - sample example
The randomForestExplainer package implements several measures for assessing a given variable's importance in random forest models. On this plot you have mean_min_depth (average minimum depth of a variable across all trees), accuracy_decrease (accuracy lost by randomly permuting a given variable), gini_decrease (average gain in purity from splitting on a given variable), no_of_nodes (number of nodes that split on a given variable across all trees), and times_a_root (number of times a given variable is the root of a tree). Ideally you'd want these importance measures to be reasonably consistent, in that a variable ranked as highly important by one metric is also ranked highly by the others; this plot shows exactly that, as a sanity check. Each dot on the scatter plots represents a variable. In your case the variable importances are largely consistent and positively correlated.
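For a reproducible version of that workflow, using iris purely for illustration (if I recall the package correctly, measure_importance wants the forest grown with localImp = TRUE so the accuracy-based measure is available):

library(randomForest)
library(randomForestExplainer)
set.seed(1)
# localImp = TRUE stores the per-variable accuracy information the explainer uses
rf_model <- randomForest(Species ~ ., data = iris, localImp = TRUE)
var_importance_frame <- measure_importance(rf_model)
var_importance_frame                             # one row of measures per variable
plot_importance_rankings(var_importance_frame)   # pairwise agreement of the rankings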

Simulating data using existing data and probability

I have measured multiple attributes (height, species, crown width, condition etc) for about 1500 trees in a city. Using remote sensing techniques I also have the heights for the rest of the 9000 trees in the city. I want to simulate/generate/estimate the missing attributes for these unmeasured trees by using their heights.
From the measured data I can obtain the proportion of each species in the measured population (and thus a rough probability), height distributions for each species, height-crown width relationships for the species, species-condition relationships and so on. I want to use the height data for the unmeasured trees to first estimate the species and then estimate the rest of the attributes too, using probability theory. So for a height of, say, 25 m, a tree is more likely to be a cedar (height range 5-30 m) than a mulberry (height range 2-8 m), and more likely to be a cedar (50% of the population) than an oak (same height range, but 2% of the population), and hence will be given a crown width of 10 m and a health condition of 95% (based on the distributions for cedar trees in my measured data). But I also expect some of the other 25 m trees to be assigned oak, just less frequently than cedar, based on the proportions in the population.
Is there a way to do this using probability theory in R, preferably utilising Bayesian or machine learning methods?
I'm not asking for someone to write the code for me; I am fairly experienced with R. I just want to be pointed in the right direction, i.e. a package that does this kind of thing neatly.
Thanks!
Because you want to predict a categorical variable, i.e. the species, you should consider a classification tree, a method which can be found in the R packages rpart and randomForest. These models excel when you have a discrete number of categories and you need to slot your observations into those categories. I think those packages would work in your application. As a comparison, you can also look at multinomial regression (mnlogit, nnet, maxent), which can likewise predict categorical outcomes; unfortunately, multinomial regression can get unwieldy with large numbers of outcomes and/or large datasets.
If you then want to predict the individual attribute values for trees within each species, first run a regression of your measured variables, including species, on the measured trees. Then take the categorical labels you predicted and predict out-of-sample for the unmeasured trees, using the labels as predictors for the unmeasured variable of interest, say crown width. That way the regression will predict the average for that species/dummy variable, plus some error, incorporating any other information you have on that out-of-sample tree.
If you want to use a Bayesian method, you might consider a hierarchical regression to model these out-of-sample predictions. Hierarchical models sometimes do better at predicting, as they tend to be fairly conservative. Consider looking at the rstanarm package for some examples. A bare-bones sketch of the two-step idea follows.
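This sketch uses randomForest; the data frames measured and unmeasured, and all column names, are invented, and species must be a factor so the forest does classification. Sampling from the class probabilities, rather than always taking the most likely class, preserves the behaviour you describe where some tall trees are still labelled oak:

library(randomForest)

# step 1: classify species from height on the ~1500 measured trees
sp_model <- randomForest(species ~ height, data = measured)
p <- predict(sp_model, newdata = unmeasured, type = "prob")
unmeasured$species <- factor(
  apply(p, 1, function(pr) sample(colnames(p), 1, prob = pr)),
  levels = colnames(p)
)

# step 2: regress each remaining attribute on height plus (predicted) species
cw_model <- randomForest(crown_width ~ height + species, data = measured)
unmeasured$crown_width <- predict(cw_model, newdata = unmeasured)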
I suggest looking into Bayesian networks with table CPDs over your random variables. This is a generative model that can handle missing data and do inference over causal relationships between variables. The Bayesian network structure can be specified by hand or learned from the data by an algorithm.
R has several implementations of Bayesian networks, with bnlearn being one of them: http://www.bnlearn.com/
Please see a tutorial on how to use it here: https://www.r-bloggers.com/bayesian-network-in-r-introduction/
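A minimal bnlearn sketch, with the structure specified by hand as mentioned above. For the table-CPD (discrete) case all variables must be factors, so height would have to be binned first; everything below is invented data:

library(bnlearn)
set.seed(1)
d <- data.frame(
  height    = factor(sample(c("low", "mid", "tall"), 500, replace = TRUE)),
  species   = factor(sample(c("cedar", "oak", "mulberry"), 500, replace = TRUE)),
  condition = factor(sample(c("poor", "good"), 500, replace = TRUE))
)
dag <- model2network("[height][species|height][condition|species]")  # by hand
# dag <- hc(d)                     # or learn the structure from the data
fit <- bn.fit(dag, d)              # estimate the conditional probability tables
predict(fit, node = "species", data = d[1:5, ])  # infer species from its parents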
For each species, the distribution of the other variables (height, width, condition) is probably a fairly simple bump. You can probably model the height and width as a joint Gaussian distribution; dunno about condition. Anyway, with a joint distribution for the variables other than species, you can construct a mixture distribution of all those per-species bumps, with mixing weights equal to the proportion of each species in the available data. Given the height, you can derive the mixture's conditional distribution of the other variables (it will also be a mixture distribution). Given the conditional mixture, you can sample from it as usual: pick a bump with probability equal to its mixing weight, then sample from the selected bump.
Sounds like a good problem. Good luck and have fun.
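A sketch of the species-given-height step of that mixture, assuming a Gaussian height bump per species; all numbers are invented:

# per-species mixing weight (population share), mean height, and sd of height
spec <- data.frame(
  species = c("cedar", "oak", "mulberry"),
  w       = c(0.50, 0.02, 0.48),
  mu      = c(20, 20, 5),
  sd      = c(5, 5, 1.5)
)
sample_species <- function(h) {
  post <- spec$w * dnorm(h, spec$mu, spec$sd)   # Bayes: prior weight x likelihood
  sample(spec$species, 1, prob = post / sum(post))
}
set.seed(1)
table(replicate(200, sample_species(25)))   # mostly cedar, occasionally oak

Once a species is drawn for a tree, its remaining attributes would be sampled from that species' fitted bump in the same way.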

How do I get individual tree probabilities from Random Forests in R?

I'm using the randomForest package in R on a classification problem (outcome is binary).
I want to get the probability output of each one of the trees (to get a prediction interval).
I've set the predict.all=TRUE argument in the prediction call, but it gives me a matrix of 800 columns (= the number of trees in my forest) in which each entry is a 1 or a 0. How do I get the probability output rather than the class?
PS: my node size is 1, which means this should make sense. However, when I changed the node size to 50, I still got all 0's and 1's, no probabilities.
Here's what I'm doing:
# build the model (node size = 1)
rf <- randomForest(y ~ ., data = train, ntree = 800, replace = TRUE, proximity = TRUE, keep.inbag = TRUE)
# get and store the predictions from all the trees
all_tree_train <- predict(rf, test, type = "prob", predict.all = TRUE)$individual
This gives a matrix of 0's and 1's rather than probabilities.
I realise this question is old, but it might help anyone with a similar question.
If you query the trees for their results, you'll always get the final classifications, which are deterministic given an initialised forest. You can extract probabilities by setting predict.all to TRUE, as you've done, and averaging across the votes to get a probability.
If you have more than 2 classes, the forest classifies an item 'm' as class 'x' with probability
(number of trees which bin m as x)/(number of trees)
As you only have a binary classification, the row means of the indicator matrix (votes equal to class 1) give you, for each observation, the probability of being in class 1.
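Concretely, following the code in the question (this assumes the class labels are "0" and "1", as they appear to be):

# each row is an observation, each column one of the 800 trees
votes <- predict(rf, test, predict.all = TRUE)$individual
prob1 <- rowMeans(votes == "1")   # fraction of trees voting for class 1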
So the documentation for predict.randomForest states:
If predict.all=TRUE, then the individual component of the returned object is a character matrix where each column contains the predicted class by a tree in the forest.
...so it does not appear that it is possible to have a probability returned for each individual tree.
If you want something like a prediction interval for classification, you might try fitting a random forest with many more trees and then generating predictions from many different (random?) subsets of the forest.
One thing you need to be careful of, though, is that you appear to be feeding your training data to predict.randomForest. This will of course give you biased predictions, unless you use the information in the inbag component of the random forest object to select, for each observation, only the trees for which that observation was out of bag; a sketch of that masking follows.
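Since the call in the question already sets keep.inbag = TRUE, the masking might look something like this (class labels again assumed to be "0"/"1"):

# per-tree predictions on the training data: one row per observation, one column per tree
ind <- predict(rf, train, predict.all = TRUE)$individual
oob <- rf$inbag == 0                                      # TRUE where the tree never saw the row
prob1_oob <- rowSums((ind == "1") & oob) / rowSums(oob)   # out-of-bag class-1 probability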
