I'm working with random forest models in R as part of an independent research project. I have fit my random forest model and generated the overall importance of each predictor to the model's accuracy. However, in order to interpret my results in a research paper, I need to understand whether the variables have a positive or negative impact on the response variable.
Is there a way to produce this information from a random forest model? For example, I expect age to have a positive impact on the likelihood that a surgical complication occurs, but the existence of osteoarthritis not so much.
Code:
surgery.bagComp <- randomForest(
  complication ~ ahrq_ccs + age + asa_status + bmi + baseline_cancer + baseline_cvd +
    baseline_dementia + baseline_diabetes + baseline_digestive + baseline_osteoart +
    baseline_psych + baseline_pulmonary,
  data = surgery, mtry = 2, importance = TRUE,
  cutoff = c(0.90, 0.10)) # cutoff sets the voting threshold for each class; probabilities of 10% or higher are classified as 'Complication' occurring
surgery.bagComp #Get stats for random forest model
imp=as.data.frame(importance(surgery.bagComp)) #Analyze the importance of each variable in the model
imp = cbind(vars=rownames(imp), imp)
imp = imp[order(imp$MeanDecreaseAccuracy),]
imp$vars = factor(imp$vars, levels=imp$vars)
dotchart(imp$MeanDecreaseAccuracy, imp$vars,
xlim=c(0,max(imp$MeanDecreaseAccuracy)), pch=16,xlab = "Mean Decrease Accuracy",main = "Complications - Variable Importance Plot",color="black")
Importance Plot:
Any suggestions or areas of research would be greatly appreciated.
In order to interpret my results in a research paper, I need to understand whether the variables have a positive or negative impact on the response variable.
You need to perform a "feature impact" analysis, not a "feature importance" analysis.
Algorithmically, it's about traversing the decision tree data structures and observing the impact of each split on the prediction outcome. For example, consider the split "age <= 40". Does the left branch (condition evaluates to true) carry a lower likelihood than the right branch (condition evaluates to false)?
Feature importances may give you a hint about which features to look at, but they cannot be "transformed" into feature impacts.
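One practical way to see the sign of each predictor's effect in R, distinct from the tree-traversal idea above, is a partial dependence plot. A minimal sketch with randomForest::partialPlot, assuming the surgery.bagComp fit and surgery data frame from the question, and that 'Complication' is the positive class label:
library(randomForest)

# Partial dependence of the prediction on age: an upward-sloping curve suggests that
# higher age is associated with a higher predicted risk of 'Complication'.
partialPlot(surgery.bagComp, pred.data = surgery, x.var = "age",
            which.class = "Complication",
            main = "Partial dependence of complication risk on age")
Repeating this for each predictor gives a rough sense of the direction of each effect, though it averages over the other variables and does not capture interactions.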
You might find the following articles helpful: WHY did your model predict THAT? (Part 1 of 2) and WHY did your model predict THAT? (Part 2 of 2).
I know that when a random forest (RF) is used for classification, the AUC is normally used to assess the quality of the classification after applying it to test data. However, I have no clue which measure to use to assess the quality of regression with RF. I now want to use RF for regression analysis, e.g. using a matrix with several hundred samples and features to predict the concentration (numeric) of chemicals.
The first step is to run randomForest to build the regression model, with y as a continuous numeric variable. How can I tell whether the model is good or not, based on the mean of squared residuals and the % Var explained? Sometimes my % Var explained is negative.
Afterwards, if the model is fine, I apply it to the test data and get the predicted values. How can I then assess whether the predicted values are good or not? I read online that some people calculate the accuracy (formula: 1-abs(predicted-actual)/actual), which also makes sense to me. However, I have many zero values in my actual dataset; are there other ways to assess the accuracy of the predicted values?
Looking forward to any suggestions and thanks in advance.
The randomForest R package comes with an importance function, which can be used to assess how much each variable contributes to the accuracy of the model. From the documentation:
importance(x, type=NULL, class=NULL, scale=TRUE, ...), where x is the output from your initial call to randomForest.
There are two types of importance measure. One uses a permutation of the out-of-bag data to test the accuracy of the model. The other uses the Gini index. Again, from the documentation:
Here are the definitions of the variable importance measures. The first measure is computed from permuting OOB data: For each tree, the prediction error on the out-of-bag portion of the data is recorded (error rate for classification, MSE for regression). Then the same is done after permuting each predictor variable. The difference between the two are then averaged over all trees, and normalized by the standard deviation of the differences. If the standard deviation of the differences is equal to 0 for a variable, the division is not done (but the average is almost always equal to 0 in that case).
The second measure is the total decrease in node impurities from splitting on the variable, averaged over all trees. For classification, the node impurity is measured by the Gini index. For regression, it is measured by residual sum of squares.
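Both measures are returned by the importance function. A minimal sketch, assuming a hypothetical regression forest rf.fit that was fitted with importance = TRUE (the object name is an assumption, not from the question):
importance(rf.fit, type = 1)  # permutation importance (%IncMSE for regression)
importance(rf.fit, type = 2)  # impurity-based importance (IncNodePurity for regression)
varImpPlot(rf.fit)            # plots both measures side by side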
One more simple check you may do, really more of a sanity check than anything else, is to use what is sometimes called the best constant model. The best constant model has a constant output, namely the mean of all responses in the test data set, and can be regarded as the crudest model possible. You may compare the average performance of your random forest model against the best constant model on a given set of test data. If the random forest does not outperform the constant model by at least a factor of, say, 3-5, then your RF model is not very good.
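A minimal sketch of that comparison, again assuming a hypothetical fitted regression forest rf.fit, a held-out data frame test, and a numeric response column y (all names are assumptions):
const.pred <- mean(test$y)                     # best constant model: always predict the mean
rf.pred    <- predict(rf.fit, newdata = test)  # random forest predictions

mse.const <- mean((test$y - const.pred)^2)
mse.rf    <- mean((test$y - rf.pred)^2)

mse.const / mse.rf  # the forest should beat the constant model by a comfortable margin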
NOTE: This question was originally posted on Cross Validated, where it was suggested that it should be asked in StackOverflow instead.
I am trying to model a 3-way repeated measures experiment, FixedFactorA * FixedFactorB * Time[days]. There are no missing observations, but my groups (FactorA * FactorB) are unequal (close, but not completely balanced). From reading online, the best way to model a repeated measures experiment in which observation order matters (due to the response mean and variance changing in a time-dependent way) and for unequal groups is to use a mixed model and specify an appropriate covariance structure. However, I am new to the idea of mixed models and I am confused as to whether I am using the correct syntax to model what I am trying to model.
I would like to do a full factorial analysis, such that I could detect significant time * factor interactions. For example, for subjects with FactorA = 1, their responses over time might have a different slope and/or intercept than subjects with FactorA =2. I also want to be able to check whether certain combinations of FactorA and FactorB have significantly different responses over time (hence the full three-way interaction term).
From reading online, it seems like AR1 is a reasonable covariance structure for longitudinal-like data, so I decided to try that. Also, I saw that one is supposed to use ML if one plans to compare two different models, so I chose that approach in anticipation of needing to fine-tune the model. It is also my understanding that the goal is to minimize the AIC during model selection.
This is the code in the log for what I tried in SPSS (for long-form data), which yielded an AIC of 2471:
MIXED RESPONSE BY FactorA FactorB Day
/CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0,
ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED=FactorA FactorB Day FactorA*FactorB FactorA*Day FactorB*Day FactorA*FactorB*Day | SSTYPE(3)
/METHOD=ML
/PRINT=SOLUTION TESTCOV
/REPEATED=Day | SUBJECT(Subject_ID) COVTYPE(AR1)
This is what I tried in R, which yielded an AIC of 2156:
require(nlme)
#output error fix: https://stats.stackexchange.com/questions/40647/lme-error-iteration-limit-reached
ctrl <- lmeControl(opt='optim') #I used this b/c otherwise I get the iteration limit reached error
fit1 <- lme(RESPONSE ~ Day*FactorA*FactorB, random = ~ Day|Subject_ID, control = ctrl,
            correlation = corAR1(form = ~Day), data = data, method = "ML")
summary(fit1)
These are my questions:
The SPSS code above yielded a model with AIC = 2471, while the R code yielded a model with AIC = 2156. What is it about the codes that makes the models different?
From what I described above, are either of these models appropriate for what I am trying to test? If not, what would be a better way, and how would I do it in both programs to get the same results?
Edits
Another thing to note is that I didn't dummy-code my factors. I don't know if this is a problem for either software, or if the built-in coding is different in SPSS vs R. I also don't know if this will be a problem for my three-way interaction term.
Also, when I say "factor", I mean an unchanging group or characteristic (like "sex").
Start with two unconditional models: one with an identity variance-covariance structure at level 1 and one with an AR(1) variance-covariance structure at level 1:
unconditional.identity<-lme(RESPONSE~Day, random=~Day|Subject_ID, data=data, method='ML')
unconditional.ar1<-lme(RESPONSE~Day, random=~Day|Subject_ID, correlation=corAR1(form=~Day), data=data, method='ML')
Find the intra-class correlation coefficient for each unconditional model, which is the level-2 variance divided by the sum of the level-1 and level-2 variances. This is probably easier in a spreadsheet program, but in R:
intervals(unconditional.identity)$reStruct$Subject_ID[2]^2/(intervals(unconditional.identity)$reStruct$Subject_ID[2]^2+intervals(unconditional.identity)$sigma[2]^2)
intervals(unconditional.ar1)$reStruct$Subject_ID[2]^2/(intervals(unconditional.ar1)$reStruct$Subject_ID[2]^2+intervals(unconditional.ar1)$sigma[2]^2)
It depends on your field, but in educational research an ICC below 0.2, and definitely below 0.1, is generally considered too low for hierarchical linear models. That is to say, multiple regression would be preferable because the assumption of independence is reasonable. If your ICC is below the cutoff for your field, then do not use a hierarchical longitudinal model.
If your ICC is acceptable for hierarchical linear models, then add in your control grouping variable with identity and AR(1) var-covar matrix:
conditional1.identity<-lme(RESPONSE~Day+Group, random=~Day+Group|Subject_ID, data=data, method='ML')
conditional1.ar1<-lme(RESPONSE~Day+Group, random=~Day+Group|Subject_ID, correlation=corAR1(form=~Day), data=data, method='ML')
If your factors are time-invariant (which you said on Cross Validated), then your model gets bigger because time and group are nested in these fixed effects:
conditional2.identity <- lme(RESPONSE ~ Day+Group+FactorA+FactorB+FactorA*Day+FactorB*Day+FactorA*Group+FactorB*Group, random=~Day+Group|Subject_ID, data=data, method='ML')
conditional2.ar1 <- lme(RESPONSE ~ Day+Group+FactorA+FactorB+FactorA*Day+FactorB*Day+FactorA*Group+FactorB*Group, random=~Day+Group|Subject_ID, correlation=corAR1(form=~Day), data=data, method='ML')
You can get confidence intervals on the coefficients with intervals() or p-values with summary(). Remember, lme reports error terms in standard deviation format.
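A minimal illustration, assuming one of the fits above (say conditional2.ar1) converged:
summary(conditional2.ar1)    # fixed-effect estimates, standard errors, and p-values
intervals(conditional2.ar1)  # approximate confidence intervals; random effects are reported as SDs
                             # (intervals() can fail for complex random-effects structures)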
I do not know your area of study, so I can't say if your three-way interaction effect makes theoretical sense. But your model is getting quite dense at this point. The more parameters you estimate, the more degrees of freedom the model has when you compare them, so the statistical significance will be biased. If you are really interested in a three-way interaction effect, I suggest you consider the theoretical meaning of such an interaction and what it would mean if such an interaction did occur. Nonetheless, you can estimate it by adding it to the code above:
conditional3.identity <- lme(RESPONSE ~ Day+Group+FactorA+FactorB+FactorA*Day+FactorB*Day+FactorA*Group+FactorB*Group+Day*FactorA*FactorB, random=~Day+Group|Subject_ID, data=data, method='ML')
conditional3.ar1 <- lme(RESPONSE ~ Day+Group+FactorA+FactorB+FactorA*Day+FactorB*Day+FactorA*Group+FactorB*Group+Day*FactorA*FactorB, random=~Day+Group|Subject_ID, correlation=corAR1(form=~Day), data=data, method='ML')
Finally, compare the nested models:
anova(unconditional.identity,conditional1.identity,conditional2.identity,conditional3.identity)
anova(unconditional.ar1,conditional1.ar1,conditional2.ar1,conditional3.ar1)
Like I said, the more parameters you estimate, the more biased your statistical significance will be: i.e., more parameters = more degrees of freedom = less chance of a statistically significant model.
HOWEVER, the best part about multi-level models is comparing effect sizes, so then you don't have to worry about p-values at all. Effect sizes are in the form of a "proportional reduction in variance explained."
This is done by comparing models. For example, to compare the proportional reduction in level-1 variance from the unconditional model to the conditional1 model:
(intervals(unconditional.identity)$sigma[2]^2 - intervals(conditional1.identity)$sigma[2]^2) / intervals(unconditional.identity)$sigma[2]^2
Hopefully you can "plug and play" the same code for the number of level-2 error terms you have (which is more than one in some of your cases). Make sure to compare only nested models in this way.
I'm trying to find a feature selection package in R that can be used for regression; most of the packages implement their methods for classification, using a factor or class as the response variable. In particular, I'm interested in whether there is a method using Random Forest for that purpose. A good paper in this field would also be helpful.
IIRC the randomForest package also does regression trees. You could start with the Breiman paper and go from there.
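A minimal sketch of using randomForest for regression-based feature ranking, assuming a data frame df with a numeric response column y (both hypothetical names):
library(randomForest)

# randomForest() fits regression trees automatically when the response is numeric.
rf.reg <- randomForest(y ~ ., data = df, importance = TRUE, ntree = 500)

importance(rf.reg, type = 1)  # permutation importance (%IncMSE) for regression
varImpPlot(rf.reg)            # quick visual ranking of the predictors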
There are many ways you can use random forests to calculate variable importance.
I. Mean Decrease Impurity (MDI) / Gini Importance:
This makes use of a random forest model or a decision tree. When a tree is trained, each feature's importance is measured by how much it decreases the weighted impurity in the tree. For a forest, the impurity decrease from each feature is averaged over all trees and the features are ranked by this measure. Here is an example of the same in R.
fit <- randomForest(Target ~.,importance = T,ntree = 500, data=training_data)
var.imp1 <- data.frame(importance(fit, type=2))
var.imp1$Variables <- row.names(var.imp1)
varimp1 <- var.imp1[order(var.imp1$MeanDecreaseGini,decreasing = T),]
par(mar=c(10,5,1,1))
giniplot <- barplot(t(varimp1[-2]/sum(varimp1[-2])),las=2,
cex.names=1,
main="Gini Impurity Index Plot")
And the output will look like this: Gini Importance Plot
II. Permutation Importance / Mean Decrease in Accuracy (MDA):
This is assessed for each feature by breaking the association between that feature and the target, which is achieved by randomly permuting the values of the feature and measuring the resulting increase in error. The influence of correlated features is also removed. Example in R:
fit <- randomForest(Target ~.,importance = T,ntree = 500, data=training_data)
var.imp1 <- data.frame(importance(fit, type=1))
var.imp1$Variables <- row.names(var.imp1)
varimp1 <- var.imp1[order(var.imp1$MeanDecreaseAccuracy,decreasing = T),]
par(mar=c(10,5,1,1))
permplot <- barplot(t(varimp1[-2]/sum(varimp1[-2])),las=2,
                    cex.names=1,
                    main="Permutation Importance Plot")
These two are the approaches that use Random Forest directly. There are also some other, easier-to-use methods for variable importance: the 'Boruta' method, as well as Weight of Evidence (WOE) and Information Value (IV), might also be helpful.
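A minimal Boruta sketch, assuming the Boruta package is installed and the same hypothetical training_data and Target as above:
library(Boruta)

# Boruta runs random-forest-based all-relevant feature selection and labels each
# attribute as Confirmed, Rejected, or Tentative.
set.seed(1)
boruta.out <- Boruta(Target ~ ., data = training_data)
print(boruta.out)
attStats(boruta.out)  # per-attribute importance statistics and decisions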
I want to examine which variable has the most impact on the outcome in my data, which is the stock yield. My data looks like the below.
And my code is also attached.
library(randomForest)
require(data.table)
data = fread("C:/stockcrazy.csv")
PEratio <- data$offeringPE/data$industryPE
data_update <- data.frame(data,PEratio)
train <- data_update[1:47,]
test <- data_update[48:57,]
For the train and test subsets above, I am not sure whether I need to do cross-validation on these data, and I don't know how I would do it.
data.model <- randomForest(yield ~ offerings + offerprice + PEratio + count + bingo
+ purchase , data=train, importance=TRUE)
par(mfrow = c(1, 1))
varImpPlot(data.model, n.var = 6, main = "Random Forests: Top 6 Important Variables")
importance(data.model)
plot(data.model)
model.pred <- predict(data.model, newdata=test)
model.pred
d <- data.frame(test,model.pred)
I am not sure whether the %IncMSE results are good or bad. Can someone interpret this?
Additionally, I found that the predicted values for the test data are not a good prediction of the real data. How can I improve this?
Let's see. Let's start with %IncMSE:
I found this really good answer on cross validated about %IncMSE which I quote:
if a predictor is important in your current model, then assigning other values for that predictor randomly but 'realistically' (i.e.: permuting this predictor's values over your dataset), should have a negative influence on prediction, i.e.: using the same model to predict from data that is the same except for the one variable, should give worse predictions.

So, you take a predictive measure (MSE) with the original dataset and then with the 'permuted' dataset, and you compare them somehow. One way, particularly since we expect the original MSE to always be smaller, the difference can be taken. Finally, for making the values comparable over variables, these are scaled.
This means that in your case the most important variable is purchase: when purchase was permuted (i.e. the order of its values was randomly changed), the resulting MSE was about 12% higher than with the variable in its original order. Variable importance is just a measure of how important your predictor variables were in the model you used. In your case purchase was the most important and the P/E ratio was the least important (of those 6 variables). This is not something you can interpret as good or bad, because it doesn't tell you how well the model fits unseen data. I hope this is clear now.
For the cross-validation:
You do not need to do cross-validation during the training phase because something equivalent happens automatically: approximately 2/3 of the records are used to build each tree, and the 1/3 that is left out (the out-of-bag data) is used to assess that tree afterwards (the R-squared reported by randomForest is computed using the OOB data).
As for the improvement of the model:
By looking at just the 10 predicted and actual values of yield, you cannot make a safe decision on whether the model is good or bad. What you need is a test of fitness. The most common one is the R-squared. It is simplistic, but for comparing models and getting a first opinion about your model it does its job. It is calculated by the model for every tree you grow and can be accessed via data.model$rsq (see the short sketch after the list below). It ranges from 0 to 1, with 1 being a perfect model and 0 indicating a very poor fit (it can even take negative values, which indicates a bad fit). If your rsq is bad, then you can try the following to improve your model, although there is no guarantee that you will get the results you wish for:
Calibrate your trees differently: change the number of trees grown and prune the trees by specifying a larger nodesize. (Here you use the default of 500 trees and a nodesize of 5, which might overfit your model.)
Increase the number of variables if possible.
Choose a different model. There are cases where a random forest does not work well.
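A minimal sketch of these fitness checks, using the data.model, test, and model.pred objects from the question (with only 10 held-out rows, treat the test R-squared as a rough indication):
tail(data.model$rsq, 1)  # OOB pseudo R-squared of the full forest
tail(data.model$mse, 1)  # OOB mean squared error

# R-squared on the held-out test rows:
1 - sum((test$yield - model.pred)^2) / sum((test$yield - mean(test$yield))^2)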
I want to determine the marginal effects of each dependent variable in a probit regression as follows:
predict the (base) probability with the mean of each variable
for each variable, predict the change in probability compared to the base probability if the variable takes the value of mean + 1x standard deviation of the variable
In one of my regressions, I have a multiplicative variable, as follows:
my_probit <- glm(a ~ b + c + I(b*c), family = binomial(link = "probit"), data=data)
Two questions:
When I determine the marginal effects using the approach above, will the value of the multiplicative term reflect the value of b or c taking the value mean + 1x standard deviation of the variable?
Same question, but with an interaction term (* and no I()) instead of a multiplicative term.
Many thanks
When interpreting the results of models involving interaction terms, the general rule is DO NOT interpret coefficients. The very presence of interactions means that the meaning of coefficients for terms will vary depending on the other variate values being used for prediction. The right way to go about looking at the results is to construct a "prediction grid", i.e. a set of values that are spaced across the range of interest (hopefully within the domain of data support). The two essential functions for this process are expand.grid and predict.
dgrid <- expand.grid(b=fivenum(data$b)[2:4], c=fivenum(data$c)[2:4])
# A grid with the lower hinge, median, and upper hinge for `b` and `c`.
predict(my_probit, newdata=dgrid)
You may want to have the predictions on a scale other than the default (which is to return the linear predictor), so perhaps this would be easier to interpret if it were:
predict(my_probit, newdata=dgrid, type ="response")
Be sure to read ?predict and ?predict.glm and work with some simple examples to make sure you are getting what you intended.
Predictions from models containing interactions (at least those involving 2 covariates) should be thought of as being surfaces or 2-d manifolds in three dimensions. (And for 3-covariate interactions as being iso-value envelopes.) The reason that non-interaction models can be decomposed into separate term "effects" is that the slopes of the planar prediction surfaces remain constant across all levels of input. Such is not the case with interactions, especially those with multiplicative and non-linear model structures. The graphical tools and insights that one picks up in a differential equations course can be productively applied here.
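A hedged sketch of visualising such a surface over a finer grid, reusing the hypothetical data, b, c, and my_probit objects from above:
grid.fine <- expand.grid(b = seq(min(data$b), max(data$b), length.out = 50),
                         c = seq(min(data$c), max(data$c), length.out = 50))
grid.fine$p <- predict(my_probit, newdata = grid.fine, type = "response")

# With an interaction present, the contour lines of the fitted surface are not parallel:
contour(x = unique(grid.fine$b), y = unique(grid.fine$c),
        z = matrix(grid.fine$p, nrow = 50), xlab = "b", ylab = "c")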