I tried fitting a gradient boosted model (weak learners are max.depth = 2 trees) to the iris data set using gbm in the gbm package. I set the number of iterations to M = 1000 with a learning rate of learning.rate = 0.001. I then compared the results to those of a regression tree (using rpart). However, it seems that the regression tree is outperforming the gradient boosted model. What's the reason behind this? And how can I improve the gradient boosted model's performance? I thought a learning rate of 0.001 should suffice with 1000 iterations/boosted trees.
library(rpart)
library(gbm)
data(iris)
train.dat <- iris[1:100, ]
test.dat <- iris[101:150, ]
learning.rate <- 0.001
M <- 1000
gbm.model <- gbm(Sepal.Length ~ ., data = train.dat, distribution = "gaussian", n.trees = M,
interaction.depth = 2, shrinkage = learning.rate, bag.fraction = 1, train.fraction = 1)
yhats.gbm <- predict(gbm.model, newdata = test.dat, n.trees = M)
tree.mod <- rpart(Sepal.Length ~ ., data = train.dat)
yhats.tree <- predict(tree.mod, newdata = test.dat)
> sqrt(mean((test.dat$Sepal.Length - yhats.gbm)^2))
[1] 1.209446
> sqrt(mean((test.dat$Sepal.Length - yhats.tree)^2))
[1] 0.6345438
In the iris dataset there are 3 different species: the first 50 rows are setosa, the next 50 versicolor, and the last 50 virginica. So I think it's better if you mix the rows, and the Species column is clearly a relevant predictor, as the plot shows:
library(ggplot2)
ggplot(iris,aes(x=Sepal.Width,y=Sepal.Length,col=Species)) + geom_point()
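For the first point, a minimal sketch of what mixing the rows could look like (the seed is arbitrary, just for reproducibility):
set.seed(1)                              # arbitrary seed
shuffled <- iris[sample(nrow(iris)), ]   # interleave the three species
train.dat <- shuffled[1:100, ]           # now both sets contain all species
test.dat  <- shuffled[101:150, ]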
Secondly, you should repeat this over a few replicates to see the uncertainty. For this we can use caret: we define the training samples beforehand and provide a fixed tuning grid. What we are interested in is the cross-validation error during training, which is similar to what you are doing:
library(caret)
set.seed(999)
idx = split(sample(nrow(iris)), 1:nrow(iris) %% 3)
tr = trainControl(method = "cv", index = idx)
this_grid = data.frame(interaction.depth = 2, shrinkage = 0.001,
                       n.minobsinnode = 10, n.trees = 1000)
gbm_fit = train(Sepal.Width ~ ., data = iris, method = "gbm",
                distribution = "gaussian", tuneGrid = this_grid, trControl = tr)
Then we use the same samples to fit rpart:
#the default for rpart
this_grid = data.frame(cp=0.01)
rpart_fit = train(Sepal.Width ~ . ,data=iris,method="rpart",
trControl=tr,tuneGrid=this_grid)
Finally we compare them, and they are very similar:
gbm_fit$resample
       RMSE  Rsquared       MAE Resample
1 0.3459311 0.5000575 0.2585884        0
2 0.3421506 0.4536114 0.2631338        1
3 0.3428588 0.5600722 0.2693837        2
rpart_fit$resample
       RMSE  Rsquared       MAE Resample
1 0.3492542 0.3791232 0.2695451        0
2 0.3320841 0.4276960 0.2550386        1
3 0.3284239 0.4343378 0.2570833        2
So I suspect there is something odd in your example above. It always depends on your data: for some data sets, like iris, rpart might be good enough because there are very strong predictors. And for complex models like gbm, you will most likely need to tune the parameters with a setup like the one above to find the optimal values.
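For illustration, a hedged sketch of what a fuller tuning search could look like with caret, reusing the tr control object defined above (the parameter values below are illustrative, not taken from the answer):
tune_grid <- expand.grid(interaction.depth = c(1, 2, 3),
                         shrinkage = c(0.001, 0.01, 0.1),
                         n.minobsinnode = 10,
                         n.trees = c(100, 500, 1000))
gbm_tuned <- train(Sepal.Width ~ ., data = iris, method = "gbm",
                   distribution = "gaussian", tuneGrid = tune_grid,
                   trControl = tr, verbose = FALSE)
gbm_tuned$bestTune   # the parameter combination with the best cross-validated error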
Here is a simple reproducible example of my dilemma.
My goal is very simple: train a random forest model using 'ranger' and then run the training data through the trained model to double-check against the model's predicted values. The two sets of predicted values do not match. In fact, running the training data through the trained model reproduces the actual training target values.
Others have posted related questions to the forum, though I have yet to find a definitive answer.
This makes no sense to me. Exposing any trained model to the data from the training set should provide consistent solutions.
library(tidyverse)
library(ranger)
train <- tibble(target = ifelse( runif( 1000) > 0.5, 1, 0 ),
feature1 = runif( 1000),
feature2 = runif( 1000)
)
# Train the model
rf <- ranger(
target ~ .,
data = train,
classification = TRUE,
num.trees = 500,
seed = 123
)
# Obtain the in-sample model predictions
in_sample_predictions_2 <- rf$predictions
# Alternatively, run the train data through the trained model to ensure the results are the same as provided by 'in_sample_predictions_2'
in_sample_predictions_1 <- predict( rf, data = train )$predictions
# Check for equivalency fails, Sum of Squared differences should equal 0
sum( (in_sample_predictions_1 - in_sample_predictions_2 )^2 )
# [1] 506
# Yet, the incorrect predictions exactly equals the OOB error rate = 50.6% = 506/1000
# It turns out that the predicted values from the trained model using the train data are equivalent to the actual train 'target' data.
sum( (train$target - in_sample_predictions_1)^2 )
# [1] 0
I have a dataset that contains information about patients. It includes several variables and their clinical status (0 if they are healthy, 1 if they are sick).
I have tried to implement an SVM model to predict patient status based on these variables.
library(e1071)
library(ROCR)   # for prediction() and performance()
Index <-
order(Ytrain, decreasing = FALSE)
SVMfit_Var <-
svm(Xtrain[Index, ], Ytrain[Index],
type = "C-classification", gamma = 0.005, probability = TRUE, cost = 0.001, epsilon = 0.1)
preds1 <-
predict(SVMfit_Var, Xtest, probability = TRUE)
preds1 <-
attr(preds1, "probabilities")[,1]
samples <- !is.na(Ytest)
pred <- prediction(preds1[samples],Ytest[samples])
AUC <- performance(pred, "auc")@y.values[[1]]
prediction <- predict(SVMfit_Var, Xtest)
xtab <- table(Ytest, prediction)
To test the performance of the model, I have calculated the ROC AUC, and with the validation set I obtain an AUC = 0.997.
But when I view the predictions, all the patients have been assigned as healthy.
AUC = 0.997
> xtab
prediction
Ytest 0 1
0 72 0
1 52 0
Can anyone help me with this problem?
Did you look at the probabilities versus the fitted values? You can read about how probability works with SVM here.
If you want to look at the performance, you can use the DescTools library with the function Conf, or the caret library with the function confusionMatrix. (They provide the same output.)
library(DescTools)
library(caret)
# for the training performance with DescTools
Conf(table(SVMfit_Var$fitted, Ytrain[Index]))
# svm.model$fitted, y-values for training
# training performance with caret
confusionMatrix(SVMfit_Var$fitted, as.factor(Ytrain[Index]))
# svm.model$fitted, y-values
# if y.values aren't factors, use as.factor()
# for testing performance with DescTools
# with `table()` in your question, you must flip the order:
# predicted first, then actual values
Conf(table(prediction, Ytest))
# and for caret
confusionMatrix(prediction, as.factor(Ytest))
Your question isn't reproducible, so I worked through the same steps with the iris data. There, too, the predicted probability was the same for every observation. I've included it so you can see this behavior with another data set.
library(e1071)
library(ROCR)
library(caret)
library(dplyr)   # for %>% and filter()
data("iris")
# make it binary
df1 <- iris %>% filter(Species != "setosa") %>% droplevels()
# check the subset
summary(df1)
set.seed(395) # keep the sample repeatable
tr <- sample(1:nrow(df1), size = 70, # 70%
replace = F)
# create the model
svm.fit <- svm(df1[tr, -5], df1[tr, ]$Species,
type = "C-classification",
gamma = .005, probability = T,
cost = .001, epsilon = .1)
# look at probabilities
pb.fit <- predict(svm.fit, df1[-tr, -5], probability = T)
# this shows EVERY row has the same outcome probability distro
pb.fit <- attr(pb.fit, "probabilities")[,1]
# look at performance
performance(prediction(pb.fit, df1[-tr, ]$Species), "auc")@y.values[[1]]
# [1] 0.03555556 that's abysmal!!
# test the model
p.fit = predict(svm.fit, df1[-tr, -5])
confusionMatrix(p.fit, df1[-tr, ]$Species)
# 93% accuracy with NIR at 50%... the AUC score was not useful
# check the trained model performance
confusionMatrix(svm.fit$fitted, df1[tr, ]$Species)
# 87%, with NIR at 50%... that's really good
I am trying to write my own gradient boosting algorithm. I understand there are existing packages like gbm and xgboost, but I wanted to understand how the algorithm works by writing my own.
I am using the iris data set, and my outcome is Sepal.Length (continuous). My loss function is mean(1/2*(y-yhat)^2) (basically the mean squared error with 1/2 in front), so my corresponding gradient is just the residual y - yhat. I'm initializing the predictions at 0.
library(rpart)
data(iris)
#Define gradient
grad.fun <- function(y, yhat) {return(y - yhat)}
grad_boost <- function(data, learning.rate, M, grad.fun) {
  # Initialize fit to be 0
  fit <- rep(0, nrow(data))
  grad <- grad.fun(y = data$Sepal.Length, yhat = fit)
  # Initialize model list; element 1 holds the initial estimates
  mod <- list()
  mod[[1]] <- fit
  # Loop over a total of M iterations
  for(i in 1:M){
    # Fit base learner (depth-2 tree) to the current gradient
    tmp <- data$Sepal.Length
    data$Sepal.Length <- grad
    base_learner <- rpart(Sepal.Length ~ ., data = data, control = rpart.control(maxdepth = 2))
    data$Sepal.Length <- tmp
    # Update fitted values with the shrunken predictions of the new tree
    fit <- fit + learning.rate * as.vector(predict(base_learner, newdata = data))
    # Update gradient
    grad <- grad.fun(y = data$Sepal.Length, yhat = fit)
    # Store current model (index is i + 1 because element 1 contains the initialized estimates)
    mod[[i + 1]] <- base_learner
  }
  return(mod)
}
With this, I split up the iris data set into a training and testing data set and fit my model to it.
train.dat <- iris[1:100, ]
test.dat <- iris[101:150, ]
learning.rate <- 0.001
M <- 1000
my.model <- grad_boost(data = train.dat, learning.rate = learning.rate, M = M, grad.fun = grad.fun)
Now I calculate the predicted values from my.model. For my.model, the fitted values are 0 (vector of initial estimates) + learning.rate * predictions from tree 1 + learning rate * predictions from tree 2 + ... + learning.rate * predictions from tree M.
yhats.mymod <- apply(sapply(2:length(my.model), function(x) learning.rate * predict(my.model[[x]], newdata = test.dat)), 1, sum)
# Calculate RMSE
> sqrt(mean((test.dat$Sepal.Length - yhats.mymod)^2))
[1] 2.612972
I have a few questions
Does my gradient boosting algorithm look right?
Did I calculate the predicted values yhats.mymod correctly?
Yes, this looks correct. At each step you are fitting to the pseudo-residuals, which are computed as the negative derivative of the loss with respect to the current fit. You have correctly derived this gradient at the start of your question, and even bothered to get the factor of 2 right.
This also looks correct. You are aggregating across the models, weighted by learning rate, just as you did during training.
But to address something that was not asked, I noticed that your training setup has a few quirks.
The iris dataset is split equally between 3 species (setosa, versicolor, virginica), and these appear in contiguous blocks in the data. Your training data has all of the setosa and versicolor examples, while the test set has all of the virginica examples. There is no overlap in species, which will lead to out-of-sample problems. It is preferable to balance your training and test sets to avoid this.
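One hedged way to do that with base R (the seed and the roughly 2/3 split are arbitrary choices for illustration):
set.seed(42)   # arbitrary seed
idx <- unlist(lapply(split(seq_len(nrow(iris)), iris$Species),
                     function(i) sample(i, 33)))   # ~2/3 of each species
train.dat <- iris[idx, ]    # every species represented
test.dat  <- iris[-idx, ]   # every species represented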
The combination of learning rate and model count looks too low to me. The unexplained fraction of the target shrinks as (1 - lr)^n, so with lr = 1e-3 and n = 1000 you can only model 1 - (1 - 0.001)^1000, about 63.2%, of the target's magnitude. That is, even if every tree predicted every sample perfectly, you would still only be estimating 63.2% of the correct value. Initializing the fit at the training mean, instead of at 0, would help, since then the effect is a regression to the mean instead of just a drag.
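To make that arithmetic concrete, and to sketch the suggested change (hedged, not tested against your code):
lr <- 0.001; n <- 1000
1 - (1 - lr)^n   # ~0.632: the largest fraction of the target magnitude reachable
# Hedged tweak to grad_boost(): initialize at the training mean instead of 0,
#   fit <- rep(mean(data$Sepal.Length), nrow(data))
# and remember to add that same mean back when aggregating the test predictions.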
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), data = sleepstudy)
To generate a 95% CI, I can use the predictInterval() function from the package merTools.
library(merTools)
head(predictInterval(fm1, level = 0.95, seed = 123, n.sims = 100))
# fit upr lwr
# 1 255.4179 313.8781 184.1400
# 2 273.2944 333.2005 231.3584
# 3 291.8451 342.8701 240.8226
# 4 311.3562 359.2908 250.4980
# 5 330.3671 384.2520 270.7094
# 6 353.4378 409.9307 289.4760
In the documentation, it says about the predictInterval() function
This function provides a way to capture model uncertainty in predictions from multi-level models
fit with lme4. By drawing a sampling distribution for the random and the fixed effects and then
estimating the fitted value across that distribution, it is possible to generate a prediction interval for
fitted values that includes all variation in the model except for variation in the covariance parameters,
theta. This is a much faster alternative than bootstrapping for models fit to medium to large datasets.
My goal is to get all the fitted values rather than just the upper and lower CI, i.e. for each row I need the original n simulations from which these 95% CIs are calculated. I checked the arguments in the documentation and followed this:
head(predictInterval(fm1, n.sims = 100, returnSims = TRUE, seed = 123, level = 0.95))
# fit upr lwr
# 1 255.4179 313.8781 184.1400
# 2 273.2944 333.2005 231.3584
# 3 291.8451 342.8701 240.8226
# 4 311.3562 359.2908 250.4980
# 5 330.3671 384.2520 270.7094
# 6 353.4378 409.9307 289.4760
Instead of getting the 100 simulations, it still gives me the same output. What am I doing wrong here?
A second question, though I believe this is more of a StatsExchange one. The documentation says:
"By drawing a sampling distribution for the random and the fixed effects and then
estimating the fitted value across that distribution..."
How does it draw this sampling distribution? Could someone explain that to me?
You can get simulated values if you specify newdata in the predictInterval() function.
predInt <- predictInterval(fm1, newdata = sleepstudy, n.sims = 100,
returnSims = TRUE, seed = 123, level = 0.95)
simValues <- attr(predInt, "sim.results")
Details on how the sampling distributions of the parameters are created are given in the Details section of the help page. You can get the estimates of the fit and of the lower and upper boundaries as:
fit <- apply(simValues, 1, function(x){quantile(x, probs=0.500) } )
lwr <- apply(simValues, 1, function(x){quantile(x, probs=0.025) } )
upr <- apply(simValues, 1, function(x){quantile(x, probs=0.975) } )
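As a quick sanity check (hedged; the dimensions below assume the 180-row sleepstudy data and n.sims = 100 used above):
dim(simValues)
# expected: 180 rows (one per observation) x 100 columns (one per simulation)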
I'm building a churn model using the C5.0 algorithm in R. After fitting the model and successfully predicting on the data, how do I find the top 3 most important predictors for each customer that is predicted to churn? That way I would know why the model classifies, for example, customers A, B, D, and F as positive and the others as negative. Is it possible?
Thanks.
Many models have built-in approaches for measuring the aggregate effect of the predictors on the model. The caret package contains a general function, varImp, for calculating or returning these values for many model classes, including C5.0, JRip, PART, RRF, RandomForest, bagEarth, classbagg, cubist, dsa, earth, fda, gam, gbm, glm, glmnet, lm, multinom, mvr, nnet, pamrtrained, plsda, randomForest, regbagg, rfe, rpart, sbf, and train.
For example,
> library(caret)
> set.seed(1401)
> ctrl <- trainControl(method = 'repeatedcv', number = 6, repeats = 2, classProbs = TRUE)
> C5fit <- train(x = iris[, 1:4], y = iris$Species, method = "C5.0", metric = "ROC", trControl = ctrl)
> varImp(C5fit, scale = FALSE)
C5.0 variable importance
Overall
Petal.Width 100
Sepal.Width 0
Petal.Length 0
Sepal.Length 0
You can also plot the trees within the model. If you use a single C5.0 tree, this gives you an easy way to show the exact reasoning behind each classification.
library(C50)
set.seed(1401)
C5tree <- C5.0(x = iris[, 1:4], y = iris$Species, trials = 1) # A single C50 tree
C5imp(C5tree)
plot(C5tree, trial = 0)
If you use boosting (i.e. trials > 1 when you train the trees), then this approach is likely too complicated due to the number of trees.
C5boosted <- C5.0(x = iris[, 1:4], y = iris$Species, trials = 3) # Boost three trees
C5imp(C5boosted)
# Plot each of the trees
for(i in 0:2){ # trials starts counting at 0, see ?plot.C5.0
plot(C5boosted, trial = i)
}
Instead, you can rely on the variable importance for a general report of important variables or use partial dependence plots that show the (non-linear) effect of one variable relative to all other variables. I suggest having a look at package pdp on CRAN.
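For instance, a hedged sketch with pdp (the predictor, class, and settings below are illustrative; pdp::partial() accepts caret train objects such as C5fit from above):
library(pdp)
# Partial dependence of the predicted probability of class "virginica" on Petal.Width
pd <- partial(C5fit, pred.var = "Petal.Width", which.class = "virginica",
              prob = TRUE, train = iris[, 1:4])
plotPartial(pd)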