Resampling based performance measure in caret - r

I perform a penalized logistic regression and I train a model with caret (glmnet).
model_fit <- train(Data[,-1], Data[,1],
                   method = "glmnet",
                   family = "binomial",
                   metric = "ROC",
                   maximize = TRUE,
                   trControl = ctrl,
                   preProc = c("center", "scale"),
                   tuneGrid = expand.grid(.alpha = 0.5, .lambda = lambdaSeq))
According to the caret documentation, the function train "[...] calculates a resampling based performance measure" and "Across each data set, the performance of held-out samples is calculated and the mean and standard deviation is summarized for each combination."
results is "A data frame" (containing) "the training error rate and values of the tuning parameters."
Is model_fit$results$ROC a vector (with size equal to the size of my tuning parameter lambda) of the mean of the performance measure across resampling? (And NOT the performance measure computed over the whole sample after re-estimating the model over the whole sample for each value of lambda?)

Is model_fit$results$ROC a vector (with size equal to the size of my tuning parameter lambda) of the mean of the performance measure across resampling?
It is; to be precise, the length will be equal to the number of rows of your tuneGrid, which here happens to coincide with the length of your lambdaSeq (since the only other parameter, alpha, is held constant).
Here is a quick example, adapted from the caret docs (it is with gbm and Accuracy metric, but the idea is the same):
library(caret)
library(mlbench)
data(Sonar)
set.seed(998)
inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE)
training <- Sonar[ inTraining,]
testing <- Sonar[-inTraining,]
fitControl <- trainControl(method = "cv",
number = 5)
set.seed(825)
gbmGrid <- expand.grid(interaction.depth = 3,
                       n.trees = (1:3)*50,
                       shrinkage = 0.1,
                       n.minobsinnode = 20)
gbmFit1 <- train(Class ~ ., data = training,
                 method = "gbm",
                 trControl = fitControl,
                 tuneGrid = gbmGrid,
                 ## This last option is actually one
                 ## for gbm() that passes through
                 verbose = FALSE)
Here, gbmGrid has 3 rows, i.e. it consists of only three (3) different values of n.trees with the other parameters held constant; hence, the corresponding gbmFit1$results$Accuracy will be a vector of length 3:
gbmGrid
# interaction.depth n.trees shrinkage n.minobsinnode
# 1 3 50 0.1 20
# 2 3 100 0.1 20
# 3 3 150 0.1 20
gbmFit1$results
# shrinkage interaction.depth n.minobsinnode n.trees Accuracy Kappa AccuracySD KappaSD
# 1 0.1 3 20 50 0.7450672 0.4862194 0.05960941 0.1160537
# 2 0.1 3 20 100 0.7829704 0.5623801 0.05364031 0.1085451
# 3 0.1 3 20 150 0.7765188 0.5498957 0.05263735 0.1061387
gbmFit1$results$Accuracy
# [1] 0.7450672 0.7829704 0.7765188
Each of the 3 Accuracy values returned is the value of the metric on the held-out folds of the 5-fold cross-validation we used as the resampling technique; more precisely, it is the mean of the validation accuracies computed on these 5 folds (and you can see that there is also an AccuracySD column, containing their standard deviation).
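If you want to verify this averaging yourself, the per-fold values of the selected model are kept in the resample element of the train object. A quick sketch (with the default returnResamp = "final", only the folds of the winning n.trees value are stored):
# Sketch: gbmFit1$resample holds the per-fold accuracies of the selected model
# (n.trees = 100 above), so their mean and SD should reproduce that row of gbmFit1$results
gbmFit1$resample$Accuracy        # 5 values, one per CV fold
mean(gbmFit1$resample$Accuracy)  # should match 0.7829704
sd(gbmFit1$resample$Accuracy)    # should match the AccuracySD column (0.05364031)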
And NOT the performance measure computed over the whole sample after re-estimating the model over the whole sample for each value of lambda?
Correct, it is not that.
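For the glmnet model in the question, the same objects apply. A rough sketch, reusing model_fit from above and assuming ctrl was created with classProbs = TRUE and summaryFunction = twoClassSummary (not shown in the question):
model_fit$results[, c("alpha", "lambda", "ROC", "ROCSD")]  # mean and SD across resamples, one row per lambda
getTrainPerf(model_fit)  # resampled performance of the selected lambda only
# An "apparent" ROC recomputed on the whole training set would instead have to be
# built from predict(model_fit, Data[,-1], type = "prob"); caret does not put that in $results.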

Related

Caret classification thresholds

I have been using a gbm in the caret package in RStudio to find the probability for the occurrence of a failure.
I have used Youden's J to find a threshold for the best classification, which is 0.63. How do I now use this threshold? I presume the best way to do this is to somehow incorporate the threshold into the gbm model in caret to get more accurate predictions, and then rerun the model on the training data? Currently it defaults to 0.5 and I can't find an obvious way to update the threshold.
Alternatively, is the threshold just used to separate the test data predictions into the correct class? This seems more straightforward, but how then do I reflect the change in the ROC_AUC plot, assuming the probability should be updated based on the new threshold?
Any help would be gratefully received. Thanks
EDIT: The full code I am working on is as follows:
library(datasets)
library(caret)
library(MLeval)
library(dplyr)
data(iris)
data <- as.data.frame(iris)
# create class
data$class <- ifelse(data$Species == "setosa", "yes", "no")
# split into train and test
train <- data %>% sample_frac(.70)
test <- data %>% sample_frac(.30)
# Set up control function for training
ctrl <- trainControl(method = "cv",
number = 5,
returnResamp = 'none',
summaryFunction = twoClassSummary,
classProbs = T,
savePredictions = T,
verboseIter = F)
# Set up training grid - this is based on a hyper-parameter tune that was recently done
gbmGrid <- expand.grid(interaction.depth = 10,
                       n.trees = 20000,
                       shrinkage = 0.01,
                       n.minobsinnode = 4)
# Build a standard classifier using a gradient boosted machine
set.seed(5627)
gbm_iris <- train(class ~ .,
                  data = train,
                  method = "gbm",
                  metric = "ROC",
                  tuneGrid = gbmGrid,
                  verbose = FALSE,
                  trControl = ctrl)
# Calculate best thresholds
caret::thresholder(gbm_iris, threshold = seq(.01,0.99, by = 0.01), final = TRUE, statistics = "all")
pred <- predict(gbm_iris, newdata = test, type = "prob")
roc <- evalm(data.frame(pred, test$class))
There are several problems in your code. I will use the PimaIndiansDiabetes data set from mlbench since it is better suited than the iris data set.
First of all, for splitting the data into train and test sets, the code:
train <- data %>% sample_frac(.70)
test <- data %>% sample_frac(.30)
is not suitable, since some rows occurring in the train set will also occur in the test set (each sample_frac call draws independently from the full data).
Additionally, avoid using function names as object names; it will save you a lot of headaches in the long run.
data(iris)
data <- as.data.frame(iris) #bad object name
On to the example:
library(caret)
library(ModelMetrics)
library(dplyr)
library(mlbench)
data(PimaIndiansDiabetes, package = "mlbench")
Create train and test sets. You may use base R's sample to sample rows, or caret::createDataPartition; createDataPartition is preferable since it tries to preserve the distribution of the response.
set.seed(123)
ind <- createDataPartition(PimaIndiansDiabetes$diabetes, 0.7)
tr <- PimaIndiansDiabetes[ind$Resample1,]
ts <- PimaIndiansDiabetes[-ind$Resample1,]
This way no rows in the train set will be in the test set.
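A quick sanity check of the split (optional sketch):
length(intersect(rownames(tr), rownames(ts)))  # 0: no row appears in both sets
nrow(tr) + nrow(ts) == nrow(PimaIndiansDiabetes)  # TRUE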
Let's create the model:
ctrl <- trainControl(method = "cv",
number = 5,
returnResamp = 'none',
summaryFunction = twoClassSummary,
classProbs = T,
savePredictions = T,
verboseIter = F)
gbmGrid <- expand.grid(interaction.depth = 10,
                       n.trees = 200,
                       shrinkage = 0.01,
                       n.minobsinnode = 4)
set.seed(5627)
gbm_pima <- train(diabetes ~ .,
                  data = tr,
                  method = "gbm", #use xgboost
                  metric = "ROC",
                  tuneGrid = gbmGrid,
                  verbose = FALSE,
                  trControl = ctrl)
Create a vector of probabilities for thresholder:
probs <- seq(.1, 0.9, by = 0.02)
ths <- thresholder(gbm_pima,
                   threshold = probs,
                   final = TRUE,
                   statistics = "all")
head(ths)
n.trees interaction.depth shrinkage n.minobsinnode prob_threshold Sensitivity Specificity Pos Pred Value Neg Pred Value Precision Recall F1 Prevalence Detection Rate Detection Prevalence
1 200 10 0.01 4 0.10 1.000 0.02222222 0.6562315 1.0000000 0.6562315 1.000 0.7924209 0.6510595 0.6510595 0.9922078
2 200 10 0.01 4 0.12 1.000 0.05213675 0.6633439 1.0000000 0.6633439 1.000 0.7975413 0.6510595 0.6510595 0.9817840
3 200 10 0.01 4 0.14 0.992 0.05954416 0.6633932 0.8666667 0.6633932 0.992 0.7949393 0.6510595 0.6458647 0.9739918
4 200 10 0.01 4 0.16 0.984 0.07435897 0.6654277 0.7936508 0.6654277 0.984 0.7936383 0.6510595 0.6406699 0.9636022
5 200 10 0.01 4 0.18 0.984 0.14188034 0.6821550 0.8750000 0.6821550 0.984 0.8053941 0.6510595 0.6406699 0.9401230
6 200 10 0.01 4 0.20 0.980 0.17179487 0.6886786 0.8833333 0.6886786 0.980 0.8086204 0.6510595 0.6380725 0.9271018
Balanced Accuracy Accuracy Kappa J Dist
1 0.5111111 0.6588517 0.02833828 0.02222222 0.9777778
2 0.5260684 0.6692755 0.06586592 0.05213675 0.9478632
3 0.5257721 0.6666781 0.06435166 0.05154416 0.9406357
4 0.5291795 0.6666781 0.07134190 0.05835897 0.9260250
5 0.5629402 0.6901572 0.15350721 0.12588034 0.8585308
6 0.5758974 0.6979836 0.18460584 0.15179487 0.8288729
Extract the threshold probability based on your preferred metric (here Youden's J):
ths %>%
  mutate(prob = probs) %>%
  filter(J == max(J)) %>%
  pull(prob) -> thresh_prob
thresh_prob
0.74
Predict on the test data:
pred <- predict(gbm_pima, newdata = ts, type = "prob")
Create a numeric response (0 or 1) from the response in the test set, since this is what the functions from the ModelMetrics package expect:
real <- as.numeric(factor(ts$diabetes))-1
ModelMetrics::sensitivity(real, pred$pos, cutoff = thresh_prob)
0.2238806 #based on this it is clear the threshold chosen is not optimal on this test data
ModelMetrics::specificity(real, pred$pos, cutoff = thresh_prob)
0.956
ModelMetrics::kappa(real, pred$pos, cutoff = thresh_prob)
0.2144026 #based on this it is clear the threshold chosen is not optimal on this test data
ModelMetrics::mcc(real, pred$pos, cutoff = thresh_prob)
0.2776309 #based on this it is clear the threshold chosen is not optimal on this test data
ModelMetrics::auc(real, pred$pos)
0.8047463 #decent AUC and low mcc and kappa indicate a poor choice of threshold
AUC is a measure over all thresholds, so it does not require specifying a cutoff.
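For completeness, the chosen cutoff can also be turned into hard class labels and summarized with caret::confusionMatrix. A sketch, assuming the factor levels of diabetes are "neg" and "pos" as in PimaIndiansDiabetes:
pred_class <- factor(ifelse(pred$pos > thresh_prob, "pos", "neg"),
                     levels = levels(ts$diabetes))
caret::confusionMatrix(pred_class, ts$diabetes, positive = "pos")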
Since only one train/test split is used, the performance evaluation will be biased. It is best to use nested resampling, so the same quantities can be evaluated over several train/test splits (threshold selection would then happen inside each outer resampling loop).
EDIT: Answer to the questions in comments.
To create the ROC curve you do not need to calculate sensitivity and specificity at all thresholds yourself; you can just use a dedicated package for that task. The results are probably also going to be more trustworthy.
I prefer using the pROC package:
library(pROC)
roc.obj <- roc(real, pred$pos)
plot(roc.obj, print.thres = "best")
The best threshold on the figure is the threshold that gives the highest specificity + sensitivity on the test data. It is clear that this threshold (0.289) is much lower than the threshold obtained from the cross-validated predictions (0.74). This is the reason I said there will be considerable optimistic bias if you adjust the threshold on the cross-validated predictions and use the performance obtained that way as an indicator of the threshold's success.
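If you want that "best" point programmatically rather than reading it off the plot, pROC::coords can return it (a sketch using the roc.obj created above):
coords(roc.obj, "best", ret = c("threshold", "specificity", "sensitivity"))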
In the above example not tuning the threshold would have resulted in better performance on the test set. This might hold true in general for the Pima Indians data set or this might be a case of an unfortunate train/test split. So it is best to validate this sort of thing using nested resampling.

How to set a ppv in caret for random forest in r?

So I'm interested in creating a model that optimizes PPV. I've created an RF model (below) that outputs a confusion matrix, from which I then manually calculate sensitivity, specificity, PPV, NPV, and F1. I know right now accuracy is optimized, but I'm willing to forgo sensitivity and specificity to get a much higher PPV.
data_ctrl_null <- trainControl(method = "cv", number = 5, classProbs = TRUE,
                               summaryFunction = twoClassSummary,
                               savePredictions = TRUE, sampling = NULL)
set.seed(5368)
model_htn_df <- train(outcome ~ ., data = htn_df, ntree = 1000,
                      tuneGrid = data.frame(mtry = 38),
                      trControl = data_ctrl_null, method = "rf",
                      preProc = c("center", "scale"), metric = "ROC",
                      importance = TRUE)
model_htn_df$finalModel #provides confusion matrix
Results:
Call:
randomForest(x = x, y = y, ntree = 1000, mtry = param$mtry, importance = TRUE)
Type of random forest: classification
Number of trees: 1000
No. of variables tried at each split: 38
OOB estimate of error rate: 16.2%
Confusion matrix:
no yes class.error
no 274 19 0.06484642
yes 45 57 0.44117647
My manual calculation: sen = 55.9%, spec = 93.5%, ppv = 75.0%, npv = 85.9%. (The confusion matrix switches my no and yes as outcomes, so I also switch the numbers when I calculate the performance metrics.)
So what do I need to do to get a PPV = 90%?
This is a similar question, but I'm not really following it.
We define a function to calculate PPV and return the results with a name:
PPV <- function(data, lev = NULL, model = NULL) {
  value <- posPredValue(data$pred, data$obs, positive = lev[1])
  c(PPV = value)
}
Let's say we have the following data:
library(randomForest)
library(caret)
data=iris
data$Species = ifelse(data$Species == "versicolor","versi","others")
trn = sample(nrow(iris),100)
Then we train by specifying PPV to be the metric:
mdl <- train(Species ~ ., data = data[trn,],
             method = "rf",
             metric = "PPV",
             trControl = trainControl(summaryFunction = PPV,
                                      classProbs = TRUE))
Random Forest
100 samples
4 predictor
2 classes: 'others', 'versi'
No pre-processing
Resampling: Bootstrapped (25 reps)
Summary of sample sizes: 100, 100, 100, 100, 100, 100, ...
Resampling results across tuning parameters:
mtry PPV
2 0.9682811
3 0.9681759
4 0.9648426
PPV was used to select the optimal model using the largest value.
The final value used for the model was mtry = 2.
Now you can see it is trained on PPV. However, you cannot force the training to achieve a PPV of 0.9; it really depends on the data. If your independent variables have no predictive power, the PPV will not improve no matter how much you train.
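If you also want to keep an eye on other statistics while caret selects on PPV, the summary function can return several named values; caret will still optimize only the one passed as metric. A sketch using caret's built-in posPredValue and sensitivity helpers:
PPVplus <- function(data, lev = NULL, model = NULL) {
  c(PPV  = posPredValue(data$pred, data$obs, positive = lev[1]),
    Sens = sensitivity(data$pred, data$obs, positive = lev[1]))
}
# drop-in replacement: trainControl(summaryFunction = PPVplus, classProbs = TRUE)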

R RF unbalanced classes low negative predicted value on unseen data compared to train

I have built a Random Forest model for predicting whether a customer is performing operations related to fraud or not. It is a large and quite unbalanced sample, with 3% fraud cases, and I want to predict the minority class (fraud).
I balance the data (50% each) and build the RF. So far, I have a good model with an overall accuracy of ~80% and more than 70% of fraud predicted correctly. But when I try the model on unseen data (test), although the overall accuracy is good, the negative predicted value (fraud) is really low compared to the training data (only 13% vs. 70%+).
I have tried increasing the sample size, changing the balance of the categories, tuning RF parameters, ..., but none of it has worked well; the results stay similar. Am I overfitting somehow? What can I do to improve fraud detection (negative predicted value) on unseen data?
Here is the code and results:
set.seed(1234)
#train and test sets
model <- sample(nrow(dataset), 0.7 * nrow(dataset))
train <- dataset[model, ]
test <- dataset[-model, ]
#Balance the data
balanced <- ovun.sample(custom21_type ~ ., data = train, method = "over",
                        p = 0.5, seed = 1)$data
table(balanced$custom21_type)
0 1
5813 5861
#build the RF
rf5 <- randomForest(custom21_type ~ ., ntree = 100, data = balanced,
                    importance = TRUE, mtry = 3, keep.inbag = TRUE)
rf5
Call:
randomForest(formula = custom21_type ~ ., data = balanced, ntree = 100, importance = TRUE, mtry = 3, keep.inbag = TRUE)
Type of random forest: classification
Number of trees: 100
No. of variables tried at each split: 3
OOB estimate of error rate: 21.47%
Confusion matrix:
0 1 class.error
0 4713 1100 0.1892310
1 1406 4455 0.2398908
#test on unseen data
predicted <- predict(rf5, newdata=test)
confusionMatrix(predicted,test$custom21_type)
Confusion Matrix and Statistics
Reference
Prediction 0 1
0 59722 559
1 13188 1938
Accuracy : 0.8177
95% CI : (0.8149, 0.8204)
No Information Rate : 0.9669
P-Value [Acc > NIR] : 1
Kappa : 0.1729
Mcnemar's Test P-Value : <2e-16
Sensitivity : 0.8191
Specificity : 0.7761
Pos Pred Value : 0.9907
Neg Pred Value : 0.1281
Prevalence : 0.9669
Detection Rate : 0.7920
Detection Prevalence : 0.7994
Balanced Accuracy : 0.7976
'Positive' Class : 0
First, I notice that you are not using any cross-validation. Including it will add variation in the data used for training and help reduce overfitting. Additionally, we are going to use C5.0 in place of randomForest because it is more robust and penalizes type 1 errors more heavily.
One thing you may consider is actually not using a 50-50 balance in the train data, but making it more like 80-20, so that the minority class is not oversampled so heavily. I suspect the heavy oversampling is leading to overfitting and to your model's failure to classify novel examples as fraud.
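A sketch of what that milder re-balancing could look like, reusing ROSE::ovun.sample from the question (p = 0.2 here is an assumed value for an ~80/20 split):
library(ROSE)
balanced <- ovun.sample(custom21_type ~ ., data = train, method = "over",
                        p = 0.2, seed = 1)$data
table(balanced$custom21_type)  # minority class should now be roughly 20%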
RUN THIS AFTER YOU CREATE THE RE-BALANCED DATA (p=.2)
library(caret)
#set up you cross validation
Control <- trainControl(
  summaryFunction = twoClassSummary, #reports model scores, not a confusion matrix
  classProbs = TRUE,                 #important for the summaryFunction
  verboseIter = TRUE,                #prints training progress
  savePredictions = TRUE,
  method = "repeatedcv",             #repeated cross validation, 10 folds, 3 times
  repeats = 3,
  number = 10,
  allowParallel = TRUE
)
Now, I read in the comments that all your variables are categorical. This is well suited to Naive Bayes algorithms. However, if you have any numerical data you will need to preprocess it (scale, normalize, and impute NAs), as is standard procedure. We are also going to implement a grid-search process.
IF YOUR DATA IS ALL CATEGORICAL
model_nb <- train(
  x = balanced[, -which(colnames(balanced) %in% "custom21_type")],
  y = balanced$custom21_type,
  metric = "ROC",
  method = "nb",
  trControl = Control,
  tuneGrid = data.frame(fL = c(0, 0.5, 1.0),
                        usekernel = TRUE,
                        adjust = c(0, 0.5, 1.0)))
IF YOU WOULD LIKE A RF APPROACH (make sure to preprocess if data is numeric)
model_C5 <- train(
  x = balanced[, -which(colnames(balanced) %in% "custom21_type")],
  y = balanced$custom21_type,
  metric = "ROC",
  method = "C5.0",
  trControl = Control,
  tuneGrid = expand.grid(.model = "tree",
                         .trials = c(1, 5, 10),
                         .winnow = FALSE))
Now we predict:
C5_predict <- predict(model_C5, test, type = "raw")
NB_predict <- predict(model_nb, test, type = "raw")
confusionMatrix(C5_predict, test$custom21_type)
confusionMatrix(NB_predict, test$custom21_type)
EDIT:
Try adjusting the cost matrix below. What this one does is penalize type 2 errors twice as heavily as type 1 errors.
cost_mat <- matrix(c(0, 2, 1, 0), nrow = 2)
rownames(cost_mat) <- colnames(cost_mat) <- c("bad", "good")
cost_mod <- C5.0(x = balanced[, -which(colnames(balanced) %in% "custom21_type")],
                 y = balanced$custom21_type,
                 costs = cost_mat)
summary(cost_mod)
EDIT 2:
predicted <- predict(rf5, newdata=test, type="prob")
will give you the actual probabilities for each prediction. The default cutoff is 0.5, i.e. everything with a probability of class 0 above 0.5 gets classified as 0 and everything below as 1. You can adjust this cutoff to help with the unbalanced classes, for example:
ifelse(predicted[, 1] < 0.4, "1", "0")
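A slightly fuller sketch of that idea, turning the custom 0.4 cutoff into class labels and evaluating them (it assumes the factor levels of custom21_type are "0" and "1", and that column "0" of the probability matrix is the probability of class 0, as in the output above):
pred_prob  <- predict(rf5, newdata = test, type = "prob")
pred_class <- factor(ifelse(pred_prob[, "0"] < 0.4, "1", "0"),
                     levels = levels(test$custom21_type))
confusionMatrix(pred_class, test$custom21_type, positive = "1")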

How to do recursive feature elimination with logistic regression?

Can someone provide me a detailed example of using caret's rfe function with the glm or glmnet model? I tried something like this:
rfe_records <- Example_data_frame
rfe_ctrl <- rfeControl(functions = caretFuncs, method = "repeatedcv", repeats = 5,
                       verbose = TRUE, classProbs = TRUE,
                       summaryFunction = twoClassSummary)
number_predictors <- dim(rfe_records)[2]-1
x <- dplyr::select(rfe_records, -outcomeVariable)
y <- as.numeric(rfe_records$outcomeVariable)
glmProfile <- rfe(x, y, rfeControl = rfe_ctrl, sizes = c(1:number_predictors),
                  method = "glmnet", preProc = c("center", "scale"),
                  metric = "Accuracy")
print(glmProfile)
But the results I'm getting are not what I needed. I specified Accuracy as the metric but I got:
Recursive feature selection
Outer resampling method: Cross-Validated (10 fold, repeated 5 times)
Resampling performance over subset size:
Variables RMSE Rsquared RMSESD RsquaredSD Selected
1 0.5047 0.10830 0.04056 0.11869 *
2 0.5058 0.09386 0.04728 0.11332
3 0.5117 0.08565 0.04999 0.10211
4 0.5139 0.07490 0.05042 0.10048
5 0.5166 0.07678 0.05456 0.09966
6 0.5202 0.08203 0.06174 0.10822
7 0.5187 0.08471 0.06207 0.10893
8 0.5168 0.07850 0.05939 0.09697
9 0.5175 0.08228 0.05966 0.10068
10 0.5176 0.08180 0.05980 0.10042
11 0.5179 0.08015 0.05950 0.09905
The top 1 variables (out of 1):
varName
According to this page, caret uses the class of the outcome variable when it decides whether to do regression or classification with a function like glmnet that can do either. In your code, you made the outcome variable numeric with as.numeric(), so glmnet did regression, not classification as you intended. Specify your outcome variable as a two-level factor to get classification instead.
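A minimal sketch of that fix, reusing the names from the question (the level names "No"/"Yes" are assumptions; use whatever your outcome actually contains):
y <- factor(rfe_records$outcomeVariable, levels = c("No", "Yes"))  # two-level factor, not numeric
glmProfile <- rfe(x, y, rfeControl = rfe_ctrl, sizes = c(1:number_predictors),
                  method = "glmnet", preProc = c("center", "scale"),
                  metric = "Accuracy")
print(glmProfile)  # should now report classification metrics instead of RMSE/Rsquared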

Sensitivity too low whereas AUC very high in caret train cross-validation resampling results

How should I interpret sensitivity being too low whereas AUC is very high in the caret train cross-validation resampling results on the data I have trained?
Is the model performance bad?
It usually occurs when there is a class imbalance and the default 50% probability cutoff produces poor predictions, but the class probabilities, while poorly calibrated, still do well at separating the classes.
Here is an example:
library(caret)
set.seed(1)
dat <- twoClassSim(500, intercept = 10)
set.seed(2)
mod <- train(Class ~ ., data = dat, method = "svmRadial",
             tuneLength = 10,
             preProc = c("center", "scale"),
             metric = "ROC",
             trControl = trainControl(search = "random",
                                      classProbs = TRUE,
                                      summaryFunction = twoClassSummary))
The results are
> mod
Support Vector Machines with Radial Basis Function Kernel
500 samples
15 predictor
2 classes: 'Class1', 'Class2'
Pre-processing: centered (15), scaled (15)
Resampling: Bootstrapped (25 reps)
Summary of sample sizes: 500, 500, 500, 500, 500, 500, ...
Resampling results across tuning parameters:
sigma C ROC Sens Spec
0.01124608 21.27349102 0.9615725 0.33389177 0.9910125
0.01330079 419.19384543 0.9579240 0.34620779 0.9914320
0.01942163 85.16782989 0.9535367 0.33211255 0.9920583
0.02168484 632.31603140 0.9516538 0.33065224 0.9911863
0.02395674 89.03035078 0.9497636 0.32504906 0.9909382
0.03988581 3.58620979 0.9392330 0.25279365 0.9920611
0.04204420 699.55658836 0.9356568 0.23920635 0.9931667
0.05263619 0.06127242 0.9265497 0.28134921 0.9839818
0.05364313 34.57839446 0.9264506 0.19560317 0.9934489
0.08838604 47.84104078 0.9029791 0.06296825 0.9955034
ROC was used to select the optimal model using the largest value.
The final values used for the model were sigma = 0.01124608 and C = 21.27349.
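One way to see this in practice is to scan lower probability cutoffs on the cross-validated predictions with caret::thresholder. A sketch; it assumes savePredictions = "all" was added to the trainControl call above, which the example as written does not do:
ths <- thresholder(mod, threshold = seq(0.05, 0.50, by = 0.05), final = TRUE)
ths[, c("prob_threshold", "Sensitivity", "Specificity")]
# Sensitivity rises as the cutoff drops below 0.5, at the cost of some specificity,
# without refitting the model.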
