Multiple evaluation metrics in classification using caret package [duplicate] - r

I used caret for logistic regression in R:
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 10,
savePredictions = TRUE)
mod_fit <- train(Y ~ ., data=df, method="glm", family="binomial",
trControl = ctrl)
print(mod_fit)
The default metrics printed are accuracy and Cohen's kappa. I want to extract matching metrics such as sensitivity, specificity, positive predictive value, etc., but I cannot find an easy way to do it. The final model is provided, but (as far as I can tell from the documentation) it is trained on all the data, so I cannot use it to make predictions afresh.
confusionMatrix() calculates all the required parameters, but passing it as a summary function doesn't work:
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 10,
savePredictions = TRUE, summaryFunction = confusionMatrix)
mod_fit <- train(Y ~ ., data=df, method="glm", family="binomial",
trControl = ctrl)
Error: `data` and `reference` should be factors with the same levels.
13.
stop("`data` and `reference` should be factors with the same levels.",
call. = FALSE)
12.
confusionMatrix.default(testOutput, lev, method)
11.
ctrl$summaryFunction(testOutput, lev, method)
Is there a way to extract this information in addition to accuracy and kappa, or somehow find it in the train object returned by caret's train()?
Thanks in advance!

Caret already has summary functions to output all the metrics you mention:
defaultSummary outputs Accuracy and Kappa
twoClassSummary outputs AUC (the area under the ROC curve; see the note near the end of this answer), sensitivity and specificity
prSummary outputs precision and recall
In order to get the combined metrics, you can write your own summary function that combines the outputs of these three:
library(caret)
MySummary <- function(data, lev = NULL, model = NULL) {
  a1 <- defaultSummary(data, lev, model)   # Accuracy, Kappa
  b1 <- twoClassSummary(data, lev, model)  # ROC, Sens, Spec
  c1 <- prSummary(data, lev, model)        # AUC (precision-recall), Precision, Recall, F
  out <- c(a1, b1, c1)
  out
}
Let's try it on the Sonar data set:
library(mlbench)
data("Sonar")
When defining the train control it is important to set classProbs = TRUE, since some of these metrics (ROC and prAUC) cannot be calculated from the predicted classes alone but require the predicted class probabilities.
ctrl <- trainControl(method = "repeatedcv",
                     number = 10,
                     savePredictions = TRUE,
                     summaryFunction = MySummary,
                     classProbs = TRUE)
Now fit the model of your choice:
mod_fit <- train(Class ~ .,
                 data = Sonar,
                 method = "rf",
                 trControl = ctrl)
mod_fit$results
#output
mtry Accuracy Kappa ROC Sens Spec AUC Precision Recall F AccuracySD KappaSD
1 2 0.8364069 0.6666364 0.9454798 0.9280303 0.7333333 0.8683726 0.8121087 0.9280303 0.8621526 0.10570484 0.2162077
2 31 0.8179870 0.6307880 0.9208081 0.8840909 0.7411111 0.8450612 0.8074942 0.8840909 0.8374326 0.06076222 0.1221844
3 60 0.8034632 0.6017979 0.9049242 0.8659091 0.7311111 0.8332068 0.7966889 0.8659091 0.8229330 0.06795824 0.1369086
ROCSD SensSD SpecSD AUCSD PrecisionSD RecallSD FSD
1 0.04393947 0.05727927 0.1948585 0.03410854 0.12717667 0.05727927 0.08482963
2 0.04995650 0.11053858 0.1398657 0.04694993 0.09075782 0.11053858 0.05772388
3 0.04965178 0.12047598 0.1387580 0.04820979 0.08951728 0.12047598 0.06715206
In this output, ROC is in fact the area under the ROC curve (usually called AUC), and AUC is the area under the precision-recall curve across all cutoffs.
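Since savePredictions = TRUE, the held-out predictions from every resample are also stored in mod_fit$pred, so sensitivity, specificity, PPV, NPV and so on can be read off a pooled confusion matrix. A rough sketch (pooling all hold-outs for the selected mtry; "M" is the first level of Sonar$Class):
best <- subset(mod_fit$pred, mtry == mod_fit$bestTune$mtry)  # keep the chosen tuning value
confusionMatrix(best$pred, best$obs, positive = "M")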

Related

How to set a ppv in caret for random forest in r?

So I'm interested in creating a model that optimizes PPV. I've created an RF model (below) that outputs a confusion matrix, from which I then manually calculate sensitivity, specificity, PPV, NPV, and F1. I know that right now accuracy is optimized, but I'm willing to forgo some sensitivity and specificity to get a much higher PPV.
data_ctrl_null <- trainControl(method="cv", number = 5, classProbs = TRUE, summaryFunction=twoClassSummary, savePredictions=T, sampling=NULL)
set.seed(5368)
model_htn_df <- train(outcome ~ ., data=htn_df, ntree = 1000, tuneGrid = data.frame(mtry = 38), trControl = data_ctrl_null, method= "rf",
preProc=c("center","scale"),metric="ROC", importance=TRUE)
model_htn_df$finalModel #provides confusion matrix
Results:
Call:
randomForest(x = x, y = y, ntree = 1000, mtry = param$mtry, importance = TRUE)
Type of random forest: classification
Number of trees: 1000
No. of variables tried at each split: 38
OOB estimate of error rate: 16.2%
Confusion matrix:
no yes class.error
no 274 19 0.06484642
yes 45 57 0.44117647
My manual calculations: sens = 55.9%, spec = 93.5%, PPV = 75.0%, NPV = 85.9%. (The confusion matrix switches my "no" and "yes" outcomes, so I also switch the numbers when I calculate the performance metrics.)
So what do I need to do to get a PPV = 90%?
This is a similar question, but I'm not really following it.
We define a function to calculate PPV and return the results with a name:
PPV <- function(data, lev = NULL, model = NULL) {
  value <- posPredValue(data$pred, data$obs, positive = lev[1])
  c(PPV = value)
}
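For reference, caret calls this summary function with a data frame of held-out predictions that has (at least) an obs column, a pred column and, when classProbs = TRUE, one probability column per class level. You can sanity-check it on a toy frame (hypothetical values; with these levels, lev[1] is "others"):
toy <- data.frame(obs  = factor(c("versi", "others", "versi", "others")),
                  pred = factor(c("versi", "versi",  "versi", "others")))
PPV(toy, lev = levels(toy$obs))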
Let's say we have the following data:
library(randomForest)
library(caret)
data=iris
data$Species = ifelse(data$Species == "versicolor","versi","others")
trn = sample(nrow(iris),100)
Then we train by specifying PPV to be the metric:
mdl <- train(Species ~ ., data = data[trn,],
method = "rf",
metric = "PPV",
trControl = trainControl(summaryFunction = PPV,
classProbs = TRUE))
Random Forest
100 samples
4 predictor
2 classes: 'others', 'versi'
No pre-processing
Resampling: Bootstrapped (25 reps)
Summary of sample sizes: 100, 100, 100, 100, 100, 100, ...
Resampling results across tuning parameters:
mtry PPV
2 0.9682811
3 0.9681759
4 0.9648426
PPV was used to select the optimal model using the largest value.
The final value used for the model was mtry = 2.
Now you can see it is trained on PPV. However, you cannot force the training to achieve a PPV of 0.9; it really depends on the data. If your independent variables have no predictive power, the PPV will not improve no matter how much you train, right?
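One lever the answer above does not touch: if you also set savePredictions in trainControl, you can usually trade recall for PPV by raising the probability cutoff rather than by retraining. A rough sketch under that assumption (positive class "others", matching lev[1] in the PPV function above):
ctrl <- trainControl(summaryFunction = PPV, classProbs = TRUE,
                     savePredictions = "final")
mdl <- train(Species ~ ., data = data[trn, ], method = "rf",
             metric = "PPV", trControl = ctrl)
p <- mdl$pred
for (cut in c(0.5, 0.7, 0.9)) {
  # predict "others" only when its probability clears the cutoff
  hard <- factor(ifelse(p$others >= cut, "others", "versi"),
                 levels = levels(p$obs))
  cat("cutoff", cut, "PPV", posPredValue(hard, p$obs, positive = "others"), "\n")
}
Whether 90% PPV is reachable still depends on the data, but the cutoff is usually an easier lever than the tuning grid.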

How to calculate AUC under twoClassSummary?

here is my code:
train <- data.frame(...)   # contains the label, feature group 1 and feature group 2
formula <- label ~ ...     # the label modelled on feature group 1
ctrl <- trainControl(method = "repeatedcv",
number = 10,
repeats = 5,
summaryFunction = twoClassSummary,
classProbs = T)
fit <- train(formula,
data = train,
method = "glm",
metric = "ROC",
trControl = ctrl,
na.action = na.omit)
pred <- predict(fit, train)
My question is: how do I calculate the AUC of pred?
I've tried prSummary, ROCR and pROC, but they didn't work; it seems that I cannot calculate the AUC when obs and pred are exactly the same (levels-wise).
I'm wondering: if I can train with AUC as the metric, why can't I compute the AUC afterwards?
p.s.
> levels(train$label)
[1] "classA" "classB"
> levels(as.factor(pred))
[1] "classA" "classB"
By the way, what I am doing is fitting multiple algorithms with caret and ranking them by AUC, so that I can choose the optimal one (based on AUC).
Reproducible example:
train set: iris
feature g1: first 2 features
feature g2: last 2 features
seed: 123
This could be a possible answer, but I'm not so sure it's right; tell me if I'm wrong.
response = as.factor(as.numeric(train$label))
predictor = as.vector(as.numeric(pred))
library(pROC)
result = as.numeric(roc(response, predictor)$auc)
By the way, since pROC runs very slowly, can anyone help me convert this to the ROCR package? Thanks a lot :)
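For what it's worth, the usual culprit here is that the AUC needs the predicted class probabilities, not the hard class labels returned by predict(fit, train). A sketch using the fit and train objects above (both pROC and ROCR treat the second factor level, classB, as the positive class here):
probs <- predict(fit, train, type = "prob")   # data frame with columns classA, classB
library(pROC)
auc_proc <- as.numeric(roc(train$label, probs$classB)$auc)
library(ROCR)                                 # ROCR version, since pROC felt slow
pr <- prediction(probs$classB, train$label)
auc_rocr <- performance(pr, "auc")@y.values[[1]]
Note that this is the resubstitution AUC on the training data; the cross-validated ROC is already reported in fit$results.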

How to fix "The metric "Accuracy" was not in the result set. AUC will be used instead"

I am trying to run a logistic regression on a classification problem. The dependent variable "SUBSCRIBEDYN" is a factor with 2 levels ("Yes" and "No").
train.control <- trainControl(method = "repeatedcv",
number = 10,
repeats = 10,
verboseIter = F,
classProbs = T,
summaryFunction = prSummary)
set.seed(13)
simple.logistic.regression <- caret::train(SUBSCRIBEDYN ~ .,
data = train_data,
method = "glm",
metric = "Accuracy",
trControl = train.control)
simple.logistic.regression
However, it does not accept Accuracy as a metric
"The metric "Accuracy" was not in the result set. AUC will be used instead"
The metric has to be one of the values returned by the summaryFunction you supplied. With summaryFunction = prSummary the result set contains AUC, Precision, Recall and F, so "Accuracy" is not available and caret falls back to AUC (hence the warning). For a two-class model with summaryFunction = twoClassSummary you would use metric = "ROC" instead; if you want to optimise Accuracy, keep the default summary function. In any case, after training the model you can retrieve the accuracy from the confusion matrix, for example using the function confusionMatrix().
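A minimal sketch of that last option, reusing train_data and SUBSCRIBEDYN from the question: keep prSummary but request a metric it actually returns, then pool the saved hold-out predictions into a confusion matrix, which reports Accuracy along with sensitivity, specificity and PPV.
train.control <- trainControl(method = "repeatedcv", number = 10, repeats = 10,
                              classProbs = TRUE, savePredictions = "final",
                              summaryFunction = prSummary)
fit <- caret::train(SUBSCRIBEDYN ~ ., data = train_data, method = "glm",
                    metric = "AUC",   # AUC is in prSummary's result set
                    trControl = train.control)
caret::confusionMatrix(fit$pred$pred, fit$pred$obs)  # Accuracy, Kappa, Sens, Spec, PPV, ...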

Additional metrics in caret - PPV, sensitivity, specificity


Variable importance with ranger

I trained a random forest using caret + ranger.
fit <- train(
y ~ x1 + x2
,data = total_set
,method = "ranger"
,trControl = trainControl(method="cv", number = 5, allowParallel = TRUE, verbose = TRUE)
,tuneGrid = expand.grid(mtry = c(4,5,6))
,importance = 'impurity'
)
Now I'd like to see the importance of variables. However, none of these work:
> importance(fit)
Error in UseMethod("importance") : no applicable method for 'importance' applied to an object of class "c('train', 'train.formula')"
> fit$variable.importance
NULL
> fit$importance
NULL
> fit
Random Forest
217380 samples
32 predictors
No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 173904, 173904, 173904, 173904, 173904
Resampling results across tuning parameters:
mtry RMSE Rsquared
4 0.03640464 0.5378731
5 0.03645528 0.5366478
6 0.03651451 0.5352838
RMSE was used to select the optimal model using the smallest value.
The final value used for the model was mtry = 4.
Any idea if and how I can get it?
Thanks.
varImp(fit) will get it for you.
To figure that out, I looked at names(fit), which led me to names(fit$modelInfo) - then you'll see varImp as one of the options.
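For example, a quick sketch on the fit from the question (caret scales the scores to 0-100 by default):
vi <- varImp(fit)
vi
plot(vi, top = 10)   # ten most important variables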
According to @fmalaussena:
set.seed(123)
ctrl <- trainControl(method = 'cv',
number = 10,
classProbs = TRUE,
savePredictions = TRUE,
verboseIter = TRUE)
rfFit <- train(Species ~ .,
data = iris,
method = "ranger",
importance = "permutation", #***
trControl = ctrl,
verbose = T)
You can pass either "permutation" or "impurity" to the importance argument.
A description of both values can be found here: https://alexisperrier.com/datascience/2015/08/27/feature-importance-random-forests-gini-accuracy.html
If you fit the model directly with the ranger package (rather than through caret), you can access the importance with
fit$variable.importance
For a caret train object, the same vector is stored in fit$finalModel$variable.importance.
As a side note, you can see all the available outputs for the model using str():
str(fit)
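Putting the two answers together, a short sketch with the rfFit from the iris example above (importance = "permutation" was requested, so ranger keeps the raw scores in the final model):
varImp(rfFit)                                  # caret's scaled view
raw <- rfFit$finalModel$variable.importance    # raw permutation importance from ranger
sort(raw, decreasing = TRUE)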
