I want to perform a logistic regression with the train() function from the caret package. My model looks something like this:
model <- train(Y ~.,
data = train_data,
family = "binomial",
method = "glmnet")
With the resulting model, I want to make predictions:
pred <- predict(model, newdata = test_data, s = "lambda.min", type = "prob")
Now I want to evaluate how good the model's predictions are compared with the actual test data. I know how to obtain the ROC curve and the AUC, but I am also interested in the Brier score. The formula for the Brier score is almost identical to the MSE.
The problem I am facing is that the type argument in predict only allows "prob" (or "class", which I am not interested in), which gives the probability of a prediction being a ONE (e.g. 0.64) and the complementary probability of it being a ZERO (e.g. 0.36). For the Brier score, however, I need one probability estimate per prediction that contains the information of both (e.g. a value above 0.5 would indicate a 1 and a value below 0.5 would indicate a 0).
I have not found any solution for obtaining the Brier score within the caret package. I am aware that with cv.glmnet from the glmnet package, the predict function accepts type = "response", which would solve my problem. However, for personal preference I would like to stay with the caret package.
Thanks for the help!
If we go by the Wikipedia definition of the Brier score:
The most common formulation of the Brier score is
BS = (1/N) * sum_{t=1}^{N} (f_t - o_t)^2
where f_t is the probability that was forecast, o_t is the actual outcome of the event at instance t (0 or 1), and N is the number of forecasting instances.
In R, if your label is a factor, logistic regression will always predict with respect to the second level, so you just calculate the probability and the 0/1 outcome with respect to that level. For example:
library(caret)
idx = sample(nrow(iris),100)
data = iris
data$Species = factor(ifelse(data$Species=="versicolor","v","o"))
levels(data$Species)
[1] "o" "v"
In this case, o is 0 and v is 1.
train_data = data[idx,]
test_data = data[-idx,]
model <- train(Species ~.,data = train_data,family = "binomial",method = "glmnet")
pred <- predict(model, newdata = test_data, type = "prob")
So we can see the probabilities for each class:
head(pred)
o v
1 0.8367885 0.16321154
2 0.7970508 0.20294924
3 0.6383656 0.36163437
4 0.9510763 0.04892370
5 0.9370721 0.06292789
To calculate the score:
f_t = pred[,2]
o_t = as.numeric(test_data$Species)-1
mean((f_t - o_t)^2)
[1] 0.32
I use the Brier score to tune my models in caret for binary classification. I make sure that the "positive" class is the second factor level, which is what you get by default when the response is labelled 0/1. Then I created this master summary function, based on caret's own suite of summary functions, to return all the metrics I want to see:
BigSummary <- function (data, lev = NULL, model = NULL) {
  # area under the precision-recall curve
  pr_auc <- try(MLmetrics::PRAUC(data[, lev[2]],
                                 ifelse(data$obs == lev[2], 1, 0)),
                silent = TRUE)
  # Brier score: mean squared difference between predicted probability and 0/1 outcome
  brscore <- try(mean((data[, lev[2]] - ifelse(data$obs == lev[2], 1, 0)) ^ 2),
                 silent = TRUE)
  # area under the ROC curve
  rocObject <- try(pROC::roc(ifelse(data$obs == lev[2], 1, 0), data[, lev[2]],
                             direction = "<", quiet = TRUE), silent = TRUE)
  if (inherits(pr_auc, "try-error")) pr_auc <- NA
  if (inherits(brscore, "try-error")) brscore <- NA
  rocAUC <- if (inherits(rocObject, "try-error")) {
    NA
  } else {
    rocObject$auc
  }
  # accuracy and Kappa from the cross-tabulation of observed vs. predicted classes
  tmp <- unlist(e1071::classAgreement(table(data$obs,
                                            data$pred)))[c("diag", "kappa")]
  out <- c(Acc = tmp[[1]],
           Kappa = tmp[[2]],
           AUCROC = rocAUC,
           AUCPR = pr_auc,
           Brier = brscore,
           Precision = caret:::precision.default(data = data$pred,
                                                 reference = data$obs,
                                                 relevant = lev[2]),
           Recall = caret:::recall.default(data = data$pred,
                                           reference = data$obs,
                                           relevant = lev[2]),
           F = caret:::F_meas.default(data = data$pred, reference = data$obs,
                                      relevant = lev[2]))
  out
}
Now I can simply pass summaryFunction = BigSummary in trainControl and then metric = "Brier", maximize = FALSE in the train call.
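For reference, here is a minimal sketch of how this plugs into training, reusing the glmnet setup from the question (the repeated-CV settings are just an illustrative choice, not part of the original answer):
ctrl <- trainControl(method = "repeatedcv",
                     number = 10,
                     repeats = 3,
                     classProbs = TRUE,            # the summary function needs class probabilities
                     summaryFunction = BigSummary)
model <- train(Y ~ ., data = train_data,
               method = "glmnet",
               family = "binomial",
               metric = "Brier",                   # tune on the Brier score
               maximize = FALSE,                   # smaller Brier is better
               trControl = ctrl)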
I am running some random forest classification models with caret (using ranger). I am looping through various dependent variables, which all have five levels, so it is a multiclass classification problem. Several of these variables suffer from rather large class imbalances. Right now, I am training the ranger model in caret using a custom F1 function and summaryFunction = f1 (like the one presented here: Training Model in Caret Using F1 Metric).
The code for the f1 function (for training) looks like this (together with the code for training the actual model):
f1 <- function(data, lev = NULL, model = NULL) {
f1_val <- MLmetrics::F1_Score(y_pred = data$pred,
y_true = data$obs,
positive = lev[1])
c(F1 = f1_val)
}
traincntr <- trainControl(method = "cv",
number = 5,
search = "grid",
verboseIter = TRUE,
summaryFunction = f1,
classProbs = TRUE,
allowParallel = TRUE)
fit_rf <- caret::train(y = y,
x = predictors,
method = 'ranger',
metric = "F1",
tuneGrid = tuning_grid_rf,
trControl = traincntr,
importance = "permutation",
verbose = TRUE)
Afterwards, when evaluating the model's predictions on the test set, I use the following custom function (also from the link above):
f1_score <- function(predicted, expected, positive.class="1") {
predicted <- factor(as.character(predicted), levels=unique(as.character(expected)))
expected <- as.factor(expected)
cm = as.matrix(table(expected, predicted))
precision <- diag(cm) / colSums(cm)
recall <- diag(cm) / rowSums(cm)
f1 <- ifelse(precision + recall == 0, 0, 2 * precision * recall / (precision + recall))
# Assuming that F1 is zero when it is not possible to compute it
f1[is.na(f1)] <- 0
#Binary F1 or Multi-class macro-averaged F1
ifelse(nlevels(expected) == 2, f1[positive.class], mean(f1))
}
How do I turn either the f1 metric used for training ("f1") or the one used after predicting on the test set ("f1_score") into a weighted f1 metric?
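One possible approach (a sketch of my own, not from the linked post): compute the per-class F1 scores and weight them by each class's share of the observations in the resample, similar to a "weighted" macro average. The function name f1_weighted and the choice to treat undefined F1 values as 0 are assumptions on my part:
f1_weighted <- function(data, lev = NULL, model = NULL) {
  # per-class (one-vs-rest) F1, treating classes with undefined F1 as 0
  f1_per_class <- sapply(lev, function(cls) {
    val <- MLmetrics::F1_Score(y_pred = data$pred,
                               y_true = data$obs,
                               positive = cls)
    ifelse(is.na(val), 0, val)
  })
  # weight each class by its frequency among the observed outcomes
  wts <- as.numeric(table(data$obs)[lev]) / nrow(data)
  c(F1_weighted = sum(f1_per_class * wts))
}
For the test-set version, the same weights (class frequencies in expected) can be applied to the f1 vector inside f1_score before averaging; to tune on it, pass summaryFunction = f1_weighted in trainControl and metric = "F1_weighted" in train().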
I am trying to investigate my data with machine learning in R, but training the model does not work well in general.
## Logistic regression multiclass
for (i in 1:30) {
# split data into training/test
trainPhyIndex <- createDataPartition(subs_phy$Methane, p=10/17,list = FALSE)
trainingPhy <- subs_phy[trainPhyIndex,]
testingPhy <- subs_phy[-trainPhyIndex,]
# Pre-process predictor values
trainXphy <- trainingPhy[,names(trainingPhy)!= "Methane"]
preProcValuesPhy <- preProcess(x= trainXphy,method = c("center","scale"))
# using repeated cross-validation to avoid over-fitting
fitControlPhyGLMNET <- trainControl(method = "repeatedcv",
number = 10,
repeats = 4,
savePredictions="final",
classProbs = TRUE
)
fit_glmnet_phy <- train (Methane~.,
trainingPhy,
method = "glmnet",
tuneGrid = expand.grid(
.alpha =0.1,
.lambda = 0.00023),
metric = "Accuracy",
trControl = fitControlPhyGLMNET)
pred_glmnet_phy <- predict(fit_glmnet_phy, testingPhy)
# Get the confusion matrix to see accuracy value
u <- union(pred_glmnet_phy,testingPhy$Methane)
t <- table(factor(pred_glmnet_phy, u), factor(testingPhy$Methane, u))
accu_glmnet_phy <- confusionMatrix(t)
# accu_glmnet_phy<-confusionMatrix(pred_glmnet_phy,testingPhy$Methane)
glmnetstatsPhy[(nrow(glmnetstatsPhy)+1),] = accu_glmnet_phy$overall
}
glmnetstatsPhy
The program always stops at the fit_glmnet_phy <- train(Methane ~ ., ...) call and shows
Metric Accuracy not applicable for regression models
I have no idea what causes this error.
I have also attached a screenshot showing the type of the Methane variable.
Try normalizing the input columns and mapping the output column to a factor. This helped me resolve a similar issue.
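For illustration, a minimal sketch using the object names from that question (assuming Methane holds discrete class codes), applied once before the data are split:
# turn the response into a factor with syntactically valid level names (needed when classProbs = TRUE)
subs_phy$Methane <- factor(make.names(subs_phy$Methane))
# let train() center and scale the predictors
fit_glmnet_phy <- train(Methane ~ ., data = trainingPhy,
                        method = "glmnet",
                        preProcess = c("center", "scale"),
                        metric = "Accuracy",
                        trControl = fitControlPhyGLMNET)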
Below is an excerpt from the code I am trying on the German credit dataset.
I am trying to make a generic function for ensemble techniques for my shinydashboard.
The problem is with gbm: the R session will crash if the response variable is not converted to a factor.
If the response variable is converted to a factor, then randomForest will not produce the OOB error rate and confusion matrix in its output.
Please advise.
The response variable is "default". Before applying the model, the response variable is treated as follows:
## load the dataset
data_x = read.csv("credit.csv")
## Preprocessing the dataset
data_x$default <- ifelse(data_x$default == "yes", 1, 0)
##Loading packages
pacman::p_load(shiny,shinydashboard,gbm,
randomForest,ggplot2,ipred,caret,ROCR,dplyr,ModelMetrics)
User-defined function:
model <- function(algo = gbm, distribution = 'bernoulli',
                  type = 'response', set = 'AUC', n.trees = 10000){
  ## Fit the model
  model <- algo(formula = default ~ .,
                distribution = distribution,
                data = train,
                n.trees = n.trees,
                cv.folds = 3)
  ## Generate the predictions on the test set
  pred <- predict(object = model,
                  newdata = test,
                  n.trees = n.trees,
                  type = type)
  ## Generate the test set AUC using the predictions
  AUC <- auc(actual = test$default, predicted = pred)
  if (set == 'AUC') {
    return(AUC)
  } else if (set == 'predictions') {
    return(pred)
  } else if (set == 'model') {
    return(model)
  } else {
    return(NULL)
  }
}
Now call the different models:
get_model <- function(algo, type = 'response', ntrees = 10000){
  model(algo = algo, type = type, set = 'model', n.trees = ntrees)
}
Bag_model<- get_model(algo = bagging, type='prob')
RF_model<- get_model(algo = randomForest)
GBM_model<- get_model(algo = gbm)
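For what it's worth, a common way to satisfy both libraries (a sketch, not the original code; train is assumed to be the data frame from the question with the 0/1 default column): keep a numeric 0/1 response for gbm's bernoulli distribution and a factor copy for randomForest, so the latter runs in classification mode and reports the OOB error rate and confusion matrix:
# gbm: numeric 0/1 response with distribution = "bernoulli"
gbm_fit <- gbm(default ~ ., distribution = "bernoulli",
               data = train, n.trees = 1000, cv.folds = 3)
# randomForest: factor response so it fits a classifier
train_rf <- train
train_rf$default <- factor(train_rf$default, levels = c(0, 1), labels = c("no", "yes"))
rf_fit <- randomForest(default ~ ., data = train_rf)
rf_fit$confusion   # OOB confusion matrix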
I am using the class package in order to use the KNN algorithm. I am also using the ROCR package to calculate the AUC value.
knn_one<-knn(train, test, train$Digit, k=1)
To calculate the AUC value for another method, e.g. classification trees, I used this series of commands:
treeTrain_Pred<-predict(Tree_Train, test , type = "prob")[,2]
Pred<-prediction(treeTrain_Pred, test$Digit)
Perf<-performance(Pred, "auc")
Perf@y.values[[1]]
However, when I try
knn_one = predict(knn_one, test, type="prob")[,2]
I get the following error:
Error in UseMethod("predict") :
no applicable method for 'predict' applied to an object of class "factor"
How can I fix this and obtain an AUC value for my KNN function?
There is no predict method for knn models; instead, you train and receive predictions in a single call. Example on the Sonar data:
library(mlbench)
data(Sonar)
create data partition:
set.seed(1)
tr_ind <- sample(1:nrow(Sonar), 150)
train <- Sonar[tr_ind,]
test <- Sonar[-tr_ind,]
mod <- class::knn(cl = train$Class,
test = test[,1:60],
train = train[,1:60],
k = 5,
prob = TRUE)
Now the probabilities of the predictions are in:
attributes(mod)$prob
library(pROC)
roc(test$Class, attributes(mod)$prob)
#output
Call:
roc.default(response = test$Class, predictor = attributes(mod)$prob)
Data: attributes(mod)$prob in 30 controls (test$Class M) < 28 cases (test$Class R).
Area under the curve: 0.4667
plot(roc(test$Class, attributes(mod)$prob),
print.thres = T,
print.auc=T)
Let's try with k = 4:
mod <- class::knn(cl = train$Class,
test = test[,1:60],
train = train[,1:60],
k = 4,
prob = TRUE)
plot(roc(test$Class, attributes(mod)$prob),
print.thres = T,
print.auc = T,
print.auc.y = 0.2)
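If you would rather stay with ROCR, as in the question's tree example, the same vote proportions can be passed to prediction()/performance() (a sketch using mod and test from above):
library(ROCR)
pred_obj <- prediction(attributes(mod)$prob, test$Class)
performance(pred_obj, "auc")@y.values[[1]]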
I have imbalanced data, and I want to do stratified cross-validation and use the precision-recall AUC as my evaluation metric.
I use prSummary in the R package caret with a stratified index, and I encounter an error when computing performance.
The following is a reproducible sample. I found that only ten samples are used to compute the PR AUC, and because of the imbalance only one class is present among them, so the PR AUC cannot be computed. (I found out that only ten samples are used because I modified prSummary to print out the data.)
library(randomForest)
library(mlbench)
library(caret)
# Load Dataset
data(Sonar)
dataset <- Sonar
x <- dataset[,1:60]
y <- dataset[,61]
# make this data very imbalanced
y[4:length(y)] <- "M"
y <- as.factor(y)
dataset$Class <- y
# create index and indexOut
seed <- 1
set.seed(seed)
folds <- 2
idxAll <- 1:nrow(x)
cvIndex <- createFolds(factor(y), folds, returnTrain = T)
cvIndexOut <- lapply(1:length(cvIndex), function(i){
idxAll[-cvIndex[[i]]]
})
names(cvIndexOut) <- names(cvIndex)
# set the index, indexOut and prSummaryCorrect
control <- trainControl(index = cvIndex, indexOut = cvIndexOut,
method="cv", summaryFunction = prSummary, classProbs = T)
metric <- "AUC"
set.seed(seed)
mtry <- sqrt(ncol(x))
tunegrid <- expand.grid(.mtry=mtry)
rf_default <- train(Class~., data=dataset, method="rf", metric=metric, tuneGrid=tunegrid, trControl=control)
Here is the error message:
Error in ROCR::prediction(y_pred, y_true) :
Number of classes is not equal to 2.
ROCR currently supports only evaluation of binary classification tasks.
I think I found the cause of the odd behaviour.
Even though I specified the cross-validation indices, the summary function (whether prSummary or any other summary function) still seems to randomly select ten samples to compute performance (I am not sure about this).
My workaround was to define a summary function with tryCatch so that this error does not stop training.
prSummaryCorrect <- function (data, lev = NULL, model = NULL) {
  print(data)
  print(dim(data))
  library(MLmetrics)
  library(PRROC)
  if (length(levels(data$obs)) != 2)
    stop(levels(data$obs))
  if (length(levels(data$obs)) > 2)
    stop(paste("Your outcome has", length(levels(data$obs)),
               "levels. The prSummary() function isn't appropriate."))
  if (!all(levels(data[, "pred"]) == levels(data[, "obs"])))
    stop("levels of observed and predicted data do not match")
  res <- tryCatch({
    auc <- MLmetrics::PRAUC(y_pred = data[, lev[2]], y_true = ifelse(data$obs == lev[2], 1, 0))
  }, warning = function(war) {
    print(war)
    auc <- NA
  }, error = function(e) {
    print(dim(data))
    auc <- NA
  }, finally = {
    print("finally")
    auc <- NA
  })
  c(AUC = res,
    Precision = caret:::precision.default(data = data$pred, reference = data$obs, relevant = lev[2]),
    Recall = caret:::recall.default(data = data$pred, reference = data$obs, relevant = lev[2]),
    F = caret:::F_meas.default(data = data$pred, reference = data$obs, relevant = lev[2]))
}
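To use the corrected summary function, swap it into the same trainControl call as before (a sketch built from the code above):
control <- trainControl(index = cvIndex, indexOut = cvIndexOut,
                        method = "cv", summaryFunction = prSummaryCorrect,
                        classProbs = TRUE)
rf_default <- train(Class~., data=dataset, method="rf", metric=metric,
                    tuneGrid=tunegrid, trControl=control)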