How to change the threshold for binary classification - r

I've trained a glm model in R with caret. Since I'm trying to predict a very rare case, I get a lot of false positives. I want to change the threshold for a positive ("Good") case from the default of 0.5 to 0.7. Here is my code so far:
modFit.glm.ml <- train(as.factor(ml.training$one_lease)~., data=ml.training, method = "glm")
confusionMatrix(ml.testing$one_lease, predict(modFit.glm.ml, ml.testing), positive = "Good")
This code works but it uses the default cutoff.
Someone mentioned this is possible with the predict function, but I don't know how to do it.

You haven't provided a reproducible example, so here's one using the iris dataset to predict if an iris is of type setosa:
dat <- iris
dat$positive <- as.factor(ifelse(dat$Species == "setosa", "s", "ns"))
library(caret)
mod <- train(positive~Sepal.Length, data=dat, method="glm")
To generate a confusion matrix using a cutoff for predicted probabilities other than 0.5, you can threshold the probabilities returned by the predict function using whatever cutoff you want:
confusionMatrix(table(predict(mod, type = "prob")[, "s"] >= 0.25,
                      dat$positive == "s"))
# Confusion Matrix and Statistics
#
#
# FALSE TRUE
# FALSE 88 3
# TRUE 12 47
#
# Accuracy : 0.9
# 95% CI : (0.8404, 0.9429)
# No Information Rate : 0.6667
# P-Value [Acc > NIR] : 2.439e-11
#
# Kappa : 0.7847
# Mcnemar's Test P-Value : 0.03887
#
# Sensitivity : 0.8800
# Specificity : 0.9400
# Pos Pred Value : 0.9670
# Neg Pred Value : 0.7966
# Prevalence : 0.6667
# Detection Rate : 0.5867
# Detection Prevalence : 0.6067
# Balanced Accuracy : 0.9100
#
# 'Positive' Class : FALSE
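If you also want confusionMatrix to report sensitivity and specificity relative to a chosen positive class, one option at the asker's 0.7 cutoff is to turn the thresholded probabilities back into a factor (a sketch reusing the mod and dat objects above):
probs <- predict(mod, type = "prob")[, "s"]
pred.class <- factor(ifelse(probs >= 0.7, "s", "ns"), levels = c("ns", "s"))
confusionMatrix(pred.class, dat$positive, positive = "s")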

You haven't specified which package you want to use, so here's another solution using mlr:
library(mlr)
dat = iris
training.set = seq(1, nrow(iris), by = 2)
test.set = seq(2, nrow(iris), by = 2)
dat$positive = as.factor(ifelse(dat$Species == "setosa", "s", "ns"))
task = makeClassifTask(data = dat, target = "positive")
lrn = makeLearner("classif.glmnet", predict.type = "prob")
mod = train(lrn, task, subset = training.set)
pred = predict(mod, task, subset = test.set)
print(getConfMatrix(pred))
pred = setThreshold(pred, c(s = 1))
print(getConfMatrix(pred))
mlr allows you to set the threshold explicitly with setThreshold -- the advantage is that you can use the resulting predictions with any function that measures performance without having to make sure that the threshold is set correctly.
The mlr tutorial has a whole section on classifier calibration, which can help you figure out the best value for this threshold.
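For the asker's 0.7 cutoff specifically, the same pattern applies (a sketch reusing the pred object from above; the exact confusion matrix helper may differ across mlr versions):
pred70 <- setThreshold(pred, c(s = 0.7))
print(getConfMatrix(pred70))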

Related

Bootstrap t test in R without reliance on MKinfer package

I would like to get a CI for a paired t-test using the bootstrap in R. Unfortunately, I don't have privileges to install MKinfer. Using MKinfer I would do it like this (as in T-test with bootstrap in R):
boot.t.test(
  x = iris["Petal.Length"],
  y = iris["Sepal.Length"],
  alternative = c("two.sided"),
  mu = 0,
  #paired = TRUE,
  conf.level = 0.95,
  R = 9999
)
How would I do this for paired data, with CIs and p-values, without relying on MKinfer (relying on boot would be fine)?
Here is an example using boot with R = 1000 bootstrap replicates:
library(boot)
x <- iris$Petal.Length
y <- iris$Sepal.Length
change_in_mean <- function(df, indices) t.test(
  df[indices, 1], df[indices, 2], paired = TRUE, var.equal = FALSE)$estimate
model <- boot(
  data = cbind(x, y),
  statistic = change_in_mean,
  R = 1000)
We can calculate the confidence interval of the estimated change in the mean using boot.ci
boot.ci(model, type = "norm")
#BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
#Based on 1000 bootstrap replicates
#
#CALL :
#boot.ci(boot.out = model, type = "norm")
#
#Intervals :
#Level Normal
#95% (-2.262, -1.905 )
#Calculations and Intervals on Original Scale
Note that this is very close to the CI reported by t.test
t.test(x, y, paired = TRUE, var.equal = FALSE)
#
# Paired t-test
#
#data: x and y
#t = -22.813, df = 149, p-value < 2.2e-16
#alternative hypothesis: true difference in means is not equal to 0
#95 percent confidence interval:
# -2.265959 -1.904708
#sample estimates:
#mean of the differences
# -2.085333
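If you also want the other interval types that boot.ci supports, you can request them from the same boot object (a sketch; the studentized interval would additionally require a variance estimate returned by the statistic function, so it is omitted here):
boot.ci(model, type = c("norm", "basic", "perc", "bca"))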

R: Caret package: Brier Score

I want to perform a logistic regression with the train() function from the caret package. My model looks something like this:
model <- train(Y ~ .,
               data = train_data,
               family = "binomial",
               method = "glmnet")
With the resulting model, I want to make predictions:
pred <- predict(model, newdata = test_data, s = "lambda.min", type = "prob")
Now, I want to evaluate how good the model predictions are in comparison with the actual test data. I know how to obtain the ROC and AUC for this. However, I am also interested in obtaining the Brier score. The formula for the Brier score is almost identical to the MSE.
The problem I am facing is that the type argument in predict only allows "prob" (or "class", which I am not interested in), which gives the probability of a prediction being a ONE (e.g. 0.64) and the complementary probability of it being a ZERO (e.g. 0.36). For the Brier score, however, I need one probability estimate per prediction that combines both (e.g. a value above 0.5 would indicate a 1 and a value below 0.5 would indicate a 0).
I have not found any solution for obtaining the Brier score in the caret package. I am aware that with glmnet's cv.glmnet the predict function allows type = "response", which would solve my problem. However, for personal preferences I would like to stay with the caret package.
Thanks for the help!
If we go by the Wikipedia definition of the Brier score:
The most common formulation of the Brier score is
BS = (1/N) * sum_{t=1..N} (f_t - o_t)^2
where f_t is the probability that was forecast, o_t the actual outcome of the event (0 or 1), and N is the number of forecasting instances.
In R, if your label is a factor, then the logistic regression will always predict with respect to the 2nd level, meaning you just calculate the probability and 0/1 with respect to that. For example:
library(caret)
idx = sample(nrow(iris),100)
data = iris
data$Species = factor(ifelse(data$Species=="versicolor","v","o"))
levels(data$Species)
[1] "o" "v"
In this case, o is 0 and v is 1.
train_data = data[idx,]
test_data = data[-idx,]
model <- train(Species ~.,data = train_data,family = "binomial",method = "glmnet")
pred <- predict(model, newdata = test_data, type = "prob")
So we can see the probability of each class:
head(pred)
o v
1 0.8367885 0.16321154
2 0.7970508 0.20294924
3 0.6383656 0.36163437
4 0.9510763 0.04892370
5 0.9370721 0.06292789
To calculate the score:
f_t = pred[,2]
o_t = as.numeric(test_data$Species)-1
mean((f_t - o_t)^2)
[1] 0.32
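Equivalently, o_t can be built without relying on the numeric coding of the factor levels (a small sketch using the same objects):
o_t = as.numeric(test_data$Species == "v")
mean((f_t - o_t)^2)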
I use the Brier score to tune my models in caret for binary classification. I ensure that the "positive" class is the second class, which is the default when you label your response levels "0" and "1". Then I created this master summary function, based on caret's own suite of summary functions, to return all the metrics I want to see:
BigSummary <- function (data, lev = NULL, model = NULL) {
  pr_auc <- try(MLmetrics::PRAUC(data[, lev[2]],
                                 ifelse(data$obs == lev[2], 1, 0)),
                silent = TRUE)
  brscore <- try(mean((data[, lev[2]] - ifelse(data$obs == lev[2], 1, 0)) ^ 2),
                 silent = TRUE)
  rocObject <- try(pROC::roc(ifelse(data$obs == lev[2], 1, 0), data[, lev[2]],
                             direction = "<", quiet = TRUE), silent = TRUE)
  if (inherits(pr_auc, "try-error")) pr_auc <- NA
  if (inherits(brscore, "try-error")) brscore <- NA
  rocAUC <- if (inherits(rocObject, "try-error")) {
    NA
  } else {
    rocObject$auc
  }
  tmp <- unlist(e1071::classAgreement(table(data$obs,
                                            data$pred)))[c("diag", "kappa")]
  out <- c(Acc = tmp[[1]],
           Kappa = tmp[[2]],
           AUCROC = rocAUC,
           AUCPR = pr_auc,
           Brier = brscore,
           Precision = caret:::precision.default(data = data$pred,
                                                 reference = data$obs,
                                                 relevant = lev[2]),
           Recall = caret:::recall.default(data = data$pred,
                                           reference = data$obs,
                                           relevant = lev[2]),
           F = caret:::F_meas.default(data = data$pred, reference = data$obs,
                                      relevant = lev[2]))
  out
}
Now I can simply pass summaryFunction = BigSummary in trainControl and then metric = "Brier", maximize = FALSE in the train call.
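A sketch of how that wiring can look, reusing the train_data from the glmnet example above for illustration (classProbs = TRUE is needed because BigSummary reads the class probability columns):
ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE,
                     summaryFunction = BigSummary)
fit <- train(Species ~ ., data = train_data,
             family = "binomial", method = "glmnet",
             trControl = ctrl,
             metric = "Brier", maximize = FALSE)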

Choosing a class with max probability for every bootstrap sample

I'm trying to write a for loop that creates bootstrap samples from the weather data in the rattle.data package (with RainTomorrow as the target column). I want to choose the class with maximum probability for every single bootstrap sample, then predict the class with the maximum votes.
With this code I keep getting back a warning:
if(!require(rpart)) install.packages("rpart")
if(!require(rpart.plot)) install.packages("rpart.plot")
if(!require(caret)) install.packages("caret")
if(!require(rattle.data)) install.packages("rattle.data")
if(!require(tidyverse)) install.packages("tidyverse")
if(!require(ipred)) install.packages("ipred")
if(!require(Metrics)) install.packages("Metrics")
library(rpart)
library(rpart.plot)
library(rattle.data)
library(tidyverse)
library(caret)
library(ipred)
library(Metrics)
set.seed(500)
data <- weather
# creating train and test data
index <- createDataPartition(data$RainTomorrow, p = .6, list = FALSE)
train_data <- data[ index, ]
test_data <- data[-index, ]
## b ukol -> error in for each loop
nBoot = 10 #nr bootstrap samples
#create empty matrix [nr test data x nr bootstrap samples]to store bootstrap predictions
pred = matrix(data = NA, nrow = nrow(test_data), ncol = nBoot)
train_controls = rpart.control(minsplit = 6, maxdepth = 3)
for(b in 1:nBoot){
  # create bootstrap sample
  index.boot = sample(x = nrow(train_data), replace = T, size = nrow(train_data))
  data_boot = train_data[index.boot, ]
  # fit data for the bootstrap sample
  boot.model = rpart(RainTomorrow ~ .,
                     data = data_boot,
                     method = "anova",
                     control = train_controls)
  # rpart.plot(boot.model)
  # save prediction for bootstrap
  pred[, b] = predict(boot.model, newdata = test_data)
}
#calculate prediction as mean of bootstrap predictions
pred.bagged = rowMeans(pred)
print(rmse(actual = test_data$RainTomorrow, predicted = pred.bagged))
but running this code gives me back a warning message:
In Ops.factor(actual, predicted) : ‘-’ not meaningful for factors
and I cannot for the life of me figure out the reason (I'm a newbie in machine learning).
EDIT: still looking for a valid answer
The warning occurs because you are trying to calculate RMSE on a factor:
pred.bagged = rowMeans(pred)
class(pred.bagged)
[1] "numeric"
class(test_data$RainTomorrow)
[1] "factor"
you can convert the factor to numeric, which is what rpart did when you specified method = "anova", and calculate RMSE:
rmse(actual = as.numeric(test_data$RainTomorrow), predicted = pred.bagged)
RMSE is normally used for regression and doesn't make much sense for a classification model. For classification you would use method = "class", and for evaluation use accuracy, F1, or Cohen's kappa; you can see an example below with confusionMatrix from caret:
for(b in 1:nBoot){
  # create bootstrap sample
  index.boot = sample(x = nrow(train_data), replace = T)
  data_boot = train_data[index.boot, ]
  # fit data for the bootstrap sample
  boot.model = rpart(RainTomorrow ~ .,
                     data = data_boot,
                     method = "class",
                     control = train_controls)
  # rpart.plot(boot.model)
  # save prediction for bootstrap
  pred[, b] = as.character(predict(boot.model, newdata = test_data, type = "class"))
}
# very crude way to get the majority vote
pred.bagged = apply(pred, 1, function(i){
  names(sort(table(factor(i, levels = c("No", "Yes")))))[2]
})
# convert to a factor, same levels as test_data$RainTomorrow
pred.bagged = factor(pred.bagged, levels = c("No", "Yes"))
confusionMatrix(pred.bagged, test_data$RainTomorrow)
Confusion Matrix and Statistics
Reference
Prediction No Yes
No 120 0
Yes 0 26
Accuracy : 1
95% CI : (0.9751, 1)
No Information Rate : 0.8219
P-Value [Acc > NIR] : 3.672e-13
Kappa : 1
Mcnemar's Test P-Value : NA
Sensitivity : 1.0000
Specificity : 1.0000
Pos Pred Value : 1.0000
Neg Pred Value : 1.0000
Prevalence : 0.8219
Detection Rate : 0.8219
Detection Prevalence : 0.8219
Balanced Accuracy : 1.0000
'Positive' Class : No
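As an aside, since ipred is already loaded, the same bag-of-trees idea can be done in one call; this is only a sketch, not part of the original answer (coob = TRUE adds an out-of-bag error estimate):
bag.model <- bagging(RainTomorrow ~ ., data = train_data, nbagg = 10,
                     coob = TRUE,
                     control = rpart.control(minsplit = 6, maxdepth = 3))
bag.pred <- predict(bag.model, newdata = test_data, type = "class")
confusionMatrix(bag.pred, test_data$RainTomorrow)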

Confidence interval for quantile regression using bootstrap

I am trying to get the five types of bootstrap intervals for linear and quantile regression. I was able to bootstrap and find the 5 bootstrap intervals (quantile, normal, basic, studentized, and BCa) for the linear regression using Boot from car and boot.ci from boot. When I tried to do the same for quantile regression using rq from quantreg, it throws an error. Here is the sample code.
Creating the model
library(car)
library(quantreg)
library(boot)
newdata = Prestige[,c(1:4)]
education.c = scale(newdata$education, center=TRUE, scale=FALSE)
prestige.c = scale(newdata$prestige, center=TRUE, scale=FALSE)
women.c = scale(newdata$women, center=TRUE, scale=FALSE)
new.c.vars = cbind(education.c, prestige.c, women.c)
newdata = cbind(newdata, new.c.vars)
names(newdata)[5:7] = c("education.c", "prestige.c", "women.c" )
mod1 = lm(income ~ education.c + prestige.c + women.c, data=newdata)
mod2 = rq(income ~ education.c + prestige.c + women.c, data=newdata)
Booting linear and quantile regression
mod1.boot <- Boot(mod1, R=999)
boot.ci(mod1.boot, level = .95, type = "all")
dat2 <- newdata[5:7]
mod2.boot <- boot.rq(cbind(1,dat2),newdata$income,tau=0.5, R=10000)
boot.ci(mod2.boot, level = .95, type = "all")
Error in if (ncol(boot.out$t) < max(index)) { :
argument is of length zero
1) Why does boot.ci not work for quantile regression?
2) Using this solution I got from Stack Exchange, I was able to find the quantile (percentile) CI.
Solution for the quantile (percentile) CI for rq:
t(apply(mod2.boot$B, 2, quantile, c(0.025, 0.975)))
How do I obtain the other bootstrap CIs (normal, basic, studentized, BCa)?
3) Also, my boot.ci command for linear regression produces this warning:
Warning message:
In sqrt(tv[, 2L]) : NaNs produced
What does this signify?
Using summary.rq you can calculate bootstrap standard errors of the model coefficients.
Five bootstrap methods (bsmethods) are available (see ?boot.rq).
summary(mod2, se = "boot", bsmethod= "xy")
# Call: rq(formula = income ~ education.c + prestige.c + women.c, data = newdata)
#
# tau: [1] 0.5
#
# Coefficients:
# Value Std. Error t value Pr(>|t|)
# (Intercept) 6542.83599 139.54002 46.88860 0.00000
# education.c 291.57468 117.03314 2.49139 0.01440
# prestige.c 89.68050 22.03406 4.07009 0.00010
# women.c -48.94856 5.79470 -8.44712 0.00000
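The other bsmethods listed in ?boot.rq can be swapped in the same way; for example (a sketch, with R controlling the number of bootstrap replications):
summary(mod2, se = "boot", bsmethod = "pwy", R = 2000)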
To calculate bootstrap confidence intervals, you can use the following trick:
mod1.boot <- Boot(mod1, R=999)
set.seed(1234)
boot.ci(mod1.boot, level = .95, type = "all")
dat2 <- newdata[5:7]
set.seed(1234)
mod2.boot <- boot.rq(cbind(1,dat2),newdata$income,tau=0.5, R=10000)
# Create an object with the same structure of mod1.boot
# but with boostrap replicates given by boot.rq
mod3.boot <- mod1.boot
mod3.boot$R <- 10000
mod3.boot$t0 <- coef(mod2)
mod3.boot$t <- mod2.boot$B
boot.ci(mod3.boot, level = .95, type = "all")
# BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
# Based on 10000 bootstrap replicates
#
# CALL :
# boot.ci(boot.out = mod3.boot, type = "all", level = 0.95)
#
# Intervals :
# Level Normal Basic Studentized
# 95% (6293, 6838 ) (6313, 6827 ) (6289, 6941 )
#
# Level Percentile BCa
# 95% (6258, 6772 ) (6275, 6801 )
Thanks to everyone who helped. I was able to figure out the solution myself. I ran a loop calculating the coefficients of the quantile regression and then used boot and boot.ci respectively. Here is the code (booting commands only; model creation as in the question):
mod3 <- formula(income ~ education.c + prestige.c + women.c)
coefsf <- function(data, ind){
  rq(mod3, data = newdata[ind, ])$coef
}
boot.mod <- boot(newdata, coefsf, R = 10000)
myboot.ci <- list()
for (i in 1:ncol(boot.mod$t)){
  myboot.ci[[i]] <- boot.ci(boot.mod, level = .95,
                            type = c("norm", "basic", "perc", "bca"), index = i)
}
I did this as I wanted CI on all variables not just the intercept.
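For convenience, the stored results can be named by coefficient and a single interval type pulled out (a small sketch using the objects created above):
names(myboot.ci) <- names(coef(mod2))
myboot.ci[["education.c"]]$bca  # BCa interval for the education.c coefficient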

R - mlr positive class changed

I'm using the R mlr package because it lets me use multiple classification methods and tune their parameters through the same interface.
But it changed my positive class.
In my dataset, I need to predict "HasWriteOff", which has the value "1" or "2". "1" is the majority class, far more frequent than "2", which means the classes are imbalanced.
I set the positive class to "2" in the makeClassifTask function, but after prediction, when I checked the confusion matrix, it showed the positive class as "1".
Here is my code:
I set the positive class here
train_task <- makeClassifTask(data=data.frame(train_data), target = "HasWriteOff", positive = "2")
test_task <- makeClassifTask(data=data.frame(test_data), target = "HasWriteOff", positive = "2")
train and predict with XGBoost
set.seed(410)
getParamSet("classif.xgboost")
xg_learner <- makeLearner("classif.xgboost", predict.type = "response")
xg_learner$par.vals <- list(
  objective = "binary:logistic",
  eval_metric = "error",
  nrounds = 250
)
xg_param <- makeParamSet(
  makeIntegerParam("nrounds", lower = 200, upper = 600),
  makeIntegerParam("max_depth", lower = 3, upper = 20),
  makeNumericParam("lambda", lower = 0.55, upper = 0.60),
  makeNumericParam("eta", lower = 0.001, upper = 0.5),
  makeNumericParam("subsample", lower = 0.10, upper = 0.80),
  makeNumericParam("min_child_weight", lower = 1, upper = 5),
  makeNumericParam("colsample_bytree", lower = 0.2, upper = 0.8)
)
rancontrol <- makeTuneControlRandom(maxit = 100L)
cv_xg <- makeResampleDesc("CV",iters = 3L)
xg_tune <- tuneParams(learner = xg_learner, task = train_task, resampling = cv_xg,measures = acc,par.set = xg_param, control = rancontrol)
xg_final <- setHyperPars(learner = xg_learner, par.vals = xg_tune$x)
xgmodel <- mlr::train(xg_final, train_task)
xgpredict <- predict(xgmodel, test_task)
Check Confusion Matrix here
nb_prediction <- xgpredict$data$response
dCM <- confusionMatrix(test_data$HasWriteOff, nb_prediction)
dCM
Output
Accuracy : 0.9954
95% CI : (0.9916, 0.9978)
No Information Rate : 0.9784
P-Value [Acc > NIR] : 5.136e-11
Kappa : 0.8913
Mcnemar's Test P-Value : 1
Sensitivity : 0.9977
Specificity : 0.8936
Pos Pred Value : 0.9977
Neg Pred Value : 0.8936
Prevalence : 0.9784
Detection Rate : 0.9761
Detection Prevalence : 0.9784
Balanced Accuracy : 0.9456
'Positive' Class : 1
As you can see here 'Positive' Class is 1.
I have checked the other methods I'm using here; they don't have a 'positive' parameter to set.
Do you know how I can really set the positive class to the minority class "2"?
I'm trying to see whether, by setting the minority class as the positive class, the specificity will be higher.
Oh, I just found that this call also changes the positive class: dCM <- confusionMatrix(test_data$HasWriteOff, nb_prediction, positive = "2")
Yesterday I didn't check the confusionMatrix function because I thought the positive class should be defined by the methods used before predict.
However, I just checked the R documentation for confusionMatrix; for the parameter positive, it says:
If there are only two factor levels, the first level will be used as the "positive" result
So yesterday it simply chose the first level (the majority class "1") regardless of whether I had defined the positive class before.
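For completeness, the same check can be done on the mlr side, since the tpr/tnr measures respect the positive class set on the task (a sketch using the xgpredict object from above; older mlr versions use getConfMatrix instead of calculateConfusionMatrix):
performance(xgpredict, measures = list(acc, tpr, tnr))
calculateConfusionMatrix(xgpredict)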
