Custom caret metric for "precision at k" by groups - r

What is the proper way to create a custom metric function to use in caret::train that contains an argument and can summarize subsets of the training data?
Imagine we have credit score and loan data and would like to train a model to predict the top lending prospects within different categories of loans (home mortgage, car loan, student loan, etc.) We have a limited amount of money and want to diversify our portfolio, so we want to identify a handful of low-risk loans to make in each category.
As an example, we can use the GermanCredit data from the caret package. In this training data, each loan is classified as either "Good" or "Bad". After rearranging some columns, we have a Purpose column that identifies the type of loan requested.
## Load packages
library(data.table); library(caret); library(xgboost); library(Metrics)
## Load data and convert dependent variable (Class) to factor
data(GermanCredit)
setDT(GermanCredit, keep.rownames=TRUE)
GermanCredit[, `:=`(rn=as.numeric(rn), Class=factor(Class, levels=c("Good", "Bad")))]
## Now we need to collapse a few columns...
## - Columns containing purpose for getting loan
colsPurpose <- names(GermanCredit)[names(GermanCredit) %like% "Purpose."]
## - Replace purpose columns with a single factor column
GermanCredit[, Purpose:=melt(GermanCredit, id.var="rn", measure.vars=colsPurpose)[
value==1][order(rn), factor(sub("Purpose.", "", variable))]]
## - Drop purpose columns
GermanCredit[, (colsPurpose):=NULL]
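As a quick check (my addition, not in the original post), the resulting loan categories and class counts can be inspected with data.table:
## Number of loans in each category (Purpose), split by Good/Bad
GermanCredit[, .N, by=.(Purpose, Class)][order(Purpose, Class)]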
Now we need to create the custom metric function. Something like precision at k (where k is the number of loans we'd like to make in each category) averaged over groups seems appropriate, but I am open to suggestions. In any case, the function should look something like this:
twoClassGroup <- function (data, lev=NULL, model=NULL, k, ...) {
  if(length(levels(data$obs)) > 2)
    stop(paste("Your outcome has", length(levels(data$obs)),
               "levels. The twoClassGroup() function isn't appropriate."))
  if (!all(levels(data$pred) == levels(data$obs)))
    stop("levels of observed and predicted data do not match")
  ## [subset the data, probably using data$rowIndex]
  ## [calculate the metrics, based on data$pred and data$obs]
  ## [return a named vector of metrics]
}
Finally, we can train the model.
## Train a model (just an example; may or may not be appropriate for this problem)
creditModel <- train(
  Class ~ . - Purpose, data=GermanCredit, method="xgbTree",
  trControl=trainControl(
    method="cv", number=6, returnResamp="none", summaryFunction=twoClassGroup,
    classProbs=TRUE, allowParallel=TRUE, verboseIter=TRUE),
  tuneGrid=expand.grid(
    nrounds=500, max_depth=6, eta=0.02, gamma=0, colsample_bytree=1, min_child_weight=6),
  metric="someCustomMetric", preProc=c("center", "scale"))
## Add predictions
GermanCredit[, `:=`(pred=predict(creditModel, GermanCredit, type="raw"),
                    prob=predict(creditModel, GermanCredit, type="prob")[[levels(creditModel)[1]]])]
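For context, the end goal of picking the top k prospects in each category could then look something like this (a sketch of mine, assuming k = 10 and the prob column added above):
## The 10 loans with the highest predicted probability of "Good" in each category
topProspects <- GermanCredit[order(-prob)][, head(.SD, 10), by=Purpose]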
Questions
How do I pass the value of k to twoClassGroup from the train call? Adding it within the main function arguments doesn't work, nor does adding it within trControl or tuneGrid.
How do I subset the data within twoClassGroup in order to calculate the model precision for the top k values within each value of Purpose? The data object within the twoClassGroup function is not the same as the one passed to the original train function.

This attempt mostly works, but I'm hoping someone can share a better method. Rather than being passed from train, the dt and k arguments are "hardcoded" as defaults in twoClassGroup. Also, the value from Metrics::mapk seems very low, although the resulting model does appear to pick the best loan prospects.
library(Metrics)
twoClassGroup <- function (data, lev=NULL, model=NULL, dt=GermanCredit, k=10) {
  if(length(levels(data$obs)) > 2)
    stop(paste("Your outcome has", length(levels(data$obs)),
               "levels. The twoClassGroup() function isn't appropriate."))
  if (!all(levels(data$pred) == levels(data$obs)))
    stop("levels of observed and predicted data do not match")
  data <- data.table(data, group=dt[data$rowIndex, Purpose])
  ## You can ignore these extra metrics...
  ## <-----
  sens <- sensitivity(data$pred, data$obs, positive=lev[1])
  spec <- specificity(data$pred, data$obs, positive=lev[1])
  precision <- posPredValue(data$pred, data$obs)
  recall <- sens
  Fbeta <- function(precision, recall, beta=1) {
    val <- (1+beta^2)*(precision*recall)/(precision*beta^2 + recall)
    if(is.nan(val)) val <- 0
    return(val)
  }
  F0.5 <- Fbeta(precision, recall, beta=0.5)
  F1 <- Fbeta(precision, recall, beta=1)
  F2 <- Fbeta(precision, recall, beta=2)
  ## ----->
  ## This is the important one...
  mapk <- data[, .(obs=list(obs), pred=list(pred)), by=group][, mapk(k, obs, pred)]
  return(c(sensitivity=sens, specificity=spec, F0.5=F0.5, F1=F1, F2=F2, mapk=mapk))
}
In the train call from the original post, the value of metric would be "mapk" rather than "someCustomMetric".
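One way to avoid hardcoding dt and k (my suggestion, not something shown in the original answer) is to build the summary function with a closure, since caret only ever calls summaryFunction(data, lev, model) and passes nothing extra through:
## Hypothetical sketch: a function factory that fixes dt and k up front
makeTwoClassGroup <- function(dt, k) {
  function(data, lev=NULL, model=NULL, ...) {
    data <- data.table(data, group=dt[data$rowIndex, Purpose])
    mapkVal <- data[, .(obs=list(obs), pred=list(pred)), by=group][, mapk(k, obs, pred)]
    c(mapk=mapkVal)
  }
}
## then, in trainControl: summaryFunction=makeTwoClassGroup(GermanCredit, k=10)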

Related

How to Create a loop (when levels do not overlap the reference)

I have written some code in R. This code takes some data and splits it into a training set and a test set. Then, I fit a "survival random forest" model on the training set and use the model to predict observations in the test set.
Due to the type of problem I am dealing with ("survival analysis"), a confusion matrix has to be made for each "unique time" (in "unique.death.times"). For each of these confusion matrices, I am interested in the corresponding "sensitivity" value (e.g. sensitivity_1001, sensitivity_2005, etc.). I am trying to get all of these sensitivity values: I would like to make a plot of them (vs. unique death times) and determine the average sensitivity value.
In order to do this, I need to repeatedly calculate the sensitivity for each time point in "unique.death.times". I tried doing this manually and it is taking a long time.
Could someone please show me how to do this with a "loop"?
I have posted my code below:
#load libraries
library(survival)
library(data.table)
library(pec)
library(ranger)
library(caret)
#load data
data(cost)
#split data into train and test
ind <- sample(1:nrow(cost),round(nrow(cost) * 0.7,0))
cost_train <- cost[ind,]
cost_test <- cost[-ind,]
#fit survival random forest model
ranger_fit <- ranger(Surv(time, status) ~ .,
                     data = cost_train,
                     mtry = 3,
                     verbose = TRUE,
                     write.forest = TRUE,
                     num.trees = 1000,
                     importance = 'permutation')
#optional: plot training results
plot(ranger_fit$unique.death.times, ranger_fit$survival[1,], type = 'l', col = 'red') # for first observation
lines(ranger_fit$unique.death.times, ranger_fit$survival[21,], type = 'l', col = 'blue') # for twenty first observation
#predict observations test set using the survival random forest model
ranger_preds <- predict(ranger_fit, cost_test, type = 'response')$survival
ranger_preds <- data.table(ranger_preds)
colnames(ranger_preds) <- as.character(ranger_fit$unique.death.times)
From here, another user (Justin Singh) from a previous post (R: how to repeatedly "loop" the results from a function?) suggested how to create a loop:
sensitivity <- list()
for (time in names(ranger_preds)) {
  prediction <- ranger_preds[which(names(ranger_preds) == time)] > 0.5
  real <- cost_test$time >= as.numeric(time)
  confusion <- confusionMatrix(as.factor(prediction), as.factor(real), positive = 'TRUE')
  sensitivity[[time]] <- confusion$byClass[1]
}
But due to some of the observations used in this loop, I get the following error:
Error in confusionMatrix.default(as.factor(prediction), as.factor(real), :
The data must contain some levels that overlap the reference.
Does anyone know how to fix this?
Thanks
For some time points, prediction and/or real contain only one unique value, so the two factors end up with non-overlapping levels. Make sure the levels of the factors are the same by setting them explicitly:
sapply(names(ranger_preds), function(x) {
  prediction <- factor(ranger_preds[[x]] > 0.5, levels = c(TRUE, FALSE))
  real <- factor(cost_test$time >= as.numeric(x), levels = c(TRUE, FALSE))
  confusion <- caret::confusionMatrix(prediction, real, positive = 'TRUE')
  confusion$byClass[1]
}, USE.NAMES = FALSE) -> result
result
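Since the question also asks for a plot against the unique death times and an average, a minimal follow-up (my sketch, using result from above) could be:
## Plot sensitivity versus unique death time and compute the mean
plot(ranger_fit$unique.death.times, result, type = "l",
     xlab = "unique death time", ylab = "sensitivity")
mean(result, na.rm = TRUE)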

How to predict in kknn function?

I am trying to use kknn plus a loop to create leave-one-out cross-validation for a model, and to compare that with train.kknn.
I have split the data into two parts: training (80% of the data) and test (20% of the data). In the training data, I exclude one point at a time in the loop to manually create LOOCV.
I think something goes wrong in predict(knn.fit, data.test). I have tried to find out how to predict with kknn through the package documentation and online, but all the examples show "summary(model)" and "table(validation...)" rather than prediction on a separate test dataset. The code predict(model, dataset) works successfully with train.kknn, so I thought I could use similar arguments with kknn.
I am not sure if there is such a prediction function in kknn. If so, what arguments should I give?
I look forward to your suggestions. Thank you.
library(kknn)
for (i in 1:nrow(data.train)) {
  train.data <- data.train[-i, ]
  validation.data <- data.train[i, ]
  knn.fit <- kknn(as.factor(R1)~., train.data, validation.data, k = 40,
                  kernel = "rectangular", scale = TRUE)
  # train.data + validation.data is the 80% data I split.
}
pred.knn <- predict(knn.fit, data.test) # data.test is 20% data.
Here is the error message:
Error in switch(type, raw = object$fit, prob = object$prob,
  stop("invalid type for prediction")) :
  EXPR must be a length 1 vector
Actually, I am trying to compare train.kknn and kknn + loop to compare the results of leave-one-out CV. I have two more questions:
1) In kknn: is it possible to use another set of data as test data to see the knn.fit prediction?
2) In train.kknn: I split the data, use 80% of the whole data for training, and intend to use the remaining 20% for prediction. Is that a correct and common practice? Or should I just use the original data (the whole dataset) for train.kknn, and create a loop with data[-i,] for training and data[i,] for validation in kknn, so they will be counterparts?
I find that if I use the training data in train.kknn and then predict on the test dataset, the best k and kernel are selected and directly used to generate predicted values for the test dataset. In contrast, if I use the kknn function and build a loop over different k values, the model generates the corresponding predictions for the test dataset each time the k value changes. Finally, with kknn + loop, the best k is selected based on the best actual prediction accuracy on the test data. In short, the best k that train.kknn selects may not work best on the test data.
Thank you.
For objects returned by kknn, predict gives the predicted value or the predicted probabilities of R1 for the single row contained in validation.data:
predict(knn.fit)
predict(knn.fit, type="prob")
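As to question 1: kknn has no separate predict-on-new-data step; it fits and predicts in one call, so you can pass the test set as the test argument (a sketch, assuming data.train and data.test as in the question):
## Fit on the training data and predict data.test directly
knn.fit.test <- kknn(as.factor(R1)~., train = data.train, test = data.test,
                     k = 40, kernel = "rectangular", scale = TRUE)
head(predict(knn.fit.test))   # predicted classes for the rows of data.test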
The predict command also works on objects returned by train.kknn.
For example:
train.kknn.fit <- train.kknn(as.factor(R1)~., data.train, ks = 10,
                             kernel = "rectangular", scale = TRUE)
class(train.kknn.fit)
# [1] "train.kknn" "kknn"
pred.train.kknn <- predict(train.kknn.fit, data.test)
table(pred.train.kknn, as.factor(data.test$R1))
The train.kknn command implements a leave-one-out method very close to the loop developed by @vcai01. See the following example:
set.seed(43210)
n <- 500
data.train <- data.frame(R1=rbinom(n, 1, 0.5), matrix(rnorm(n*10), ncol=10))

library(kknn)
pred.kknn <- array(0, nrow(data.train))
for (i in 1:nrow(data.train)) {
  train.data <- data.train[-i, ]
  validation.data <- data.train[i, ]
  knn.fit <- kknn(as.factor(R1)~., train.data, validation.data, k = 40,
                  kernel = "rectangular", scale = TRUE)
  pred.kknn[i] <- predict(knn.fit)
}

knn.fit <- train.kknn(as.factor(R1)~., data.train, ks = 40,
                      kernel = "rectangular", scale = TRUE)
pred.train.kknn <- predict(knn.fit, data.train)
table(pred.train.kknn, pred.kknn)

#                 pred.kknn
# pred.train.kknn   1   2
#               0 374  14
#               1   9 103

Excluding ID field when fitting model in R

I have a simple random forest model I have created and tested in R. For now I have excluded an internal company ID from my training/testing data frames. Is there a way in R to include this column in my data and have the training/execution of my model ignore the field?
I obviously would not want the model to try to incorporate it as a variable, but when exporting the data with a column of predicted outcomes added, I would need that internal ID to tie back to other customer data so I know how each customer has been categorized.
I am just using the out-of-the-box random forest function from the randomForest library.
#divide data into training and test sets
set.seed(3)
id<-sample(2,nrow(Churn_Model_Data_v2),prob=c(0.7,0.3),replace = TRUE)
churn_train<-Churn_Model_Data_v2[id==1,]
churn_test<-Churn_Model_Data_v2[id==2,]
#changes Churn data 1/2 to a factor for model
Churn_Model_Data_v2$`Churn` <- as.factor(Churn_Model_Data_v2$`Churn`)
churn_train$`Churn` <- as.factor(churn_train$`Churn`)
#churn_test$`Churn` <- as.factor(churn_test$`Churn`)
bestmtry <- tuneRF(churn_train, churn_train$`Churn`, stepFactor = 1.2,
                   improve = 0.01, trace = T, plot = T)
#creates model based on training data, views model
churn_forest <- randomForest(`Churn`~. , data= churn_train )
churn_forest
#shows us what variables are most important
importance(churn_forest)
varImpPlot(churn_forest)
#predicts churn diagnosis on test data
predict_churn <- predict(churn_forest, newdata = churn_test, type="class")
predict_churn
A simple example of excluding a particular column or set of columns is as follows:
library(MASS)
temp<-petrol
randomForest(No ~ .,data = temp[, !(colnames(temp) %in% c("SG"))]) # One Way
randomForest(No ~ .-SG,data = temp) #Another way with similar result
This method of exclusion works across many other functions/algorithms in R too.
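To also keep the internal ID for tying predictions back to customers, one option (a sketch of mine; Customer_ID is a hypothetical column name) is to leave it out of the formula but carry it along into the exported output:
## Hypothetical: fit without the ID, then attach it back next to the predictions
churn_forest <- randomForest(Churn ~ . - Customer_ID, data = churn_train)
predict_churn <- predict(churn_forest, newdata = churn_test, type = "class")
export <- data.frame(Customer_ID = churn_test$Customer_ID, Predicted_Churn = predict_churn)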

PLS in R: Predicting new observations returns Fitted values instead

In the past few days I have developed multiple PLS models in R for spectral data (wavebands as explanatory variables) and various vegetation parameters (as individual response variables). In total, the dataset comprises 56 observations. The first 28 (the training set) were used for model calibration; now all I want to do is predict the response values for the remaining 28 observations in the test set. For some reason, however, R keeps returning the fitted values of the calibration set for a given number of components rather than predictions for the independent test set. Here is what the model looks like, in short.
# first simulate some data
set.seed(123)
bands=101
data <- data.frame(matrix(runif(56*bands),ncol=bands))
colnames(data) <- paste0(1:bands)
data$height <- rpois(56,10)
data$fbm <- rpois(56,10)
data$nitrogen <- rpois(56,10)
data$carbon <- rpois(56,10)
data$chl <- rpois(56,10)
data$ID <- 1:56
data <- as.data.frame(data)
caldata <- data[1:28,] # define model training set
valdata <- data[29:56,] # define model testing set
# define explanatory variables (x)
spectra <- caldata[,1:101]
# build PLS model using training data only
library(pls)
refl.pls <- plsr(height ~ spectra, data = caldata, ncomp = 10,
                 validation = "LOO", jackknife = TRUE)
It was then identified that a model with 3 components yielded the best performance without over-fitting. Hence, the following command was used to predict the values of the 28 observations in the test set using the calibrated PLS model with 3 components:
predict(refl.pls, ncomp = 3, newdata = valdata)
Sensible as the output may seem, I soon discovered that all this piece of code generates is the fitted values of the PLS model for the calibration/training data, rather than predictions. I discovered this because the code below, in which newdata = is omitted, yields identical results.
predict(refl.pls, ncomp = 3)
Surely something must be going wrong, although I cannot figure out what specifically. Is there someone out there who can, and is willing to, help me move in the right direction?
I think the problem is with the nature of the input data. Looking at ?plsr and str(yarn) that goes with the example, plsr requires a very specific data frame that I find tricky to work with. The input data frame should have a matrix as one of its elements (in your case, the spectral data). I think the following works correctly (note I changed the size of the training set so that it wasn't half the original data, for troubleshooting):
library("pls")
set.seed(123)
bands=101
spectra = matrix(runif(56*bands),ncol=bands)
DF <- data.frame(spectra = I(spectra),
                 height = rpois(56, 10),
                 fbm = rpois(56, 10),
                 nitrogen = rpois(56, 10),
                 carbon = rpois(56, 10),
                 chl = rpois(56, 10),
                 ID = 1:56)
class(DF$spectra) <- "matrix" # just to be certain, it was "AsIs"
str(DF)

DF$train <- rep(FALSE, 56)
DF$train[1:20] <- TRUE

refl.pls <- plsr(height ~ spectra, data = DF, ncomp = 10,
                 validation = "LOO", jackknife = TRUE, subset = train)
res <- predict(refl.pls, ncomp = 3, newdata = DF[!DF$train, ])
Note that I got the spectral data into the data frame as a matrix by protecting it with I which equates to AsIs. There might be a more standard way to do this, but it works. As I said, to me a matrix inside of a data frame is not completely intuitive or easy to grok.
As to why your version didn't work quite right, I think the best explanation is that everything needs to be in the one data frame you pass to plsr for the data sources to be completely unambiguous.
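A quick check of the result (my addition, using the objects from the code above) shows hold-out predictions next to the observed responses, confirming res is no longer just the fitted values:
## 36 hold-out rows (56 - 20); dims are (observations, responses, ncomp)
dim(res)
head(cbind(predicted = drop(res), observed = DF$height[!DF$train]))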

Setting random seeds does not affect classification methods C5.0 and ctree

I want to compare two different classification methods, namely ctree and C5.0 from the packages party and C50 respectively. The comparison is to test their sensitivity to the initial starting points. The test should be carried out 30 times; each time the number of wrongly classified items is calculated and stored in a vector, and then, using a t-test, I hope to see whether they are really different or not.
library("foreign"); # for read.arff
library("party") # for ctree
library("C50") # for C5.0
trainTestSplit <- function(data, trainPercentage){
newData <- list();
all <- nrow(data);
splitPoint <- floor(all * trainPercentage);
newData$train <- data[1:splitPoint, ];
newData$test <- data[splitPoint:all, ];
return (newData);
}
ctreeErrorCount <- function(st,ss){
set.seed(ss);
model <- ctree(Class ~ ., data=st$train);
class <- st$test$Class;
st$test$Class <- NULL;
pre = predict(model, newdata=st$test, type="response");
errors <- length(which(class != pre)); # counting number of miss classified items
return(errors);
}
C50ErrorCount <- function(st,ss){
model <- C5.0(Class ~ ., data=st$train, seed=ss);
class <- st$test$Class;
pre = predict(model, newdata=st$test, type="class");
errors <- length(which(class != pre)); # counting number of miss classified items
return(errors);
}
compare <- function(n = 30){
data <- read.arff(file.choose());
set.seed(100);
errors = list(ctree = c(), c50 = c());
seeds <- floor(abs(rnorm(n) * 10000));
for(i in 1:n){
splitData <- trainTestSplit(data, 0.66);
errors$ctree[i] <- ctreeErrorCount(splitData, seeds[i]);
errors$c50[i] <- C50ErrorCount(splitData, seeds[i]);
}
cat("\n\n");
cat("============= ctree Vs C5.0 =================\n");
cat(paste(errors$ctree, " ", errors$c50, "\n"))
tt <- t.test(errors$ctree, errors$c50);
print(tt);
}
The program shown is supposed to do the comparison, but because the number of errors does not change in the vectors, the t.test function produces an error. I used iris from within R (but changing class to Class) and the Winchester breast cancer data, which can be downloaded here, to test it, but any data can be used as long as it has a Class attribute.
The problem I run into is that the results of both methods remain constant and do not change while I change the random seed. Theoretically, as described in their documentation, both functions use random seeds: ctree uses set.seed(x) while C5.0 uses an argument called seed. Unfortunately, I cannot see any effect.
Could you please tell me how to control the initialization of these functions?
ctree only depends on a random seed in the case where you configure it to use a random selection of input variables (i.e. mtry > 0 within ctree_control). See http://cran.r-project.org/web/packages/party/party.pdf (p. 11)
Regarding C5.0 trees, the seed is used this way:
ctrl = C5.0Control(sample=0.5, seed=ss);
model <- C5.0(Class ~ ., data=st$train, control = ctrl);
Notice that the seed is used to select a sample of the data, not within the algorithm itself. See http://cran.r-project.org/web/packages/C50/C50.pdf (p. 5)
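By analogy, if you want ctree itself to depend on the seed, one option (a sketch, assuming random input selection is acceptable for your comparison; mtry = 3 is an arbitrary choice) is to enable random variable selection inside ctreeErrorCount:
## Hypothetical change to ctreeErrorCount: sample input variables at each split,
## so the seed set by set.seed(ss) actually influences the fitted tree
set.seed(ss);
model <- ctree(Class ~ ., data=st$train,
               controls = ctree_control(mtry = 3));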
